Test Report: KVM_Linux_crio 19883

                    
121f0c56d9928f50a4014e71c8f2076bb23ebfa1:2024-10-30:36875

Test fail (31/320)

Order  Failed test  Duration (s)
36 TestAddons/parallel/Ingress 154.38
38 TestAddons/parallel/MetricsServer 364.17
47 TestAddons/StoppedEnableDisable 154.38
166 TestMultiControlPlane/serial/StopSecondaryNode 141.66
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 5.64
168 TestMultiControlPlane/serial/RestartSecondaryNode 6.29
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 6.29
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 277.41
173 TestMultiControlPlane/serial/StopCluster 158.22
233 TestMultiNode/serial/RestartKeepsNodes 328.84
235 TestMultiNode/serial/StopMultiNode 145.22
242 TestPreload 240.15
250 TestKubernetesUpgrade 432.16
322 TestStartStop/group/old-k8s-version/serial/FirstStart 294.96
349 TestStartStop/group/no-preload/serial/Stop 139.13
351 TestStartStop/group/default-k8s-diff-port/serial/Stop 138.93
353 TestStartStop/group/embed-certs/serial/Stop 139.13
354 TestStartStop/group/old-k8s-version/serial/DeployApp 0.47
355 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 119.17
356 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
357 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
359 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
364 TestStartStop/group/old-k8s-version/serial/SecondStart 724.85
365 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.32
366 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.3
367 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.27
368 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.51
369 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 466.29
370 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 373.22
371 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 327.02
372 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 118.97
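
For reference, the first failure detailed below (TestAddons/parallel/Ingress) reduces to the following command sequence, reconstructed from its own log as a minimal manual-reproduction sketch. It assumes the same profile name addons-819803 and the test's testdata manifests; a curl exit status of 28 (surfaced as ssh status 28 in the stderr below) typically indicates a timeout.

	# wait for the ingress-nginx controller pod to become Ready (90s budget)
	kubectl --context addons-819803 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
	# deploy the test Ingress plus the backing nginx pod and service
	kubectl --context addons-819803 replace --force -f testdata/nginx-ingress-v1.yaml
	kubectl --context addons-819803 replace --force -f testdata/nginx-pod-svc.yaml
	# probe the ingress from inside the VM; this is the step that timed out in this run
	out/minikube-linux-amd64 -p addons-819803 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"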
TestAddons/parallel/Ingress (154.38s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-819803 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-819803 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-819803 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [591e33e1-09d2-4f5f-a6aa-40bfd9a7ced3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [591e33e1-09d2-4f5f-a6aa-40bfd9a7ced3] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.00547001s
I1030 18:26:09.617189  389144 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-819803 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-819803 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.331621046s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-819803 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-819803 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.211
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-819803 -n addons-819803
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-819803 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-819803 logs -n 25: (1.272039012s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 30 Oct 24 18:21 UTC | 30 Oct 24 18:21 UTC |
	| delete  | -p download-only-293078                                                                     | download-only-293078 | jenkins | v1.34.0 | 30 Oct 24 18:21 UTC | 30 Oct 24 18:21 UTC |
	| delete  | -p download-only-765166                                                                     | download-only-765166 | jenkins | v1.34.0 | 30 Oct 24 18:21 UTC | 30 Oct 24 18:21 UTC |
	| delete  | -p download-only-293078                                                                     | download-only-293078 | jenkins | v1.34.0 | 30 Oct 24 18:21 UTC | 30 Oct 24 18:21 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-605542 | jenkins | v1.34.0 | 30 Oct 24 18:21 UTC |                     |
	|         | binary-mirror-605542                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:43099                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-605542                                                                     | binary-mirror-605542 | jenkins | v1.34.0 | 30 Oct 24 18:21 UTC | 30 Oct 24 18:21 UTC |
	| addons  | disable dashboard -p                                                                        | addons-819803        | jenkins | v1.34.0 | 30 Oct 24 18:21 UTC |                     |
	|         | addons-819803                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-819803        | jenkins | v1.34.0 | 30 Oct 24 18:21 UTC |                     |
	|         | addons-819803                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-819803 --wait=true                                                                | addons-819803        | jenkins | v1.34.0 | 30 Oct 24 18:21 UTC | 30 Oct 24 18:25 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-819803 addons disable                                                                | addons-819803        | jenkins | v1.34.0 | 30 Oct 24 18:25 UTC | 30 Oct 24 18:25 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-819803 addons disable                                                                | addons-819803        | jenkins | v1.34.0 | 30 Oct 24 18:25 UTC | 30 Oct 24 18:25 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-819803        | jenkins | v1.34.0 | 30 Oct 24 18:25 UTC | 30 Oct 24 18:25 UTC |
	|         | -p addons-819803                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-819803 addons                                                                        | addons-819803        | jenkins | v1.34.0 | 30 Oct 24 18:25 UTC | 30 Oct 24 18:25 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-819803 addons                                                                        | addons-819803        | jenkins | v1.34.0 | 30 Oct 24 18:25 UTC | 30 Oct 24 18:25 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-819803 addons disable                                                                | addons-819803        | jenkins | v1.34.0 | 30 Oct 24 18:25 UTC | 30 Oct 24 18:26 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-819803 ip                                                                            | addons-819803        | jenkins | v1.34.0 | 30 Oct 24 18:25 UTC | 30 Oct 24 18:25 UTC |
	| addons  | addons-819803 addons disable                                                                | addons-819803        | jenkins | v1.34.0 | 30 Oct 24 18:25 UTC | 30 Oct 24 18:26 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-819803 addons disable                                                                | addons-819803        | jenkins | v1.34.0 | 30 Oct 24 18:26 UTC | 30 Oct 24 18:26 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-819803 ssh curl -s                                                                   | addons-819803        | jenkins | v1.34.0 | 30 Oct 24 18:26 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ssh     | addons-819803 ssh cat                                                                       | addons-819803        | jenkins | v1.34.0 | 30 Oct 24 18:26 UTC | 30 Oct 24 18:26 UTC |
	|         | /opt/local-path-provisioner/pvc-bc29ddce-63c6-4328-8e8f-fb3484c4de83_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-819803 addons disable                                                                | addons-819803        | jenkins | v1.34.0 | 30 Oct 24 18:26 UTC | 30 Oct 24 18:27 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-819803 addons                                                                        | addons-819803        | jenkins | v1.34.0 | 30 Oct 24 18:26 UTC | 30 Oct 24 18:26 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-819803 addons                                                                        | addons-819803        | jenkins | v1.34.0 | 30 Oct 24 18:26 UTC | 30 Oct 24 18:27 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-819803 addons                                                                        | addons-819803        | jenkins | v1.34.0 | 30 Oct 24 18:27 UTC | 30 Oct 24 18:27 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-819803 ip                                                                            | addons-819803        | jenkins | v1.34.0 | 30 Oct 24 18:28 UTC | 30 Oct 24 18:28 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/30 18:21:46
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1030 18:21:46.377146  389930 out.go:345] Setting OutFile to fd 1 ...
	I1030 18:21:46.377251  389930 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 18:21:46.377259  389930 out.go:358] Setting ErrFile to fd 2...
	I1030 18:21:46.377263  389930 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 18:21:46.377433  389930 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19883-381834/.minikube/bin
	I1030 18:21:46.378040  389930 out.go:352] Setting JSON to false
	I1030 18:21:46.378963  389930 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":7449,"bootTime":1730305057,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1030 18:21:46.379079  389930 start.go:139] virtualization: kvm guest
	I1030 18:21:46.381456  389930 out.go:177] * [addons-819803] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1030 18:21:46.382850  389930 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 18:21:46.382858  389930 notify.go:220] Checking for updates...
	I1030 18:21:46.385485  389930 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 18:21:46.387091  389930 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 18:21:46.388369  389930 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 18:21:46.389574  389930 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1030 18:21:46.390796  389930 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 18:21:46.392083  389930 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 18:21:46.423263  389930 out.go:177] * Using the kvm2 driver based on user configuration
	I1030 18:21:46.424520  389930 start.go:297] selected driver: kvm2
	I1030 18:21:46.424533  389930 start.go:901] validating driver "kvm2" against <nil>
	I1030 18:21:46.424547  389930 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 18:21:46.425307  389930 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 18:21:46.425405  389930 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19883-381834/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1030 18:21:46.439927  389930 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1030 18:21:46.439984  389930 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1030 18:21:46.440231  389930 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 18:21:46.440268  389930 cni.go:84] Creating CNI manager for ""
	I1030 18:21:46.440323  389930 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 18:21:46.440334  389930 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1030 18:21:46.440388  389930 start.go:340] cluster config:
	{Name:addons-819803 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-819803 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 18:21:46.440499  389930 iso.go:125] acquiring lock: {Name:mk4a5115b605a7a5e9069193daa664d721189792 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 18:21:46.442337  389930 out.go:177] * Starting "addons-819803" primary control-plane node in "addons-819803" cluster
	I1030 18:21:46.443613  389930 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 18:21:46.443648  389930 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1030 18:21:46.443660  389930 cache.go:56] Caching tarball of preloaded images
	I1030 18:21:46.443734  389930 preload.go:172] Found /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1030 18:21:46.443745  389930 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1030 18:21:46.444053  389930 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/config.json ...
	I1030 18:21:46.444078  389930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/config.json: {Name:mk55690a6762df711e62dd40075acaa4a8fe5327 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:21:46.444222  389930 start.go:360] acquireMachinesLock for addons-819803: {Name:mk35e25f53fa8cfadb39ca0ecdccfc2b3fbe845b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 18:21:46.444287  389930 start.go:364] duration metric: took 48.42µs to acquireMachinesLock for "addons-819803"
	I1030 18:21:46.444311  389930 start.go:93] Provisioning new machine with config: &{Name:addons-819803 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-819803 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 18:21:46.444381  389930 start.go:125] createHost starting for "" (driver="kvm2")
	I1030 18:21:46.446035  389930 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1030 18:21:46.446177  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:21:46.446229  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:21:46.460089  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39277
	I1030 18:21:46.460588  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:21:46.461261  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:21:46.461290  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:21:46.461637  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:21:46.461807  389930 main.go:141] libmachine: (addons-819803) Calling .GetMachineName
	I1030 18:21:46.462004  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:21:46.462146  389930 start.go:159] libmachine.API.Create for "addons-819803" (driver="kvm2")
	I1030 18:21:46.462174  389930 client.go:168] LocalClient.Create starting
	I1030 18:21:46.462210  389930 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem
	I1030 18:21:46.523366  389930 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem
	I1030 18:21:46.675982  389930 main.go:141] libmachine: Running pre-create checks...
	I1030 18:21:46.676009  389930 main.go:141] libmachine: (addons-819803) Calling .PreCreateCheck
	I1030 18:21:46.676528  389930 main.go:141] libmachine: (addons-819803) Calling .GetConfigRaw
	I1030 18:21:46.677026  389930 main.go:141] libmachine: Creating machine...
	I1030 18:21:46.677042  389930 main.go:141] libmachine: (addons-819803) Calling .Create
	I1030 18:21:46.677217  389930 main.go:141] libmachine: (addons-819803) Creating KVM machine...
	I1030 18:21:46.678312  389930 main.go:141] libmachine: (addons-819803) DBG | found existing default KVM network
	I1030 18:21:46.679140  389930 main.go:141] libmachine: (addons-819803) DBG | I1030 18:21:46.678984  389952 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211f0}
	I1030 18:21:46.679208  389930 main.go:141] libmachine: (addons-819803) DBG | created network xml: 
	I1030 18:21:46.679227  389930 main.go:141] libmachine: (addons-819803) DBG | <network>
	I1030 18:21:46.679237  389930 main.go:141] libmachine: (addons-819803) DBG |   <name>mk-addons-819803</name>
	I1030 18:21:46.679245  389930 main.go:141] libmachine: (addons-819803) DBG |   <dns enable='no'/>
	I1030 18:21:46.679256  389930 main.go:141] libmachine: (addons-819803) DBG |   
	I1030 18:21:46.679265  389930 main.go:141] libmachine: (addons-819803) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1030 18:21:46.679277  389930 main.go:141] libmachine: (addons-819803) DBG |     <dhcp>
	I1030 18:21:46.679286  389930 main.go:141] libmachine: (addons-819803) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1030 18:21:46.679309  389930 main.go:141] libmachine: (addons-819803) DBG |     </dhcp>
	I1030 18:21:46.679321  389930 main.go:141] libmachine: (addons-819803) DBG |   </ip>
	I1030 18:21:46.679327  389930 main.go:141] libmachine: (addons-819803) DBG |   
	I1030 18:21:46.679332  389930 main.go:141] libmachine: (addons-819803) DBG | </network>
	I1030 18:21:46.679338  389930 main.go:141] libmachine: (addons-819803) DBG | 
	I1030 18:21:46.685075  389930 main.go:141] libmachine: (addons-819803) DBG | trying to create private KVM network mk-addons-819803 192.168.39.0/24...
	I1030 18:21:46.748185  389930 main.go:141] libmachine: (addons-819803) DBG | private KVM network mk-addons-819803 192.168.39.0/24 created
	I1030 18:21:46.748222  389930 main.go:141] libmachine: (addons-819803) Setting up store path in /home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803 ...
	I1030 18:21:46.748257  389930 main.go:141] libmachine: (addons-819803) DBG | I1030 18:21:46.748149  389952 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 18:21:46.748282  389930 main.go:141] libmachine: (addons-819803) Building disk image from file:///home/jenkins/minikube-integration/19883-381834/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1030 18:21:46.748314  389930 main.go:141] libmachine: (addons-819803) Downloading /home/jenkins/minikube-integration/19883-381834/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19883-381834/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1030 18:21:47.042122  389930 main.go:141] libmachine: (addons-819803) DBG | I1030 18:21:47.041994  389952 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa...
	I1030 18:21:47.276920  389930 main.go:141] libmachine: (addons-819803) DBG | I1030 18:21:47.276746  389952 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/addons-819803.rawdisk...
	I1030 18:21:47.276957  389930 main.go:141] libmachine: (addons-819803) DBG | Writing magic tar header
	I1030 18:21:47.276966  389930 main.go:141] libmachine: (addons-819803) DBG | Writing SSH key tar header
	I1030 18:21:47.276973  389930 main.go:141] libmachine: (addons-819803) DBG | I1030 18:21:47.276871  389952 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803 ...
	I1030 18:21:47.276986  389930 main.go:141] libmachine: (addons-819803) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803
	I1030 18:21:47.277038  389930 main.go:141] libmachine: (addons-819803) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube/machines
	I1030 18:21:47.277059  389930 main.go:141] libmachine: (addons-819803) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803 (perms=drwx------)
	I1030 18:21:47.277066  389930 main.go:141] libmachine: (addons-819803) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 18:21:47.277077  389930 main.go:141] libmachine: (addons-819803) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834
	I1030 18:21:47.277085  389930 main.go:141] libmachine: (addons-819803) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1030 18:21:47.277111  389930 main.go:141] libmachine: (addons-819803) DBG | Checking permissions on dir: /home/jenkins
	I1030 18:21:47.277121  389930 main.go:141] libmachine: (addons-819803) DBG | Checking permissions on dir: /home
	I1030 18:21:47.277129  389930 main.go:141] libmachine: (addons-819803) DBG | Skipping /home - not owner
	I1030 18:21:47.277159  389930 main.go:141] libmachine: (addons-819803) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube/machines (perms=drwxr-xr-x)
	I1030 18:21:47.277181  389930 main.go:141] libmachine: (addons-819803) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube (perms=drwxr-xr-x)
	I1030 18:21:47.277224  389930 main.go:141] libmachine: (addons-819803) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834 (perms=drwxrwxr-x)
	I1030 18:21:47.277251  389930 main.go:141] libmachine: (addons-819803) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1030 18:21:47.277271  389930 main.go:141] libmachine: (addons-819803) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1030 18:21:47.277290  389930 main.go:141] libmachine: (addons-819803) Creating domain...
	I1030 18:21:47.277973  389930 main.go:141] libmachine: (addons-819803) define libvirt domain using xml: 
	I1030 18:21:47.277991  389930 main.go:141] libmachine: (addons-819803) <domain type='kvm'>
	I1030 18:21:47.278000  389930 main.go:141] libmachine: (addons-819803)   <name>addons-819803</name>
	I1030 18:21:47.278008  389930 main.go:141] libmachine: (addons-819803)   <memory unit='MiB'>4000</memory>
	I1030 18:21:47.278029  389930 main.go:141] libmachine: (addons-819803)   <vcpu>2</vcpu>
	I1030 18:21:47.278040  389930 main.go:141] libmachine: (addons-819803)   <features>
	I1030 18:21:47.278049  389930 main.go:141] libmachine: (addons-819803)     <acpi/>
	I1030 18:21:47.278055  389930 main.go:141] libmachine: (addons-819803)     <apic/>
	I1030 18:21:47.278078  389930 main.go:141] libmachine: (addons-819803)     <pae/>
	I1030 18:21:47.278093  389930 main.go:141] libmachine: (addons-819803)     
	I1030 18:21:47.278126  389930 main.go:141] libmachine: (addons-819803)   </features>
	I1030 18:21:47.278145  389930 main.go:141] libmachine: (addons-819803)   <cpu mode='host-passthrough'>
	I1030 18:21:47.278152  389930 main.go:141] libmachine: (addons-819803)   
	I1030 18:21:47.278171  389930 main.go:141] libmachine: (addons-819803)   </cpu>
	I1030 18:21:47.278180  389930 main.go:141] libmachine: (addons-819803)   <os>
	I1030 18:21:47.278185  389930 main.go:141] libmachine: (addons-819803)     <type>hvm</type>
	I1030 18:21:47.278191  389930 main.go:141] libmachine: (addons-819803)     <boot dev='cdrom'/>
	I1030 18:21:47.278196  389930 main.go:141] libmachine: (addons-819803)     <boot dev='hd'/>
	I1030 18:21:47.278202  389930 main.go:141] libmachine: (addons-819803)     <bootmenu enable='no'/>
	I1030 18:21:47.278206  389930 main.go:141] libmachine: (addons-819803)   </os>
	I1030 18:21:47.278213  389930 main.go:141] libmachine: (addons-819803)   <devices>
	I1030 18:21:47.278223  389930 main.go:141] libmachine: (addons-819803)     <disk type='file' device='cdrom'>
	I1030 18:21:47.278233  389930 main.go:141] libmachine: (addons-819803)       <source file='/home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/boot2docker.iso'/>
	I1030 18:21:47.278242  389930 main.go:141] libmachine: (addons-819803)       <target dev='hdc' bus='scsi'/>
	I1030 18:21:47.278251  389930 main.go:141] libmachine: (addons-819803)       <readonly/>
	I1030 18:21:47.278258  389930 main.go:141] libmachine: (addons-819803)     </disk>
	I1030 18:21:47.278263  389930 main.go:141] libmachine: (addons-819803)     <disk type='file' device='disk'>
	I1030 18:21:47.278271  389930 main.go:141] libmachine: (addons-819803)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1030 18:21:47.278279  389930 main.go:141] libmachine: (addons-819803)       <source file='/home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/addons-819803.rawdisk'/>
	I1030 18:21:47.278286  389930 main.go:141] libmachine: (addons-819803)       <target dev='hda' bus='virtio'/>
	I1030 18:21:47.278306  389930 main.go:141] libmachine: (addons-819803)     </disk>
	I1030 18:21:47.278323  389930 main.go:141] libmachine: (addons-819803)     <interface type='network'>
	I1030 18:21:47.278335  389930 main.go:141] libmachine: (addons-819803)       <source network='mk-addons-819803'/>
	I1030 18:21:47.278346  389930 main.go:141] libmachine: (addons-819803)       <model type='virtio'/>
	I1030 18:21:47.278358  389930 main.go:141] libmachine: (addons-819803)     </interface>
	I1030 18:21:47.278368  389930 main.go:141] libmachine: (addons-819803)     <interface type='network'>
	I1030 18:21:47.278378  389930 main.go:141] libmachine: (addons-819803)       <source network='default'/>
	I1030 18:21:47.278388  389930 main.go:141] libmachine: (addons-819803)       <model type='virtio'/>
	I1030 18:21:47.278401  389930 main.go:141] libmachine: (addons-819803)     </interface>
	I1030 18:21:47.278415  389930 main.go:141] libmachine: (addons-819803)     <serial type='pty'>
	I1030 18:21:47.278427  389930 main.go:141] libmachine: (addons-819803)       <target port='0'/>
	I1030 18:21:47.278437  389930 main.go:141] libmachine: (addons-819803)     </serial>
	I1030 18:21:47.278454  389930 main.go:141] libmachine: (addons-819803)     <console type='pty'>
	I1030 18:21:47.278475  389930 main.go:141] libmachine: (addons-819803)       <target type='serial' port='0'/>
	I1030 18:21:47.278511  389930 main.go:141] libmachine: (addons-819803)     </console>
	I1030 18:21:47.278530  389930 main.go:141] libmachine: (addons-819803)     <rng model='virtio'>
	I1030 18:21:47.278543  389930 main.go:141] libmachine: (addons-819803)       <backend model='random'>/dev/random</backend>
	I1030 18:21:47.278558  389930 main.go:141] libmachine: (addons-819803)     </rng>
	I1030 18:21:47.278568  389930 main.go:141] libmachine: (addons-819803)     
	I1030 18:21:47.278583  389930 main.go:141] libmachine: (addons-819803)     
	I1030 18:21:47.278595  389930 main.go:141] libmachine: (addons-819803)   </devices>
	I1030 18:21:47.278601  389930 main.go:141] libmachine: (addons-819803) </domain>
	I1030 18:21:47.278608  389930 main.go:141] libmachine: (addons-819803) 
	I1030 18:21:47.284437  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:22:34:18 in network default
	I1030 18:21:47.284918  389930 main.go:141] libmachine: (addons-819803) Ensuring networks are active...
	I1030 18:21:47.284933  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:21:47.285572  389930 main.go:141] libmachine: (addons-819803) Ensuring network default is active
	I1030 18:21:47.285846  389930 main.go:141] libmachine: (addons-819803) Ensuring network mk-addons-819803 is active
	I1030 18:21:47.286306  389930 main.go:141] libmachine: (addons-819803) Getting domain xml...
	I1030 18:21:47.286972  389930 main.go:141] libmachine: (addons-819803) Creating domain...
	I1030 18:21:48.671057  389930 main.go:141] libmachine: (addons-819803) Waiting to get IP...
	I1030 18:21:48.671853  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:21:48.672339  389930 main.go:141] libmachine: (addons-819803) DBG | unable to find current IP address of domain addons-819803 in network mk-addons-819803
	I1030 18:21:48.672421  389930 main.go:141] libmachine: (addons-819803) DBG | I1030 18:21:48.672347  389952 retry.go:31] will retry after 291.069623ms: waiting for machine to come up
	I1030 18:21:48.965081  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:21:48.965507  389930 main.go:141] libmachine: (addons-819803) DBG | unable to find current IP address of domain addons-819803 in network mk-addons-819803
	I1030 18:21:48.965537  389930 main.go:141] libmachine: (addons-819803) DBG | I1030 18:21:48.965457  389952 retry.go:31] will retry after 354.585457ms: waiting for machine to come up
	I1030 18:21:49.322206  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:21:49.322606  389930 main.go:141] libmachine: (addons-819803) DBG | unable to find current IP address of domain addons-819803 in network mk-addons-819803
	I1030 18:21:49.322635  389930 main.go:141] libmachine: (addons-819803) DBG | I1030 18:21:49.322558  389952 retry.go:31] will retry after 482.031018ms: waiting for machine to come up
	I1030 18:21:49.805727  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:21:49.806155  389930 main.go:141] libmachine: (addons-819803) DBG | unable to find current IP address of domain addons-819803 in network mk-addons-819803
	I1030 18:21:49.806184  389930 main.go:141] libmachine: (addons-819803) DBG | I1030 18:21:49.806114  389952 retry.go:31] will retry after 603.123075ms: waiting for machine to come up
	I1030 18:21:50.411008  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:21:50.411349  389930 main.go:141] libmachine: (addons-819803) DBG | unable to find current IP address of domain addons-819803 in network mk-addons-819803
	I1030 18:21:50.411371  389930 main.go:141] libmachine: (addons-819803) DBG | I1030 18:21:50.411301  389952 retry.go:31] will retry after 466.752397ms: waiting for machine to come up
	I1030 18:21:50.880001  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:21:50.880397  389930 main.go:141] libmachine: (addons-819803) DBG | unable to find current IP address of domain addons-819803 in network mk-addons-819803
	I1030 18:21:50.880441  389930 main.go:141] libmachine: (addons-819803) DBG | I1030 18:21:50.880351  389952 retry.go:31] will retry after 619.924687ms: waiting for machine to come up
	I1030 18:21:51.501985  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:21:51.502439  389930 main.go:141] libmachine: (addons-819803) DBG | unable to find current IP address of domain addons-819803 in network mk-addons-819803
	I1030 18:21:51.502502  389930 main.go:141] libmachine: (addons-819803) DBG | I1030 18:21:51.502372  389952 retry.go:31] will retry after 1.045044225s: waiting for machine to come up
	I1030 18:21:52.549198  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:21:52.549616  389930 main.go:141] libmachine: (addons-819803) DBG | unable to find current IP address of domain addons-819803 in network mk-addons-819803
	I1030 18:21:52.549644  389930 main.go:141] libmachine: (addons-819803) DBG | I1030 18:21:52.549572  389952 retry.go:31] will retry after 1.370089219s: waiting for machine to come up
	I1030 18:21:53.922267  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:21:53.922659  389930 main.go:141] libmachine: (addons-819803) DBG | unable to find current IP address of domain addons-819803 in network mk-addons-819803
	I1030 18:21:53.922681  389930 main.go:141] libmachine: (addons-819803) DBG | I1030 18:21:53.922639  389952 retry.go:31] will retry after 1.236302299s: waiting for machine to come up
	I1030 18:21:55.161330  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:21:55.161760  389930 main.go:141] libmachine: (addons-819803) DBG | unable to find current IP address of domain addons-819803 in network mk-addons-819803
	I1030 18:21:55.161789  389930 main.go:141] libmachine: (addons-819803) DBG | I1030 18:21:55.161723  389952 retry.go:31] will retry after 2.307993642s: waiting for machine to come up
	I1030 18:21:57.471490  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:21:57.471917  389930 main.go:141] libmachine: (addons-819803) DBG | unable to find current IP address of domain addons-819803 in network mk-addons-819803
	I1030 18:21:57.471952  389930 main.go:141] libmachine: (addons-819803) DBG | I1030 18:21:57.471858  389952 retry.go:31] will retry after 2.168747245s: waiting for machine to come up
	I1030 18:21:59.643105  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:21:59.643461  389930 main.go:141] libmachine: (addons-819803) DBG | unable to find current IP address of domain addons-819803 in network mk-addons-819803
	I1030 18:21:59.643490  389930 main.go:141] libmachine: (addons-819803) DBG | I1030 18:21:59.643409  389952 retry.go:31] will retry after 2.480578318s: waiting for machine to come up
	I1030 18:22:02.125197  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:02.125611  389930 main.go:141] libmachine: (addons-819803) DBG | unable to find current IP address of domain addons-819803 in network mk-addons-819803
	I1030 18:22:02.125639  389930 main.go:141] libmachine: (addons-819803) DBG | I1030 18:22:02.125545  389952 retry.go:31] will retry after 2.851771618s: waiting for machine to come up
	I1030 18:22:04.980556  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:04.980952  389930 main.go:141] libmachine: (addons-819803) DBG | unable to find current IP address of domain addons-819803 in network mk-addons-819803
	I1030 18:22:04.980981  389930 main.go:141] libmachine: (addons-819803) DBG | I1030 18:22:04.980894  389952 retry.go:31] will retry after 4.668600476s: waiting for machine to come up
	I1030 18:22:09.653442  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:09.653962  389930 main.go:141] libmachine: (addons-819803) Found IP for machine: 192.168.39.211
	I1030 18:22:09.653987  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has current primary IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:09.653993  389930 main.go:141] libmachine: (addons-819803) Reserving static IP address...
	I1030 18:22:09.654337  389930 main.go:141] libmachine: (addons-819803) DBG | unable to find host DHCP lease matching {name: "addons-819803", mac: "52:54:00:c8:a4:df", ip: "192.168.39.211"} in network mk-addons-819803
	I1030 18:22:09.725392  389930 main.go:141] libmachine: (addons-819803) DBG | Getting to WaitForSSH function...
	I1030 18:22:09.725426  389930 main.go:141] libmachine: (addons-819803) Reserved static IP address: 192.168.39.211
	I1030 18:22:09.725439  389930 main.go:141] libmachine: (addons-819803) Waiting for SSH to be available...
	I1030 18:22:09.727953  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:09.728252  389930 main.go:141] libmachine: (addons-819803) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803
	I1030 18:22:09.728281  389930 main.go:141] libmachine: (addons-819803) DBG | unable to find defined IP address of network mk-addons-819803 interface with MAC address 52:54:00:c8:a4:df
	I1030 18:22:09.728397  389930 main.go:141] libmachine: (addons-819803) DBG | Using SSH client type: external
	I1030 18:22:09.728426  389930 main.go:141] libmachine: (addons-819803) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa (-rw-------)
	I1030 18:22:09.728457  389930 main.go:141] libmachine: (addons-819803) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 18:22:09.728471  389930 main.go:141] libmachine: (addons-819803) DBG | About to run SSH command:
	I1030 18:22:09.728483  389930 main.go:141] libmachine: (addons-819803) DBG | exit 0
	I1030 18:22:09.732017  389930 main.go:141] libmachine: (addons-819803) DBG | SSH cmd err, output: exit status 255: 
	I1030 18:22:09.732039  389930 main.go:141] libmachine: (addons-819803) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1030 18:22:09.732051  389930 main.go:141] libmachine: (addons-819803) DBG | command : exit 0
	I1030 18:22:09.732058  389930 main.go:141] libmachine: (addons-819803) DBG | err     : exit status 255
	I1030 18:22:09.732067  389930 main.go:141] libmachine: (addons-819803) DBG | output  : 
	I1030 18:22:12.732735  389930 main.go:141] libmachine: (addons-819803) DBG | Getting to WaitForSSH function...
	I1030 18:22:12.735243  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:12.735643  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:12.735669  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:12.735775  389930 main.go:141] libmachine: (addons-819803) DBG | Using SSH client type: external
	I1030 18:22:12.735800  389930 main.go:141] libmachine: (addons-819803) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa (-rw-------)
	I1030 18:22:12.736246  389930 main.go:141] libmachine: (addons-819803) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.211 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 18:22:12.736272  389930 main.go:141] libmachine: (addons-819803) DBG | About to run SSH command:
	I1030 18:22:12.736286  389930 main.go:141] libmachine: (addons-819803) DBG | exit 0
	I1030 18:22:12.858596  389930 main.go:141] libmachine: (addons-819803) DBG | SSH cmd err, output: <nil>: 
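The retry visible above (an initial "exit status 255" at 18:22:09, then success at 18:22:12 once DHCP has handed out 192.168.39.211) is the usual "wait for SSH" probe: keep running "exit 0" over SSH until it returns cleanly. A minimal, hypothetical Go sketch of that loop using the external ssh client, with options mirroring the ones printed in the log; the host, key path and timeout are illustrative, not minikube's actual code:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH runs "exit 0" on the target host until it succeeds or the
// deadline expires. Host-key checking is disabled and key-only auth is
// used, matching the ssh options shown in the log.
func waitForSSH(host, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"docker@"+host, "exit 0")
		if err := cmd.Run(); err == nil {
			return nil // SSH is reachable
		} else if time.Now().After(deadline) {
			return fmt.Errorf("ssh not reachable on %s after %s: %v", host, timeout, err)
		}
		time.Sleep(3 * time.Second) // roughly the ~3s backoff seen in the log
	}
}

func main() {
	if err := waitForSSH("192.168.39.211", "/path/to/id_rsa", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}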
	I1030 18:22:12.858865  389930 main.go:141] libmachine: (addons-819803) KVM machine creation complete!
	I1030 18:22:12.859186  389930 main.go:141] libmachine: (addons-819803) Calling .GetConfigRaw
	I1030 18:22:12.859816  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:12.860040  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:12.860205  389930 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1030 18:22:12.860220  389930 main.go:141] libmachine: (addons-819803) Calling .GetState
	I1030 18:22:12.861368  389930 main.go:141] libmachine: Detecting operating system of created instance...
	I1030 18:22:12.861383  389930 main.go:141] libmachine: Waiting for SSH to be available...
	I1030 18:22:12.861388  389930 main.go:141] libmachine: Getting to WaitForSSH function...
	I1030 18:22:12.861393  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:12.863559  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:12.863931  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:12.863973  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:12.864089  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:12.864251  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:12.864381  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:12.864476  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:12.864579  389930 main.go:141] libmachine: Using SSH client type: native
	I1030 18:22:12.864814  389930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1030 18:22:12.864828  389930 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1030 18:22:12.965675  389930 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 18:22:12.965698  389930 main.go:141] libmachine: Detecting the provisioner...
	I1030 18:22:12.965706  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:12.968089  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:12.968420  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:12.968449  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:12.968568  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:12.968771  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:12.968900  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:12.968996  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:12.969102  389930 main.go:141] libmachine: Using SSH client type: native
	I1030 18:22:12.969320  389930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1030 18:22:12.969341  389930 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1030 18:22:13.071004  389930 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1030 18:22:13.071095  389930 main.go:141] libmachine: found compatible host: buildroot
	I1030 18:22:13.071108  389930 main.go:141] libmachine: Provisioning with buildroot...
	I1030 18:22:13.071119  389930 main.go:141] libmachine: (addons-819803) Calling .GetMachineName
	I1030 18:22:13.071379  389930 buildroot.go:166] provisioning hostname "addons-819803"
	I1030 18:22:13.071421  389930 main.go:141] libmachine: (addons-819803) Calling .GetMachineName
	I1030 18:22:13.071609  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:13.074178  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:13.074540  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:13.074570  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:13.074705  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:13.074900  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:13.075046  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:13.075164  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:13.075284  389930 main.go:141] libmachine: Using SSH client type: native
	I1030 18:22:13.075492  389930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1030 18:22:13.075507  389930 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-819803 && echo "addons-819803" | sudo tee /etc/hostname
	I1030 18:22:13.187982  389930 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-819803
	
	I1030 18:22:13.188031  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:13.190507  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:13.190890  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:13.190928  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:13.191100  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:13.191282  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:13.191452  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:13.191571  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:13.191715  389930 main.go:141] libmachine: Using SSH client type: native
	I1030 18:22:13.191885  389930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1030 18:22:13.191899  389930 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-819803' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-819803/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-819803' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 18:22:13.303237  389930 main.go:141] libmachine: SSH cmd err, output: <nil>: 
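The shell snippet above guarantees a "127.0.1.1 addons-819803" entry: an existing 127.0.1.1 line is rewritten in place, otherwise a new line is appended. A rough Go equivalent of that hosts-file fix-up, operating on a string for clarity (the hostname comes from the log, the helper itself is illustrative):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostname returns hosts with a "127.0.1.1 <name>" entry, mirroring
// the grep/sed/tee pipeline in the log.
func ensureHostname(hosts, name string) string {
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
		return hosts // hostname already mapped
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Print(ensureHostname("127.0.0.1 localhost\n", "addons-819803"))
}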
	I1030 18:22:13.303273  389930 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 18:22:13.303312  389930 buildroot.go:174] setting up certificates
	I1030 18:22:13.303326  389930 provision.go:84] configureAuth start
	I1030 18:22:13.303340  389930 main.go:141] libmachine: (addons-819803) Calling .GetMachineName
	I1030 18:22:13.303633  389930 main.go:141] libmachine: (addons-819803) Calling .GetIP
	I1030 18:22:13.306026  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:13.306337  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:13.306357  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:13.306534  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:13.308382  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:13.308738  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:13.308756  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:13.308874  389930 provision.go:143] copyHostCerts
	I1030 18:22:13.308961  389930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 18:22:13.309139  389930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 18:22:13.309218  389930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 18:22:13.309285  389930 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.addons-819803 san=[127.0.0.1 192.168.39.211 addons-819803 localhost minikube]
	I1030 18:22:13.496268  389930 provision.go:177] copyRemoteCerts
	I1030 18:22:13.496353  389930 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 18:22:13.496390  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:13.499024  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:13.499309  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:13.499342  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:13.499476  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:13.499644  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:13.499817  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:13.499930  389930 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa Username:docker}
	I1030 18:22:13.580725  389930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 18:22:13.604274  389930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1030 18:22:13.626842  389930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1030 18:22:13.649577  389930 provision.go:87] duration metric: took 346.237404ms to configureAuth
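configureAuth above produces a machine server certificate whose SANs cover 127.0.0.1, 192.168.39.211, the hostname, localhost and minikube. A condensed crypto/x509 sketch of building a certificate with that SAN set; it is self-signed for brevity, whereas the real flow signs with the CA key pair, so treat this only as an illustration of where the SANs go:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-819803"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list printed in the log.
		DNSNames:    []string{"addons-819803", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.211")},
	}
	// Self-signed here; minikube signs with ca.pem/ca-key.pem instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}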
	I1030 18:22:13.649603  389930 buildroot.go:189] setting minikube options for container-runtime
	I1030 18:22:13.649785  389930 config.go:182] Loaded profile config "addons-819803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:22:13.649870  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:13.652722  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:13.653054  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:13.653079  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:13.653250  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:13.653443  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:13.653587  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:13.653712  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:13.653876  389930 main.go:141] libmachine: Using SSH client type: native
	I1030 18:22:13.654043  389930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1030 18:22:13.654058  389930 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 18:22:13.873355  389930 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 18:22:13.873380  389930 main.go:141] libmachine: Checking connection to Docker...
	I1030 18:22:13.873388  389930 main.go:141] libmachine: (addons-819803) Calling .GetURL
	I1030 18:22:13.874717  389930 main.go:141] libmachine: (addons-819803) DBG | Using libvirt version 6000000
	I1030 18:22:13.876865  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:13.877164  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:13.877195  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:13.877373  389930 main.go:141] libmachine: Docker is up and running!
	I1030 18:22:13.877386  389930 main.go:141] libmachine: Reticulating splines...
	I1030 18:22:13.877394  389930 client.go:171] duration metric: took 27.415210037s to LocalClient.Create
	I1030 18:22:13.877420  389930 start.go:167] duration metric: took 27.415274417s to libmachine.API.Create "addons-819803"
	I1030 18:22:13.877434  389930 start.go:293] postStartSetup for "addons-819803" (driver="kvm2")
	I1030 18:22:13.877451  389930 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 18:22:13.877473  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:13.877703  389930 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 18:22:13.877732  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:13.879805  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:13.880115  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:13.880135  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:13.880303  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:13.880475  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:13.880648  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:13.880796  389930 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa Username:docker}
	I1030 18:22:13.961195  389930 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 18:22:13.965134  389930 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 18:22:13.965159  389930 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 18:22:13.965250  389930 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 18:22:13.965278  389930 start.go:296] duration metric: took 87.833483ms for postStartSetup
	I1030 18:22:13.965332  389930 main.go:141] libmachine: (addons-819803) Calling .GetConfigRaw
	I1030 18:22:13.965897  389930 main.go:141] libmachine: (addons-819803) Calling .GetIP
	I1030 18:22:13.968361  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:13.968649  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:13.968685  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:13.968910  389930 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/config.json ...
	I1030 18:22:13.969086  389930 start.go:128] duration metric: took 27.524693623s to createHost
	I1030 18:22:13.969113  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:13.971111  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:13.971374  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:13.971401  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:13.971537  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:13.971729  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:13.971876  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:13.972026  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:13.972170  389930 main.go:141] libmachine: Using SSH client type: native
	I1030 18:22:13.972335  389930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1030 18:22:13.972351  389930 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 18:22:14.075274  389930 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730312534.054770540
	
	I1030 18:22:14.075302  389930 fix.go:216] guest clock: 1730312534.054770540
	I1030 18:22:14.075310  389930 fix.go:229] Guest: 2024-10-30 18:22:14.05477054 +0000 UTC Remote: 2024-10-30 18:22:13.969098834 +0000 UTC m=+27.629342568 (delta=85.671706ms)
	I1030 18:22:14.075349  389930 fix.go:200] guest clock delta is within tolerance: 85.671706ms
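The "date +%s.%N" output is parsed and compared with the host clock; the 85.7ms delta is inside tolerance, so no clock adjustment is needed. A small hypothetical sketch of that comparison (the 1s tolerance used below is an assumption, not the value minikube applies):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the "seconds.nanoseconds" string from `date +%s.%N`
// on the guest and returns its absolute distance from the local clock.
func clockDelta(guest string) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guest), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	d := time.Since(time.Unix(sec, nsec))
	if d < 0 {
		d = -d
	}
	return d, nil
}

func main() {
	d, err := clockDelta("1730312534.054770540")
	if err != nil {
		panic(err)
	}
	fmt.Printf("guest clock delta: %v (tolerance exceeded: %v)\n", d, d > time.Second)
}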
	I1030 18:22:14.075355  389930 start.go:83] releasing machines lock for "addons-819803", held for 27.631058158s
	I1030 18:22:14.075375  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:14.075687  389930 main.go:141] libmachine: (addons-819803) Calling .GetIP
	I1030 18:22:14.077973  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:14.078275  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:14.078307  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:14.078506  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:14.079025  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:14.079210  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:14.079317  389930 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 18:22:14.079386  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:14.079433  389930 ssh_runner.go:195] Run: cat /version.json
	I1030 18:22:14.079459  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:14.081762  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:14.081780  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:14.082059  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:14.082087  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:14.082112  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:14.082133  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:14.082234  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:14.082396  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:14.082407  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:14.082600  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:14.082645  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:14.082789  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:14.082805  389930 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa Username:docker}
	I1030 18:22:14.082918  389930 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa Username:docker}
	I1030 18:22:14.183992  389930 ssh_runner.go:195] Run: systemctl --version
	I1030 18:22:14.189783  389930 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 18:22:14.347846  389930 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 18:22:14.353576  389930 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 18:22:14.353651  389930 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 18:22:14.372746  389930 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
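Because minikube will lay down its own bridge CNI config, any pre-existing bridge/podman configs under /etc/cni/net.d are renamed with a ".mk_disabled" suffix (the find/-exec mv command above). A local-filesystem sketch of the same rename pass; the directory path is taken from the log, everything else is illustrative:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableCNIConfigs renames bridge/podman CNI config files so the runtime
// ignores them, mirroring the find/mv command in the log.
func disableCNIConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableCNIConfigs("/etc/cni/net.d")
	fmt.Println(disabled, err)
}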
	I1030 18:22:14.372775  389930 start.go:495] detecting cgroup driver to use...
	I1030 18:22:14.372850  389930 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 18:22:14.392610  389930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 18:22:14.408830  389930 docker.go:217] disabling cri-docker service (if available) ...
	I1030 18:22:14.408885  389930 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 18:22:14.423904  389930 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 18:22:14.439462  389930 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 18:22:14.581365  389930 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 18:22:14.737305  389930 docker.go:233] disabling docker service ...
	I1030 18:22:14.737373  389930 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 18:22:14.751615  389930 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 18:22:14.764338  389930 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 18:22:14.903828  389930 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 18:22:15.034446  389930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 18:22:15.051942  389930 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 18:22:15.069743  389930 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1030 18:22:15.069811  389930 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:22:15.080015  389930 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 18:22:15.080076  389930 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:22:15.090345  389930 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:22:15.100700  389930 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:22:15.110937  389930 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 18:22:15.121495  389930 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:22:15.131703  389930 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:22:15.148129  389930 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
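The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroupfs as cgroup manager, conmon_cgroup = "pod", and the unprivileged-port sysctl. A minimal Go stand-in for two of those edits, shown as pure string rewriting; the regular expressions approximate the sed patterns and are not minikube's code:

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf applies the same kind of line-level replacements the
// log's sed invocations perform on 02-crio.conf.
func rewriteCrioConf(conf string) string {
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	return conf
}

func main() {
	in := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(rewriteCrioConf(in))
}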
	I1030 18:22:15.158324  389930 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 18:22:15.167752  389930 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 18:22:15.167817  389930 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 18:22:15.180379  389930 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
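sysctl cannot report net.bridge.bridge-nf-call-iptables until the br_netfilter module is loaded, hence the status-255 warning, the modprobe, and the explicit write of 1 into /proc/sys/net/ipv4/ip_forward. A small sketch (Linux, run as root) of checking the bridge-netfilter key and enabling IP forwarding through /proc; the paths come from the log:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Only readable once br_netfilter is loaded (modprobe br_netfilter).
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		fmt.Println("bridge netfilter not available yet:", err)
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		fmt.Println("enable ip_forward:", err)
	}
}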
	I1030 18:22:15.189438  389930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 18:22:15.315918  389930 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1030 18:22:15.406680  389930 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 18:22:15.406771  389930 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 18:22:15.412035  389930 start.go:563] Will wait 60s for crictl version
	I1030 18:22:15.412093  389930 ssh_runner.go:195] Run: which crictl
	I1030 18:22:15.415689  389930 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 18:22:15.453281  389930 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1030 18:22:15.453368  389930 ssh_runner.go:195] Run: crio --version
	I1030 18:22:15.481364  389930 ssh_runner.go:195] Run: crio --version
	I1030 18:22:15.510591  389930 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1030 18:22:15.511809  389930 main.go:141] libmachine: (addons-819803) Calling .GetIP
	I1030 18:22:15.513933  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:15.514259  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:15.514292  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:15.514468  389930 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1030 18:22:15.518335  389930 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 18:22:15.530311  389930 kubeadm.go:883] updating cluster {Name:addons-819803 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-819803 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1030 18:22:15.530433  389930 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 18:22:15.530476  389930 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 18:22:15.561495  389930 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1030 18:22:15.561560  389930 ssh_runner.go:195] Run: which lz4
	I1030 18:22:15.565386  389930 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1030 18:22:15.569388  389930 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1030 18:22:15.569422  389930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1030 18:22:16.815091  389930 crio.go:462] duration metric: took 1.249736286s to copy over tarball
	I1030 18:22:16.815165  389930 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1030 18:22:18.895499  389930 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.080282089s)
	I1030 18:22:18.895540  389930 crio.go:469] duration metric: took 2.080418147s to extract the tarball
	I1030 18:22:18.895550  389930 ssh_runner.go:146] rm: /preloaded.tar.lz4
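Since no preloaded images were found in the CRI-O store, the ~392MB preload tarball is copied to the guest, unpacked under /var with tar (xattrs preserved, lz4 decompression), and then removed. A hypothetical local sketch of the "check, then extract" step, shelling out to tar as the log does; the tarball path is illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload unpacks the preload tarball into dest if it exists,
// preserving extended attributes, then deletes the tarball.
func extractPreload(tarball, dest string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("no preload tarball: %w", err) // caller falls back to pulling images
	}
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarball)
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		return err
	}
	return os.Remove(tarball)
}

func main() {
	fmt.Println(extractPreload("/preloaded.tar.lz4", "/var"))
}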
	I1030 18:22:18.934730  389930 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 18:22:18.976819  389930 crio.go:514] all images are preloaded for cri-o runtime.
	I1030 18:22:18.976846  389930 cache_images.go:84] Images are preloaded, skipping loading
	I1030 18:22:18.976854  389930 kubeadm.go:934] updating node { 192.168.39.211 8443 v1.31.2 crio true true} ...
	I1030 18:22:18.976961  389930 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-819803 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-819803 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
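The kubelet drop-in above overrides ExecStart with the node-specific flags (hostname override, kubeconfig paths, node IP). A toy text/template rendering of that ExecStart line; the template text only illustrates the idea and is not the template minikube ships:

package main

import (
	"os"
	"text/template"
)

const execStart = `ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet ` +
	`--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf ` +
	`--config=/var/lib/kubelet/config.yaml ` +
	`--hostname-override={{.NodeName}} ` +
	`--kubeconfig=/etc/kubernetes/kubelet.conf ` +
	`--node-ip={{.NodeIP}}
`

func main() {
	t := template.Must(template.New("kubelet").Parse(execStart))
	// Values taken from the log output above.
	_ = t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.31.2",
		"NodeName":          "addons-819803",
		"NodeIP":            "192.168.39.211",
	})
}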
	I1030 18:22:18.977032  389930 ssh_runner.go:195] Run: crio config
	I1030 18:22:19.022630  389930 cni.go:84] Creating CNI manager for ""
	I1030 18:22:19.022657  389930 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 18:22:19.022669  389930 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1030 18:22:19.022692  389930 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.211 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-819803 NodeName:addons-819803 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.211"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.211 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1030 18:22:19.022831  389930 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.211
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-819803"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.211"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.211"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1030 18:22:19.022894  389930 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1030 18:22:19.033139  389930 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 18:22:19.033217  389930 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1030 18:22:19.042777  389930 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1030 18:22:19.059625  389930 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 18:22:19.076398  389930 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1030 18:22:19.093021  389930 ssh_runner.go:195] Run: grep 192.168.39.211	control-plane.minikube.internal$ /etc/hosts
	I1030 18:22:19.096821  389930 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.211	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 18:22:19.109239  389930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 18:22:19.241397  389930 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 18:22:19.258667  389930 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803 for IP: 192.168.39.211
	I1030 18:22:19.258692  389930 certs.go:194] generating shared ca certs ...
	I1030 18:22:19.258759  389930 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:22:19.258916  389930 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 18:22:19.421313  389930 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt ...
	I1030 18:22:19.421346  389930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt: {Name:mke1baa90fdf9d472688c9dce1a8cbdb9429180e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:22:19.421528  389930 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key ...
	I1030 18:22:19.421545  389930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key: {Name:mk39960ca0f7a604b923049b394a9dd190b5c799 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:22:19.421651  389930 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 18:22:19.800363  389930 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt ...
	I1030 18:22:19.800397  389930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt: {Name:mka82047fcbc281c8dafed47ca47ee10ed435e17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:22:19.800557  389930 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key ...
	I1030 18:22:19.800568  389930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key: {Name:mke52f05795eacc13cef93d9a2f97c8ed2e5e1b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:22:19.800670  389930 certs.go:256] generating profile certs ...
	I1030 18:22:19.800748  389930 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.key
	I1030 18:22:19.800775  389930 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.crt with IP's: []
	I1030 18:22:20.012094  389930 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.crt ...
	I1030 18:22:20.012131  389930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.crt: {Name:mk3e1026a414d0eb9a393c91985864dd02c29ca8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:22:20.012311  389930 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.key ...
	I1030 18:22:20.012322  389930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.key: {Name:mkd713bb8408388ccf35cdd7458b0248691df4e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:22:20.012388  389930 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/apiserver.key.9be9c35d
	I1030 18:22:20.012407  389930 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/apiserver.crt.9be9c35d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.211]
	I1030 18:22:20.357052  389930 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/apiserver.crt.9be9c35d ...
	I1030 18:22:20.357087  389930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/apiserver.crt.9be9c35d: {Name:mk90611d055450c0bc560328b67b2a4f1f1d82a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:22:20.357281  389930 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/apiserver.key.9be9c35d ...
	I1030 18:22:20.357300  389930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/apiserver.key.9be9c35d: {Name:mkd39c4a1d2503f9ae6c571127f659970cb32617 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:22:20.357397  389930 certs.go:381] copying /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/apiserver.crt.9be9c35d -> /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/apiserver.crt
	I1030 18:22:20.357473  389930 certs.go:385] copying /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/apiserver.key.9be9c35d -> /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/apiserver.key
	I1030 18:22:20.357523  389930 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/proxy-client.key
	I1030 18:22:20.357541  389930 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/proxy-client.crt with IP's: []
	I1030 18:22:20.480676  389930 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/proxy-client.crt ...
	I1030 18:22:20.480707  389930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/proxy-client.crt: {Name:mk9b3153de6421c1963e00f41cbac3c9cb610755 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:22:20.480887  389930 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/proxy-client.key ...
	I1030 18:22:20.480905  389930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/proxy-client.key: {Name:mk49d90387976d01cb4e13a1c6fccd22f8262080 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:22:20.481118  389930 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 18:22:20.481155  389930 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 18:22:20.481180  389930 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 18:22:20.481204  389930 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 18:22:20.481879  389930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 18:22:20.511951  389930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 18:22:20.537422  389930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 18:22:20.561582  389930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 18:22:20.585219  389930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1030 18:22:20.608875  389930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1030 18:22:20.632409  389930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 18:22:20.655836  389930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1030 18:22:20.679169  389930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 18:22:20.702822  389930 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1030 18:22:20.719706  389930 ssh_runner.go:195] Run: openssl version
	I1030 18:22:20.725585  389930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 18:22:20.736618  389930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:22:20.741441  389930 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:22:20.741549  389930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:22:20.748232  389930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
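Trusting the minikube CA inside the guest means placing it in /usr/share/ca-certificates and symlinking it under its OpenSSL subject hash ("b5213941.0" here). A sketch that shells out to `openssl x509 -hash` and creates the hash symlink; run as root, with paths from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCACert symlinks certPath into /etc/ssl/certs under its OpenSSL
// subject-name hash, which is how the system trust store locates it.
func linkCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	_ = os.Remove(link) // replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	fmt.Println(linkCACert("/usr/share/ca-certificates/minikubeCA.pem"))
}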
	I1030 18:22:20.758668  389930 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 18:22:20.762652  389930 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1030 18:22:20.762703  389930 kubeadm.go:392] StartCluster: {Name:addons-819803 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-819803 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 18:22:20.762779  389930 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1030 18:22:20.762821  389930 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 18:22:20.796772  389930 cri.go:89] found id: ""
	I1030 18:22:20.796848  389930 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1030 18:22:20.806869  389930 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 18:22:20.816229  389930 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 18:22:20.829281  389930 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 18:22:20.829307  389930 kubeadm.go:157] found existing configuration files:
	
	I1030 18:22:20.829362  389930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 18:22:20.838753  389930 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 18:22:20.838816  389930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 18:22:20.849914  389930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 18:22:20.862499  389930 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 18:22:20.862581  389930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 18:22:20.874201  389930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 18:22:20.885993  389930 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 18:22:20.886066  389930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 18:22:20.900423  389930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 18:22:20.909291  389930 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 18:22:20.909350  389930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1030 18:22:20.918408  389930 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1030 18:22:21.073373  389930 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1030 18:22:31.121385  389930 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1030 18:22:31.121483  389930 kubeadm.go:310] [preflight] Running pre-flight checks
	I1030 18:22:31.121581  389930 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1030 18:22:31.121714  389930 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1030 18:22:31.121794  389930 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1030 18:22:31.121888  389930 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1030 18:22:31.123478  389930 out.go:235]   - Generating certificates and keys ...
	I1030 18:22:31.123546  389930 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1030 18:22:31.123600  389930 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1030 18:22:31.123665  389930 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1030 18:22:31.123729  389930 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1030 18:22:31.123824  389930 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1030 18:22:31.123900  389930 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1030 18:22:31.123953  389930 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1030 18:22:31.124113  389930 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-819803 localhost] and IPs [192.168.39.211 127.0.0.1 ::1]
	I1030 18:22:31.124190  389930 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1030 18:22:31.124330  389930 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-819803 localhost] and IPs [192.168.39.211 127.0.0.1 ::1]
	I1030 18:22:31.124386  389930 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1030 18:22:31.124490  389930 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1030 18:22:31.124565  389930 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1030 18:22:31.124656  389930 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1030 18:22:31.124737  389930 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1030 18:22:31.124821  389930 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1030 18:22:31.124868  389930 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1030 18:22:31.124921  389930 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1030 18:22:31.124988  389930 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1030 18:22:31.125065  389930 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1030 18:22:31.125139  389930 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1030 18:22:31.126918  389930 out.go:235]   - Booting up control plane ...
	I1030 18:22:31.127007  389930 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1030 18:22:31.127096  389930 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1030 18:22:31.127158  389930 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1030 18:22:31.127263  389930 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1030 18:22:31.127350  389930 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1030 18:22:31.127389  389930 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1030 18:22:31.127503  389930 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1030 18:22:31.127588  389930 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1030 18:22:31.127645  389930 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.206883ms
	I1030 18:22:31.127707  389930 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1030 18:22:31.127760  389930 kubeadm.go:310] [api-check] The API server is healthy after 5.50200891s
	I1030 18:22:31.127873  389930 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1030 18:22:31.128002  389930 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1030 18:22:31.128071  389930 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1030 18:22:31.128233  389930 kubeadm.go:310] [mark-control-plane] Marking the node addons-819803 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1030 18:22:31.128290  389930 kubeadm.go:310] [bootstrap-token] Using token: g3koph.ks9ytu5c0ykdojb9
	I1030 18:22:31.129703  389930 out.go:235]   - Configuring RBAC rules ...
	I1030 18:22:31.129790  389930 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1030 18:22:31.129859  389930 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1030 18:22:31.130010  389930 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1030 18:22:31.130186  389930 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1030 18:22:31.130322  389930 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1030 18:22:31.130396  389930 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1030 18:22:31.130561  389930 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1030 18:22:31.130604  389930 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1030 18:22:31.130651  389930 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1030 18:22:31.130664  389930 kubeadm.go:310] 
	I1030 18:22:31.130718  389930 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1030 18:22:31.130724  389930 kubeadm.go:310] 
	I1030 18:22:31.130794  389930 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1030 18:22:31.130800  389930 kubeadm.go:310] 
	I1030 18:22:31.130824  389930 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1030 18:22:31.130879  389930 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1030 18:22:31.130922  389930 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1030 18:22:31.130927  389930 kubeadm.go:310] 
	I1030 18:22:31.130977  389930 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1030 18:22:31.130986  389930 kubeadm.go:310] 
	I1030 18:22:31.131035  389930 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1030 18:22:31.131044  389930 kubeadm.go:310] 
	I1030 18:22:31.131091  389930 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1030 18:22:31.131153  389930 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1030 18:22:31.131216  389930 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1030 18:22:31.131223  389930 kubeadm.go:310] 
	I1030 18:22:31.131297  389930 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1030 18:22:31.131366  389930 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1030 18:22:31.131372  389930 kubeadm.go:310] 
	I1030 18:22:31.131442  389930 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token g3koph.ks9ytu5c0ykdojb9 \
	I1030 18:22:31.131544  389930 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b2143b9018fc79be6f35b8981f64b262448bd09f7227405c8dafa29481e81491 \
	I1030 18:22:31.131566  389930 kubeadm.go:310] 	--control-plane 
	I1030 18:22:31.131575  389930 kubeadm.go:310] 
	I1030 18:22:31.131703  389930 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1030 18:22:31.131717  389930 kubeadm.go:310] 
	I1030 18:22:31.131835  389930 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token g3koph.ks9ytu5c0ykdojb9 \
	I1030 18:22:31.131981  389930 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b2143b9018fc79be6f35b8981f64b262448bd09f7227405c8dafa29481e81491 
	I1030 18:22:31.131994  389930 cni.go:84] Creating CNI manager for ""
	I1030 18:22:31.132001  389930 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 18:22:31.133334  389930 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1030 18:22:31.134658  389930 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1030 18:22:31.149352  389930 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
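	The 496-byte file copied above is the bridge CNI config minikube writes for the crio runtime. To inspect the file that actually landed on the node, a command along these lines would work; it is illustrative and assumes the profile name used in this run:
	    minikube -p addons-819803 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist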
	I1030 18:22:31.174669  389930 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1030 18:22:31.174794  389930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:22:31.174830  389930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-819803 minikube.k8s.io/updated_at=2024_10_30T18_22_31_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0 minikube.k8s.io/name=addons-819803 minikube.k8s.io/primary=true
	I1030 18:22:31.204031  389930 ops.go:34] apiserver oom_adj: -16
	I1030 18:22:31.324787  389930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:22:31.825803  389930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:22:32.325287  389930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:22:32.825519  389930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:22:33.324911  389930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:22:33.824938  389930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:22:34.325322  389930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:22:34.825825  389930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:22:35.325760  389930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:22:35.433263  389930 kubeadm.go:1113] duration metric: took 4.258529499s to wait for elevateKubeSystemPrivileges
	I1030 18:22:35.433313  389930 kubeadm.go:394] duration metric: took 14.670614783s to StartCluster
	I1030 18:22:35.433340  389930 settings.go:142] acquiring lock: {Name:mk3778f95b8c1775512fd8244c173f81fde2f8a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:22:35.433493  389930 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 18:22:35.434032  389930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/kubeconfig: {Name:mkad9ac9beb9742fc1a420e6d4412db8b65962e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:22:35.434256  389930 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1030 18:22:35.434301  389930 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 18:22:35.434334  389930 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1030 18:22:35.434463  389930 addons.go:69] Setting yakd=true in profile "addons-819803"
	I1030 18:22:35.434481  389930 addons.go:69] Setting cloud-spanner=true in profile "addons-819803"
	I1030 18:22:35.434477  389930 addons.go:69] Setting metrics-server=true in profile "addons-819803"
	I1030 18:22:35.434502  389930 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-819803"
	I1030 18:22:35.434513  389930 addons.go:234] Setting addon yakd=true in "addons-819803"
	I1030 18:22:35.434518  389930 addons.go:234] Setting addon cloud-spanner=true in "addons-819803"
	I1030 18:22:35.434521  389930 addons.go:234] Setting addon metrics-server=true in "addons-819803"
	I1030 18:22:35.434524  389930 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-819803"
	I1030 18:22:35.434517  389930 addons.go:69] Setting ingress=true in profile "addons-819803"
	I1030 18:22:35.434548  389930 addons.go:234] Setting addon ingress=true in "addons-819803"
	I1030 18:22:35.434555  389930 host.go:66] Checking if "addons-819803" exists ...
	I1030 18:22:35.434556  389930 host.go:66] Checking if "addons-819803" exists ...
	I1030 18:22:35.434561  389930 addons.go:69] Setting ingress-dns=true in profile "addons-819803"
	I1030 18:22:35.434561  389930 config.go:182] Loaded profile config "addons-819803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:22:35.434564  389930 addons.go:69] Setting default-storageclass=true in profile "addons-819803"
	I1030 18:22:35.434572  389930 addons.go:234] Setting addon ingress-dns=true in "addons-819803"
	I1030 18:22:35.434583  389930 host.go:66] Checking if "addons-819803" exists ...
	I1030 18:22:35.434582  389930 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-819803"
	I1030 18:22:35.434599  389930 host.go:66] Checking if "addons-819803" exists ...
	I1030 18:22:35.434604  389930 addons.go:69] Setting storage-provisioner=true in profile "addons-819803"
	I1030 18:22:35.434615  389930 addons.go:234] Setting addon storage-provisioner=true in "addons-819803"
	I1030 18:22:35.434636  389930 host.go:66] Checking if "addons-819803" exists ...
	I1030 18:22:35.434556  389930 host.go:66] Checking if "addons-819803" exists ...
	I1030 18:22:35.434693  389930 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-819803"
	I1030 18:22:35.434706  389930 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-819803"
	I1030 18:22:35.434968  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.435002  389930 addons.go:69] Setting inspektor-gadget=true in profile "addons-819803"
	I1030 18:22:35.435008  389930 addons.go:69] Setting volcano=true in profile "addons-819803"
	I1030 18:22:35.435016  389930 addons.go:234] Setting addon inspektor-gadget=true in "addons-819803"
	I1030 18:22:35.435022  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.435028  389930 addons.go:69] Setting volumesnapshots=true in profile "addons-819803"
	I1030 18:22:35.435035  389930 host.go:66] Checking if "addons-819803" exists ...
	I1030 18:22:35.435036  389930 addons.go:234] Setting addon volumesnapshots=true in "addons-819803"
	I1030 18:22:35.435042  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.435050  389930 addons.go:69] Setting gcp-auth=true in profile "addons-819803"
	I1030 18:22:35.435053  389930 host.go:66] Checking if "addons-819803" exists ...
	I1030 18:22:35.435052  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.435064  389930 addons.go:69] Setting registry=true in profile "addons-819803"
	I1030 18:22:35.435072  389930 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-819803"
	I1030 18:22:35.435078  389930 addons.go:234] Setting addon registry=true in "addons-819803"
	I1030 18:22:35.435085  389930 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-819803"
	I1030 18:22:35.435088  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.435102  389930 host.go:66] Checking if "addons-819803" exists ...
	I1030 18:22:35.435104  389930 host.go:66] Checking if "addons-819803" exists ...
	I1030 18:22:35.435270  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.435277  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.435052  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.435301  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.435304  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.435045  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.435024  389930 addons.go:234] Setting addon volcano=true in "addons-819803"
	I1030 18:22:35.435065  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.434461  389930 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-819803"
	I1030 18:22:35.435421  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.435066  389930 mustload.go:65] Loading cluster: addons-819803
	I1030 18:22:35.435446  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.435451  389930 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-819803"
	I1030 18:22:35.435453  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.435470  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.435474  389930 host.go:66] Checking if "addons-819803" exists ...
	I1030 18:22:35.435489  389930 host.go:66] Checking if "addons-819803" exists ...
	I1030 18:22:35.435074  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.435427  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.435522  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.435529  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.435371  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.435643  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.435672  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.435011  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.435842  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.434556  389930 host.go:66] Checking if "addons-819803" exists ...
	I1030 18:22:35.435874  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.435917  389930 config.go:182] Loaded profile config "addons-819803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:22:35.435932  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.435951  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.436140  389930 out.go:177] * Verifying Kubernetes components...
	I1030 18:22:35.437892  389930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 18:22:35.451309  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44815
	I1030 18:22:35.451368  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42879
	I1030 18:22:35.453601  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46125
	I1030 18:22:35.462832  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.462887  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.462972  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.463001  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.463551  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.463686  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.463753  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.464363  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.464385  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.464558  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.464571  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.464703  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.464717  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.464781  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.465247  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.465357  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.465408  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.465940  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.465971  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.466133  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.466738  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.466777  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.485973  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42213
	I1030 18:22:35.486582  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.487276  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.487297  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.487704  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.488285  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.488313  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.492645  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42839
	I1030 18:22:35.493108  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.493776  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.493794  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.494203  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.494763  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.494810  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.498141  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35343
	I1030 18:22:35.498749  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.498908  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39767
	I1030 18:22:35.499059  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33585
	I1030 18:22:35.499493  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.499511  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.499947  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.499962  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.500031  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.500237  389930 main.go:141] libmachine: (addons-819803) Calling .GetState
	I1030 18:22:35.501018  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.501037  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.501227  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.501243  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.501514  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37347
	I1030 18:22:35.501593  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43815
	I1030 18:22:35.501620  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.502176  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.502201  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.502258  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.502299  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.502616  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.502997  389930 main.go:141] libmachine: (addons-819803) Calling .GetState
	I1030 18:22:35.503071  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.503087  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.503139  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.503152  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.503748  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.503812  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.504373  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.504413  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.504715  389930 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-819803"
	I1030 18:22:35.504766  389930 host.go:66] Checking if "addons-819803" exists ...
	I1030 18:22:35.505142  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.505163  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.505226  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.505258  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.508394  389930 host.go:66] Checking if "addons-819803" exists ...
	I1030 18:22:35.508760  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.508802  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.509523  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42133
	I1030 18:22:35.518589  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42211
	I1030 18:22:35.518805  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.519262  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.519843  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.519861  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.520269  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.520841  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.520886  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.521564  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.521582  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.522378  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.522899  389930 main.go:141] libmachine: (addons-819803) Calling .GetState
	I1030 18:22:35.525649  389930 addons.go:234] Setting addon default-storageclass=true in "addons-819803"
	I1030 18:22:35.525695  389930 host.go:66] Checking if "addons-819803" exists ...
	I1030 18:22:35.526069  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.526104  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.530233  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35459
	I1030 18:22:35.532361  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40975
	I1030 18:22:35.532490  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33581
	I1030 18:22:35.532715  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.533144  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.533729  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.533748  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.533980  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34671
	I1030 18:22:35.534311  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.534415  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35937
	I1030 18:22:35.534935  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.535082  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.535093  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.535591  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.535609  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.535669  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.536059  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.536109  389930 main.go:141] libmachine: (addons-819803) Calling .GetState
	I1030 18:22:35.536296  389930 main.go:141] libmachine: (addons-819803) Calling .GetState
	I1030 18:22:35.537061  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.538333  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.538380  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.538606  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.538959  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.538979  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.539176  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45115
	I1030 18:22:35.539402  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.539786  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:35.540242  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.540276  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.540577  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.540591  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.540620  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.541079  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.541202  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.541225  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.541275  389930 main.go:141] libmachine: (addons-819803) Calling .GetState
	I1030 18:22:35.541702  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.541728  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:35.542298  389930 main.go:141] libmachine: (addons-819803) Calling .GetState
	I1030 18:22:35.542833  389930 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1030 18:22:35.543350  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:35.544084  389930 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1030 18:22:35.544122  389930 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1030 18:22:35.544491  389930 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1030 18:22:35.544516  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:35.544558  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:35.544876  389930 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1030 18:22:35.545701  389930 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1030 18:22:35.545719  389930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1030 18:22:35.545737  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:35.546557  389930 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1030 18:22:35.546577  389930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1030 18:22:35.546650  389930 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 18:22:35.546855  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:35.548517  389930 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 18:22:35.548536  389930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1030 18:22:35.548554  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:35.548814  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.550798  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:35.550834  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.552339  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40989
	I1030 18:22:35.552540  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.552920  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.552988  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.553045  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42535
	I1030 18:22:35.553192  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:35.553213  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.553691  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:35.553714  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.553739  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.553773  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.553784  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.553803  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.553862  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:35.554360  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.554386  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.554459  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.554515  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:35.554556  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:35.554787  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.554791  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:35.554840  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:35.554991  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33115
	I1030 18:22:35.555005  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:35.555072  389930 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa Username:docker}
	I1030 18:22:35.555532  389930 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa Username:docker}
	I1030 18:22:35.555550  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:35.555584  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.555688  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:35.555731  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:35.555876  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:35.555949  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.556023  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:35.556067  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:35.556194  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:35.556310  389930 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa Username:docker}
	I1030 18:22:35.556682  389930 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa Username:docker}
	I1030 18:22:35.557573  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.557589  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.558162  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.558395  389930 main.go:141] libmachine: (addons-819803) Calling .GetState
	I1030 18:22:35.559021  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.559062  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.559868  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.559907  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.560163  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:35.562510  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45921
	I1030 18:22:35.562878  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.563668  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.563689  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.563805  389930 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1030 18:22:35.564280  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.566609  389930 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1030 18:22:35.566630  389930 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1030 18:22:35.566652  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:35.566780  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41045
	I1030 18:22:35.566908  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33159
	I1030 18:22:35.566978  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43027
	I1030 18:22:35.567356  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.567443  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.567955  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.567977  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.567998  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.568403  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.568422  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.568551  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.568563  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.568904  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.568965  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.569619  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.569661  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.569878  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.569943  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38761
	I1030 18:22:35.570506  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.570540  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.570887  389930 main.go:141] libmachine: (addons-819803) Calling .GetState
	I1030 18:22:35.571446  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38573
	I1030 18:22:35.571722  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42009
	I1030 18:22:35.571841  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.572052  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.572240  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.572339  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.572489  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.572505  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.572561  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:35.572610  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:35.572625  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.572794  389930 main.go:141] libmachine: (addons-819803) Calling .GetState
	I1030 18:22:35.572860  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:35.573063  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:35.573403  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:35.573556  389930 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa Username:docker}
	I1030 18:22:35.574081  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.574424  389930 main.go:141] libmachine: (addons-819803) Calling .GetState
	I1030 18:22:35.574535  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:35.574670  389930 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1030 18:22:35.575587  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.575615  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.576085  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.576203  389930 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1030 18:22:35.576226  389930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1030 18:22:35.576246  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:35.576334  389930 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1030 18:22:35.576524  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.576541  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.576595  389930 main.go:141] libmachine: (addons-819803) Calling .GetState
	I1030 18:22:35.577638  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.577932  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:35.579131  389930 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1030 18:22:35.579837  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.579870  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:35.580727  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:35.580750  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.580977  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:35.581160  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:35.581320  389930 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1030 18:22:35.581331  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:35.581554  389930 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa Username:docker}
	I1030 18:22:35.581601  389930 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1030 18:22:35.583082  389930 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1030 18:22:35.583111  389930 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1030 18:22:35.583132  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:35.583762  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:35.584400  389930 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1030 18:22:35.585201  389930 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1030 18:22:35.585346  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43911
	I1030 18:22:35.586143  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.586272  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39835
	I1030 18:22:35.586283  389930 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1030 18:22:35.586400  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.586407  389930 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1030 18:22:35.586419  389930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1030 18:22:35.586437  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:35.587120  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:35.587161  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.587180  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:35.587196  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.587225  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.587259  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.587444  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:35.587635  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:35.587698  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.587727  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.587742  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.587871  389930 main.go:141] libmachine: (addons-819803) Calling .GetState
	I1030 18:22:35.587935  389930 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa Username:docker}
	I1030 18:22:35.588372  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.588527  389930 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1030 18:22:35.588746  389930 main.go:141] libmachine: (addons-819803) Calling .GetState
	I1030 18:22:35.590435  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.590635  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:35.590887  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:35.590946  389930 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1030 18:22:35.592013  389930 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1030 18:22:35.592078  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36831
	I1030 18:22:35.592145  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:35.592214  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:35.590924  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.593583  389930 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1030 18:22:35.593641  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:35.593599  389930 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1030 18:22:35.594311  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:35.594549  389930 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa Username:docker}
	I1030 18:22:35.595018  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.595032  389930 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1030 18:22:35.595063  389930 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1030 18:22:35.595069  389930 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1030 18:22:35.595090  389930 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1030 18:22:35.595102  389930 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1030 18:22:35.595116  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:35.595090  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:35.596376  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.596393  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.597022  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.597191  389930 main.go:141] libmachine: (addons-819803) Calling .GetState
	I1030 18:22:35.597692  389930 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1030 18:22:35.598868  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.599162  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.599207  389930 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1030 18:22:35.599227  389930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1030 18:22:35.599249  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:35.599271  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:35.599306  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.599461  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:35.599650  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:35.599673  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:35.599689  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.599798  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:35.599861  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:35.599981  389930 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa Username:docker}
	I1030 18:22:35.600025  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:35.600178  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:35.600320  389930 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa Username:docker}
	I1030 18:22:35.600687  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:35.602326  389930 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1030 18:22:35.602593  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.602996  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:35.603031  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.603140  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:35.603309  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:35.603425  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:35.603563  389930 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa Username:docker}
	I1030 18:22:35.604780  389930 out.go:177]   - Using image docker.io/registry:2.8.3
	I1030 18:22:35.606269  389930 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1030 18:22:35.606290  389930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1030 18:22:35.606304  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:35.610628  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.610650  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:35.610656  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:35.610659  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41189
	I1030 18:22:35.610674  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	W1030 18:22:35.610825  389930 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:33980->192.168.39.211:22: read: connection reset by peer
	I1030 18:22:35.610860  389930 retry.go:31] will retry after 253.092561ms: ssh: handshake failed: read tcp 192.168.39.1:33980->192.168.39.211:22: read: connection reset by peer
	I1030 18:22:35.610831  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:35.610932  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42779
	I1030 18:22:35.611120  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:35.611254  389930 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa Username:docker}
	I1030 18:22:35.611371  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.611495  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.611890  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.611918  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.612221  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.612251  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.612295  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.612465  389930 main.go:141] libmachine: (addons-819803) Calling .GetState
	I1030 18:22:35.612604  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.612767  389930 main.go:141] libmachine: (addons-819803) Calling .GetState
	I1030 18:22:35.614187  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:35.614508  389930 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1030 18:22:35.614525  389930 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1030 18:22:35.614543  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:35.614573  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:35.615191  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:35.615215  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:35.615456  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:35.615470  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:35.615478  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:35.615485  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:35.617278  389930 main.go:141] libmachine: (addons-819803) DBG | Closing plugin on server side
	I1030 18:22:35.617282  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:35.617286  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36749
	I1030 18:22:35.617298  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	W1030 18:22:35.617397  389930 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1030 18:22:35.617875  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.618411  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.618428  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.619404  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.619617  389930 main.go:141] libmachine: (addons-819803) Calling .GetState
	I1030 18:22:35.621173  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.621393  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:35.621605  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:35.621625  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.621810  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:35.621988  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:35.622118  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:35.622245  389930 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa Username:docker}
	I1030 18:22:35.622876  389930 out.go:177]   - Using image docker.io/busybox:stable
	I1030 18:22:35.623966  389930 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1030 18:22:35.625274  389930 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1030 18:22:35.625287  389930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1030 18:22:35.625302  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	W1030 18:22:35.626300  389930 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:33994->192.168.39.211:22: read: connection reset by peer
	I1030 18:22:35.626330  389930 retry.go:31] will retry after 374.534654ms: ssh: handshake failed: read tcp 192.168.39.1:33994->192.168.39.211:22: read: connection reset by peer
	I1030 18:22:35.627948  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.628263  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:35.628282  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.628425  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:35.628602  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:35.628727  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:35.628870  389930 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa Username:docker}
	I1030 18:22:35.957970  389930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1030 18:22:36.004755  389930 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1030 18:22:36.004784  389930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14451 bytes)
	I1030 18:22:36.027108  389930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1030 18:22:36.038102  389930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1030 18:22:36.040156  389930 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1030 18:22:36.040175  389930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1030 18:22:36.075024  389930 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 18:22:36.075077  389930 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
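	[editor note] The pipeline above rewrites the CoreDNS ConfigMap in place: it inserts a hosts block (mapping 192.168.39.1 to host.minikube.internal) ahead of the "forward . /etc/resolv.conf" directive and a "log" directive ahead of "errors", then feeds the result back through kubectl replace. A hedged sketch of what the patched Corefile stanza should contain, plus an in-cluster check; the surrounding directives vary by Kubernetes version, and the busybox pod below is purely illustrative:
	    # Expected shape of the patched Corefile (abridged):
	    #   .:53 {
	    #       log
	    #       errors
	    #       ...
	    #       hosts {
	    #          192.168.39.1 host.minikube.internal
	    #          fallthrough
	    #       }
	    #       forward . /etc/resolv.conf
	    #   }
	    kubectl --context addons-819803 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	    # Illustrative lookup from inside the cluster:
	    kubectl --context addons-819803 run dns-check --rm -i --restart=Never --image=busybox:1.36 -- nslookup host.minikube.internal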
	I1030 18:22:36.098385  389930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1030 18:22:36.100387  389930 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1030 18:22:36.100405  389930 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1030 18:22:36.122900  389930 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1030 18:22:36.122922  389930 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1030 18:22:36.151720  389930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1030 18:22:36.153900  389930 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1030 18:22:36.153928  389930 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1030 18:22:36.164545  389930 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1030 18:22:36.164566  389930 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1030 18:22:36.181052  389930 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1030 18:22:36.181075  389930 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1030 18:22:36.196331  389930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1030 18:22:36.202330  389930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 18:22:36.273990  389930 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1030 18:22:36.274016  389930 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1030 18:22:36.281266  389930 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1030 18:22:36.281285  389930 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1030 18:22:36.367552  389930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1030 18:22:36.376116  389930 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1030 18:22:36.376140  389930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1030 18:22:36.409552  389930 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1030 18:22:36.409585  389930 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1030 18:22:36.411121  389930 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1030 18:22:36.411141  389930 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1030 18:22:36.432216  389930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1030 18:22:36.504001  389930 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1030 18:22:36.504035  389930 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1030 18:22:36.569445  389930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1030 18:22:36.579732  389930 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1030 18:22:36.579758  389930 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1030 18:22:36.608843  389930 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1030 18:22:36.608877  389930 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1030 18:22:36.723100  389930 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1030 18:22:36.723138  389930 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1030 18:22:36.728098  389930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1030 18:22:36.763069  389930 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1030 18:22:36.763095  389930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1030 18:22:36.899820  389930 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1030 18:22:36.899858  389930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1030 18:22:36.901580  389930 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1030 18:22:36.901600  389930 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1030 18:22:37.025047  389930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1030 18:22:37.098218  389930 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1030 18:22:37.098272  389930 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1030 18:22:37.120310  389930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1030 18:22:37.442933  389930 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1030 18:22:37.442961  389930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1030 18:22:37.764559  389930 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1030 18:22:37.764596  389930 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1030 18:22:37.939580  389930 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.9815629s)
	I1030 18:22:37.939644  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:37.939654  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:37.940053  389930 main.go:141] libmachine: (addons-819803) DBG | Closing plugin on server side
	I1030 18:22:37.940101  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:37.940113  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:37.940185  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:37.940205  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:37.940632  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:37.940683  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:38.082795  389930 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1030 18:22:38.082828  389930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1030 18:22:38.379668  389930 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1030 18:22:38.379692  389930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1030 18:22:38.800575  389930 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1030 18:22:38.800609  389930 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1030 18:22:39.074620  389930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1030 18:22:39.168695  389930 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.141549585s)
	I1030 18:22:39.168762  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:39.168779  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:39.169155  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:39.169179  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:39.169191  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:39.169206  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:39.169476  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:39.169505  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:40.412843  389930 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.374694496s)
	I1030 18:22:40.412913  389930 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.337814371s)
	I1030 18:22:40.412852  389930 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.337789586s)
	I1030 18:22:40.412941  389930 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1030 18:22:40.412925  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:40.413013  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:40.413055  389930 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.314639099s)
	I1030 18:22:40.413077  389930 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.261328806s)
	I1030 18:22:40.413094  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:40.413115  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:40.413096  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:40.413205  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:40.413552  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:40.413569  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:40.413580  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:40.413579  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:40.413588  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:40.413589  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:40.413609  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:40.413615  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:40.413552  389930 main.go:141] libmachine: (addons-819803) DBG | Closing plugin on server side
	I1030 18:22:40.413672  389930 main.go:141] libmachine: (addons-819803) DBG | Closing plugin on server side
	I1030 18:22:40.413861  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:40.413873  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:40.414075  389930 node_ready.go:35] waiting up to 6m0s for node "addons-819803" to be "Ready" ...
	I1030 18:22:40.414111  389930 main.go:141] libmachine: (addons-819803) DBG | Closing plugin on server side
	I1030 18:22:40.414151  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:40.414158  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:40.414243  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:40.414252  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:40.414265  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:40.414272  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:40.414511  389930 main.go:141] libmachine: (addons-819803) DBG | Closing plugin on server side
	I1030 18:22:40.414556  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:40.414563  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:40.433985  389930 node_ready.go:49] node "addons-819803" has status "Ready":"True"
	I1030 18:22:40.434011  389930 node_ready.go:38] duration metric: took 19.91247ms for node "addons-819803" to be "Ready" ...
	I1030 18:22:40.434021  389930 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 18:22:40.467116  389930 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-sdqnr" in "kube-system" namespace to be "Ready" ...
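	[editor note] The node and pod readiness checks above are performed by polling the API from Go; expressed as equivalent kubectl commands (illustrative only, names taken from the log):
	    kubectl --context addons-819803 wait --for=condition=Ready node/addons-819803 --timeout=6m
	    kubectl --context addons-819803 -n kube-system wait --for=condition=Ready pod/amd-gpu-device-plugin-sdqnr --timeout=6m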
	I1030 18:22:40.496238  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:40.496261  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:40.496595  389930 main.go:141] libmachine: (addons-819803) DBG | Closing plugin on server side
	I1030 18:22:40.496650  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:40.496664  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:40.954473  389930 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-819803" context rescaled to 1 replicas
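	[editor note] The coredns rescale above is likewise done through the API; roughly the equivalent of:
	    kubectl --context addons-819803 -n kube-system scale deployment coredns --replicas=1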
	I1030 18:22:41.549413  389930 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.353028673s)
	I1030 18:22:41.549442  389930 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.347078515s)
	I1030 18:22:41.549468  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:41.549480  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:41.549491  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:41.549506  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:41.549850  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:41.549872  389930 main.go:141] libmachine: (addons-819803) DBG | Closing plugin on server side
	I1030 18:22:41.549872  389930 main.go:141] libmachine: (addons-819803) DBG | Closing plugin on server side
	I1030 18:22:41.549916  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:41.549931  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:41.549943  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:41.549972  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:41.549989  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:41.550013  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:41.550027  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:41.550245  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:41.550278  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:41.550294  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:41.550306  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:42.475696  389930 pod_ready.go:103] pod "amd-gpu-device-plugin-sdqnr" in "kube-system" namespace has status "Ready":"False"
	I1030 18:22:42.590643  389930 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1030 18:22:42.590685  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:42.593365  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:42.593708  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:42.593743  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:42.593870  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:42.594095  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:42.594274  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:42.594420  389930 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa Username:docker}
	I1030 18:22:43.072875  389930 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1030 18:22:43.252963  389930 addons.go:234] Setting addon gcp-auth=true in "addons-819803"
	I1030 18:22:43.253027  389930 host.go:66] Checking if "addons-819803" exists ...
	I1030 18:22:43.253346  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:43.253377  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:43.269120  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39909
	I1030 18:22:43.269658  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:43.270232  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:43.270252  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:43.270675  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:43.271270  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:43.271305  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:43.286031  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37087
	I1030 18:22:43.286520  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:43.286965  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:43.286986  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:43.287331  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:43.287518  389930 main.go:141] libmachine: (addons-819803) Calling .GetState
	I1030 18:22:43.288995  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:43.289223  389930 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1030 18:22:43.289251  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:43.291926  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:43.292328  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:43.292368  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:43.292563  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:43.292749  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:43.292900  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:43.293043  389930 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa Username:docker}
	I1030 18:22:44.518725  389930 pod_ready.go:103] pod "amd-gpu-device-plugin-sdqnr" in "kube-system" namespace has status "Ready":"False"
	I1030 18:22:44.580864  389930 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.21326567s)
	I1030 18:22:44.580933  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:44.580948  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:44.580878  389930 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.148631513s)
	I1030 18:22:44.580976  389930 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.011496062s)
	I1030 18:22:44.581012  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:44.581033  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:44.581032  389930 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.852909794s)
	I1030 18:22:44.581066  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:44.581012  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:44.581085  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:44.581096  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:44.581124  389930 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.556043232s)
	W1030 18:22:44.581160  389930 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1030 18:22:44.581184  389930 retry.go:31] will retry after 340.663709ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
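	[editor note] This is the usual CRD-establishment race: the VolumeSnapshotClass named csi-hostpath-snapclass is submitted in the same kubectl invocation as the CRDs that define its kind, and the API server has not yet registered snapshot.storage.k8s.io/v1 when the class is applied. minikube copes by retrying; the "kubectl apply --force" run at 18:22:44.921991 later in the log is that retry. A hedged sketch of an ordering that avoids the race, run on the node with the same KUBECONFIG as the commands in the log, using the file paths from the log and standard kubectl flags:
	    # 1. Apply the snapshot CRDs on their own.
	    kubectl apply \
	      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	    # 2. Wait for the new kinds to be registered.
	    kubectl wait --for=condition=Established --timeout=60s \
	      crd/volumesnapshotclasses.snapshot.storage.k8s.io \
	      crd/volumesnapshotcontents.snapshot.storage.k8s.io \
	      crd/volumesnapshots.snapshot.storage.k8s.io
	    # 3. Only then apply objects that use them.
	    kubectl apply \
	      -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
	      -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
	      -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml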
	I1030 18:22:44.581191  389930 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.460837207s)
	I1030 18:22:44.581225  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:44.581238  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:44.581364  389930 main.go:141] libmachine: (addons-819803) DBG | Closing plugin on server side
	I1030 18:22:44.581371  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:44.581384  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:44.581394  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:44.581404  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:44.581411  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:44.581420  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:44.581428  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:44.581487  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:44.581528  389930 main.go:141] libmachine: (addons-819803) DBG | Closing plugin on server side
	I1030 18:22:44.581550  389930 main.go:141] libmachine: (addons-819803) DBG | Closing plugin on server side
	I1030 18:22:44.581565  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:44.581577  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:44.581583  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:44.581586  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:44.581592  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:44.581600  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:44.581606  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:44.581611  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:44.581852  389930 main.go:141] libmachine: (addons-819803) DBG | Closing plugin on server side
	I1030 18:22:44.581876  389930 main.go:141] libmachine: (addons-819803) DBG | Closing plugin on server side
	I1030 18:22:44.581875  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:44.581887  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:44.581895  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:44.581898  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:44.581903  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:44.581904  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:44.583363  389930 main.go:141] libmachine: (addons-819803) DBG | Closing plugin on server side
	I1030 18:22:44.583397  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:44.583403  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:44.583533  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:44.583544  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:44.583555  389930 addons.go:475] Verifying addon metrics-server=true in "addons-819803"
	I1030 18:22:44.583594  389930 main.go:141] libmachine: (addons-819803) DBG | Closing plugin on server side
	I1030 18:22:44.583618  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:44.583624  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:44.583632  389930 addons.go:475] Verifying addon registry=true in "addons-819803"
	I1030 18:22:44.584036  389930 main.go:141] libmachine: (addons-819803) DBG | Closing plugin on server side
	I1030 18:22:44.584061  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:44.584072  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:44.584081  389930 addons.go:475] Verifying addon ingress=true in "addons-819803"
	I1030 18:22:44.585450  389930 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-819803 service yakd-dashboard -n yakd-dashboard
	
	I1030 18:22:44.586414  389930 out.go:177] * Verifying registry addon...
	I1030 18:22:44.586426  389930 out.go:177] * Verifying ingress addon...
	I1030 18:22:44.589069  389930 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1030 18:22:44.589144  389930 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1030 18:22:44.597152  389930 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1030 18:22:44.597170  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:44.598474  389930 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1030 18:22:44.598548  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:44.615580  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:44.615601  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:44.615905  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:44.615927  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:44.921991  389930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1030 18:22:45.114245  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:45.114680  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:45.163735  389930 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.089050407s)
	I1030 18:22:45.163812  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:45.163835  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:45.163755  389930 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.874504711s)
	I1030 18:22:45.164138  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:45.164155  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:45.164165  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:45.164172  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:45.164443  389930 main.go:141] libmachine: (addons-819803) DBG | Closing plugin on server side
	I1030 18:22:45.164479  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:45.164491  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:45.164508  389930 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-819803"
	I1030 18:22:45.166224  389930 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1030 18:22:45.166230  389930 out.go:177] * Verifying csi-hostpath-driver addon...
	I1030 18:22:45.167861  389930 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1030 18:22:45.168498  389930 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1030 18:22:45.169270  389930 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1030 18:22:45.169287  389930 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1030 18:22:45.203200  389930 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1030 18:22:45.203228  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:45.214184  389930 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1030 18:22:45.214221  389930 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1030 18:22:45.244889  389930 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1030 18:22:45.244916  389930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1030 18:22:45.270741  389930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
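Editor's note: the addons.go/ssh_runner.go lines above show the generic addon-install step: manifests are copied into /etc/kubernetes/addons/ and then applied with the bundled kubectl against the in-VM kubeconfig. The sketch below reproduces only the apply step and runs it locally for self-containment; in the real flow ssh_runner executes this inside the minikube VM, so treat the paths and the local exec as assumptions.

```go
// Sketch: apply a set of addon manifests with an explicit KUBECONFIG.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func applyAddonManifests(kubectl, kubeconfig string, manifests ...string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	err := applyAddonManifests(
		"/var/lib/minikube/binaries/v1.31.2/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/gcp-auth-ns.yaml",
		"/etc/kubernetes/addons/gcp-auth-service.yaml",
		"/etc/kubernetes/addons/gcp-auth-webhook.yaml",
	)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}
```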
	I1030 18:22:45.597581  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:45.597645  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:45.672682  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:46.094186  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:46.095345  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:46.195098  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:46.284240  389930 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.362174451s)
	I1030 18:22:46.284320  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:46.284339  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:46.284604  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:46.284624  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:46.284634  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:46.284642  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:46.284910  389930 main.go:141] libmachine: (addons-819803) DBG | Closing plugin on server side
	I1030 18:22:46.284957  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:46.284966  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:46.602691  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:46.602834  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:46.671342  389930 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.400556193s)
	I1030 18:22:46.671404  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:46.671422  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:46.671740  389930 main.go:141] libmachine: (addons-819803) DBG | Closing plugin on server side
	I1030 18:22:46.671808  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:46.671826  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:46.671834  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:46.671845  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:46.672099  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:46.672116  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:46.673028  389930 addons.go:475] Verifying addon gcp-auth=true in "addons-819803"
	I1030 18:22:46.674562  389930 out.go:177] * Verifying gcp-auth addon...
	I1030 18:22:46.676602  389930 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1030 18:22:46.697310  389930 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1030 18:22:46.697332  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:46.698624  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:46.973078  389930 pod_ready.go:103] pod "amd-gpu-device-plugin-sdqnr" in "kube-system" namespace has status "Ready":"False"
	I1030 18:22:47.098020  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:47.098098  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:47.199593  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:47.200466  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:47.593521  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:47.593757  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:47.673750  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:47.681010  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:48.093405  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:48.093691  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:48.174901  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:48.179374  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:48.595041  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:48.595096  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:48.673532  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:48.679975  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:48.974841  389930 pod_ready.go:103] pod "amd-gpu-device-plugin-sdqnr" in "kube-system" namespace has status "Ready":"False"
	I1030 18:22:49.094262  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:49.094992  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:49.173045  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:49.181121  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:49.597507  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:49.597602  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:49.673194  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:49.680081  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:50.094188  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:50.094680  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:50.172980  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:50.179849  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:50.594013  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:50.594618  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:50.672731  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:50.679741  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:51.095039  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:51.095526  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:51.172864  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:51.180106  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:51.473414  389930 pod_ready.go:103] pod "amd-gpu-device-plugin-sdqnr" in "kube-system" namespace has status "Ready":"False"
	I1030 18:22:51.593705  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:51.594022  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:51.674027  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:51.680209  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:52.094236  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:52.095131  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:52.173677  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:52.179981  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:52.593985  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:52.594446  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:52.673335  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:52.679568  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:53.093716  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:53.095090  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:53.173239  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:53.179905  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:53.594023  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:53.594730  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:53.694328  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:53.694947  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:53.973465  389930 pod_ready.go:103] pod "amd-gpu-device-plugin-sdqnr" in "kube-system" namespace has status "Ready":"False"
	I1030 18:22:54.094344  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:54.095162  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:54.173995  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:54.180540  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:54.593546  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:54.594046  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:54.673610  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:54.679415  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:55.093651  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:55.094909  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:55.173566  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:55.179242  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:55.594295  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:55.594676  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:55.673651  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:55.678982  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:55.973664  389930 pod_ready.go:103] pod "amd-gpu-device-plugin-sdqnr" in "kube-system" namespace has status "Ready":"False"
	I1030 18:22:56.093445  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:56.093483  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:56.173533  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:56.180737  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:56.593340  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:56.593576  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:56.676764  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:56.681704  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:57.093672  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:57.094163  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:57.174648  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:57.180289  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:57.593437  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:57.593501  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:57.693387  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:57.694620  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:57.972725  389930 pod_ready.go:93] pod "amd-gpu-device-plugin-sdqnr" in "kube-system" namespace has status "Ready":"True"
	I1030 18:22:57.972748  389930 pod_ready.go:82] duration metric: took 17.505602553s for pod "amd-gpu-device-plugin-sdqnr" in "kube-system" namespace to be "Ready" ...
	I1030 18:22:57.972759  389930 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-m6svs" in "kube-system" namespace to be "Ready" ...
	I1030 18:22:57.974286  389930 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-m6svs" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-m6svs" not found
	I1030 18:22:57.974306  389930 pod_ready.go:82] duration metric: took 1.541544ms for pod "coredns-7c65d6cfc9-m6svs" in "kube-system" namespace to be "Ready" ...
	E1030 18:22:57.974316  389930 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-m6svs" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-m6svs" not found
	I1030 18:22:57.974322  389930 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-r6bct" in "kube-system" namespace to be "Ready" ...
	I1030 18:22:57.978255  389930 pod_ready.go:93] pod "coredns-7c65d6cfc9-r6bct" in "kube-system" namespace has status "Ready":"True"
	I1030 18:22:57.978271  389930 pod_ready.go:82] duration metric: took 3.943929ms for pod "coredns-7c65d6cfc9-r6bct" in "kube-system" namespace to be "Ready" ...
	I1030 18:22:57.978280  389930 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-819803" in "kube-system" namespace to be "Ready" ...
	I1030 18:22:57.981937  389930 pod_ready.go:93] pod "etcd-addons-819803" in "kube-system" namespace has status "Ready":"True"
	I1030 18:22:57.981951  389930 pod_ready.go:82] duration metric: took 3.666223ms for pod "etcd-addons-819803" in "kube-system" namespace to be "Ready" ...
	I1030 18:22:57.981964  389930 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-819803" in "kube-system" namespace to be "Ready" ...
	I1030 18:22:57.986192  389930 pod_ready.go:93] pod "kube-apiserver-addons-819803" in "kube-system" namespace has status "Ready":"True"
	I1030 18:22:57.986209  389930 pod_ready.go:82] duration metric: took 4.239262ms for pod "kube-apiserver-addons-819803" in "kube-system" namespace to be "Ready" ...
	I1030 18:22:57.986217  389930 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-819803" in "kube-system" namespace to be "Ready" ...
	I1030 18:22:58.093895  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:58.094769  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:58.171398  389930 pod_ready.go:93] pod "kube-controller-manager-addons-819803" in "kube-system" namespace has status "Ready":"True"
	I1030 18:22:58.171422  389930 pod_ready.go:82] duration metric: took 185.199113ms for pod "kube-controller-manager-addons-819803" in "kube-system" namespace to be "Ready" ...
	I1030 18:22:58.171436  389930 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-h64nt" in "kube-system" namespace to be "Ready" ...
	I1030 18:22:58.173620  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:58.178990  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:58.571482  389930 pod_ready.go:93] pod "kube-proxy-h64nt" in "kube-system" namespace has status "Ready":"True"
	I1030 18:22:58.571506  389930 pod_ready.go:82] duration metric: took 400.064383ms for pod "kube-proxy-h64nt" in "kube-system" namespace to be "Ready" ...
	I1030 18:22:58.571517  389930 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-819803" in "kube-system" namespace to be "Ready" ...
	I1030 18:22:58.592738  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:58.593069  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:58.674050  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:58.679466  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:58.972198  389930 pod_ready.go:93] pod "kube-scheduler-addons-819803" in "kube-system" namespace has status "Ready":"True"
	I1030 18:22:58.972222  389930 pod_ready.go:82] duration metric: took 400.698693ms for pod "kube-scheduler-addons-819803" in "kube-system" namespace to be "Ready" ...
	I1030 18:22:58.972236  389930 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-s2tw8" in "kube-system" namespace to be "Ready" ...
	I1030 18:22:59.093501  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:59.093937  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:59.172423  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:59.180556  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:59.594124  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:59.594585  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:59.695702  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:59.696372  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:00.093673  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:00.094093  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:00.173252  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:00.179707  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:00.593534  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:00.593828  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:00.673787  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:00.679058  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:00.978609  389930 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-s2tw8" in "kube-system" namespace has status "Ready":"False"
	I1030 18:23:01.095181  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:01.095575  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:01.173367  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:01.180152  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:01.594323  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:01.594395  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:01.695847  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:01.697078  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:02.093975  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:02.094505  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:02.172803  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:02.179625  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:02.594454  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:02.594618  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:02.673194  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:02.680227  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:02.979206  389930 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-s2tw8" in "kube-system" namespace has status "Ready":"False"
	I1030 18:23:03.095922  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:03.095931  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:03.173768  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:03.179891  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:03.594843  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:03.594993  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:03.676821  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:03.679765  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:04.093480  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:04.094441  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:04.174262  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:04.180169  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:04.595013  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:04.595100  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:04.674177  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:04.680717  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:05.094402  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:05.095041  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:05.176531  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:05.186931  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:05.478710  389930 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-s2tw8" in "kube-system" namespace has status "Ready":"False"
	I1030 18:23:05.594673  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:05.594930  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:05.673122  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:05.679900  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:06.095226  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:06.095919  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:06.224367  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:06.225361  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:06.593285  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:06.593812  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:06.673002  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:06.679760  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:07.093233  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:07.094285  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:07.173113  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:07.179996  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:07.478831  389930 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-s2tw8" in "kube-system" namespace has status "Ready":"False"
	I1030 18:23:07.594434  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:07.594900  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:07.672286  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:07.680041  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:08.193176  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:08.193533  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:08.194574  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:08.194671  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:08.594229  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:08.594244  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:08.672581  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:08.679202  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:09.094414  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:09.095185  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:09.173933  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:09.179490  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:09.480037  389930 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-s2tw8" in "kube-system" namespace has status "Ready":"False"
	I1030 18:23:09.594951  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:09.595291  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:09.695737  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:09.696847  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:10.093847  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:10.094166  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:10.173051  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:10.180135  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:10.479141  389930 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-s2tw8" in "kube-system" namespace has status "Ready":"True"
	I1030 18:23:10.479172  389930 pod_ready.go:82] duration metric: took 11.506928864s for pod "nvidia-device-plugin-daemonset-s2tw8" in "kube-system" namespace to be "Ready" ...
	I1030 18:23:10.479191  389930 pod_ready.go:39] duration metric: took 30.045153099s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 18:23:10.479212  389930 api_server.go:52] waiting for apiserver process to appear ...
	I1030 18:23:10.479275  389930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 18:23:10.500897  389930 api_server.go:72] duration metric: took 35.066550493s to wait for apiserver process to appear ...
	I1030 18:23:10.500933  389930 api_server.go:88] waiting for apiserver healthz status ...
	I1030 18:23:10.500956  389930 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1030 18:23:10.505343  389930 api_server.go:279] https://192.168.39.211:8443/healthz returned 200:
	ok
	I1030 18:23:10.506391  389930 api_server.go:141] control plane version: v1.31.2
	I1030 18:23:10.506419  389930 api_server.go:131] duration metric: took 5.478536ms to wait for apiserver health ...
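Editor's note: the api_server.go lines above check apiserver health by fetching /healthz and expecting HTTP 200 with body "ok". The sketch below shows that probe shape only; the address is taken from the log, and skipping TLS verification is an assumption made for brevity, where a real client would present the cluster CA and client certificates.

```go
// Sketch: probe the apiserver /healthz endpoint and report healthy on 200/"ok".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func apiserverHealthy(url string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption: no CA verification in this sketch.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := apiserverHealthy("https://192.168.39.211:8443/healthz")
	fmt.Println(ok, err)
}
```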
	I1030 18:23:10.506429  389930 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 18:23:10.514344  389930 system_pods.go:59] 18 kube-system pods found
	I1030 18:23:10.514372  389930 system_pods.go:61] "amd-gpu-device-plugin-sdqnr" [087eef61-5115-41c9-aa53-29d2c8c23625] Running
	I1030 18:23:10.514378  389930 system_pods.go:61] "coredns-7c65d6cfc9-r6bct" [a1dd60ad-70bb-40a7-8dfb-d6b9f0ab48ee] Running
	I1030 18:23:10.514384  389930 system_pods.go:61] "csi-hostpath-attacher-0" [603a5497-a36a-4123-ad83-8159ef7c6494] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1030 18:23:10.514390  389930 system_pods.go:61] "csi-hostpath-resizer-0" [042a6627-5f58-4a7c-8adc-393f4a23de62] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1030 18:23:10.514398  389930 system_pods.go:61] "csi-hostpathplugin-vswkz" [122041b3-674e-42ec-a5a8-ec4a2f43cbdf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1030 18:23:10.514403  389930 system_pods.go:61] "etcd-addons-819803" [a155caea-481f-4200-8f06-77f2a36ed538] Running
	I1030 18:23:10.514407  389930 system_pods.go:61] "kube-apiserver-addons-819803" [c29acd73-ad14-4526-a8fa-53918e19264d] Running
	I1030 18:23:10.514412  389930 system_pods.go:61] "kube-controller-manager-addons-819803" [9a0525de-668d-41e1-91ba-16e3318e81e3] Running
	I1030 18:23:10.514416  389930 system_pods.go:61] "kube-ingress-dns-minikube" [a73fe2e4-a20e-4734-85d4-3da77152e4a1] Running
	I1030 18:23:10.514420  389930 system_pods.go:61] "kube-proxy-h64nt" [6f813bf3-f5de-4af3-87eb-4a429a334e7f] Running
	I1030 18:23:10.514425  389930 system_pods.go:61] "kube-scheduler-addons-819803" [3e0b4b8d-2392-4cc4-8c7d-b8a4f22749ca] Running
	I1030 18:23:10.514430  389930 system_pods.go:61] "metrics-server-84c5f94fbc-trqq2" [07b3ba0b-ec2c-4a8f-83d7-318abeb6f80a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 18:23:10.514434  389930 system_pods.go:61] "nvidia-device-plugin-daemonset-s2tw8" [9aca0151-3bc1-4504-b8ba-0e3d70a68fba] Running
	I1030 18:23:10.514439  389930 system_pods.go:61] "registry-66c9cd494c-lwc9j" [ac1aec3e-8d69-4d98-875c-68c50389cf77] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1030 18:23:10.514446  389930 system_pods.go:61] "registry-proxy-lhldq" [9edc008f-8004-45b8-a42f-897dcda09957] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1030 18:23:10.514453  389930 system_pods.go:61] "snapshot-controller-56fcc65765-4f2mt" [4ef57b7b-170b-4404-8af9-36d355a9be09] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1030 18:23:10.514458  389930 system_pods.go:61] "snapshot-controller-56fcc65765-k4fwb" [c0ffdb47-736c-4a9f-a9b6-d99bf84b26cd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1030 18:23:10.514466  389930 system_pods.go:61] "storage-provisioner" [38cd898a-c7f7-48d4-ac94-2b6cd4fd4b5f] Running
	I1030 18:23:10.514471  389930 system_pods.go:74] duration metric: took 8.035436ms to wait for pod list to return data ...
	I1030 18:23:10.514479  389930 default_sa.go:34] waiting for default service account to be created ...
	I1030 18:23:10.516936  389930 default_sa.go:45] found service account: "default"
	I1030 18:23:10.516955  389930 default_sa.go:55] duration metric: took 2.468748ms for default service account to be created ...
	I1030 18:23:10.516966  389930 system_pods.go:116] waiting for k8s-apps to be running ...
	I1030 18:23:10.524291  389930 system_pods.go:86] 18 kube-system pods found
	I1030 18:23:10.524315  389930 system_pods.go:89] "amd-gpu-device-plugin-sdqnr" [087eef61-5115-41c9-aa53-29d2c8c23625] Running
	I1030 18:23:10.524321  389930 system_pods.go:89] "coredns-7c65d6cfc9-r6bct" [a1dd60ad-70bb-40a7-8dfb-d6b9f0ab48ee] Running
	I1030 18:23:10.524328  389930 system_pods.go:89] "csi-hostpath-attacher-0" [603a5497-a36a-4123-ad83-8159ef7c6494] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1030 18:23:10.524335  389930 system_pods.go:89] "csi-hostpath-resizer-0" [042a6627-5f58-4a7c-8adc-393f4a23de62] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1030 18:23:10.524342  389930 system_pods.go:89] "csi-hostpathplugin-vswkz" [122041b3-674e-42ec-a5a8-ec4a2f43cbdf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1030 18:23:10.524348  389930 system_pods.go:89] "etcd-addons-819803" [a155caea-481f-4200-8f06-77f2a36ed538] Running
	I1030 18:23:10.524355  389930 system_pods.go:89] "kube-apiserver-addons-819803" [c29acd73-ad14-4526-a8fa-53918e19264d] Running
	I1030 18:23:10.524358  389930 system_pods.go:89] "kube-controller-manager-addons-819803" [9a0525de-668d-41e1-91ba-16e3318e81e3] Running
	I1030 18:23:10.524365  389930 system_pods.go:89] "kube-ingress-dns-minikube" [a73fe2e4-a20e-4734-85d4-3da77152e4a1] Running
	I1030 18:23:10.524368  389930 system_pods.go:89] "kube-proxy-h64nt" [6f813bf3-f5de-4af3-87eb-4a429a334e7f] Running
	I1030 18:23:10.524374  389930 system_pods.go:89] "kube-scheduler-addons-819803" [3e0b4b8d-2392-4cc4-8c7d-b8a4f22749ca] Running
	I1030 18:23:10.524379  389930 system_pods.go:89] "metrics-server-84c5f94fbc-trqq2" [07b3ba0b-ec2c-4a8f-83d7-318abeb6f80a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 18:23:10.524386  389930 system_pods.go:89] "nvidia-device-plugin-daemonset-s2tw8" [9aca0151-3bc1-4504-b8ba-0e3d70a68fba] Running
	I1030 18:23:10.524391  389930 system_pods.go:89] "registry-66c9cd494c-lwc9j" [ac1aec3e-8d69-4d98-875c-68c50389cf77] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1030 18:23:10.524395  389930 system_pods.go:89] "registry-proxy-lhldq" [9edc008f-8004-45b8-a42f-897dcda09957] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1030 18:23:10.524404  389930 system_pods.go:89] "snapshot-controller-56fcc65765-4f2mt" [4ef57b7b-170b-4404-8af9-36d355a9be09] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1030 18:23:10.524412  389930 system_pods.go:89] "snapshot-controller-56fcc65765-k4fwb" [c0ffdb47-736c-4a9f-a9b6-d99bf84b26cd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1030 18:23:10.524416  389930 system_pods.go:89] "storage-provisioner" [38cd898a-c7f7-48d4-ac94-2b6cd4fd4b5f] Running
	I1030 18:23:10.524422  389930 system_pods.go:126] duration metric: took 7.450347ms to wait for k8s-apps to be running ...
	I1030 18:23:10.524430  389930 system_svc.go:44] waiting for kubelet service to be running ....
	I1030 18:23:10.524471  389930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 18:23:10.539221  389930 system_svc.go:56] duration metric: took 14.783961ms WaitForService to wait for kubelet
	I1030 18:23:10.539245  389930 kubeadm.go:582] duration metric: took 35.104907783s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 18:23:10.539264  389930 node_conditions.go:102] verifying NodePressure condition ...
	I1030 18:23:10.542297  389930 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 18:23:10.542317  389930 node_conditions.go:123] node cpu capacity is 2
	I1030 18:23:10.542330  389930 node_conditions.go:105] duration metric: took 3.061438ms to run NodePressure ...
	I1030 18:23:10.542341  389930 start.go:241] waiting for startup goroutines ...
	I1030 18:23:10.593962  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:10.594337  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:10.673558  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:10.680530  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:11.093537  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:11.094028  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:11.173642  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:11.179449  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:11.593810  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:11.594016  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:11.673368  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:11.680170  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:12.093882  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:12.094051  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:12.173092  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:12.180291  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:12.593875  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:12.594188  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:12.674239  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:12.680127  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:13.093184  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:13.093962  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:13.173953  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:13.179889  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:13.593935  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:13.594480  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:13.674127  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:13.680152  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:14.093083  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:14.093521  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:14.173074  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:14.179979  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:14.594010  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:14.594615  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:14.673022  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:14.679841  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:15.094557  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:15.094790  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:15.173111  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:15.180044  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:15.592703  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:15.593241  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:15.673011  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:15.679543  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:16.093078  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:16.094033  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:16.174034  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:16.180058  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:16.595014  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:16.595425  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:16.673998  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:16.680962  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:17.093712  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:17.094511  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:17.173552  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:17.180520  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:18.099441  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:18.099510  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:18.099888  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:18.099960  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:18.106021  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:18.110275  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:18.173316  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:18.183712  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:18.594754  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:18.595417  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:18.673524  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:18.680456  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:19.094718  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:19.095083  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:19.173692  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:19.179664  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:19.594307  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:19.594686  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:19.673086  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:19.679610  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:20.094697  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:20.094978  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:20.174218  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:20.179700  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:20.593387  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:20.593879  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:20.673194  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:20.679899  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:21.093960  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:21.094078  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:21.173014  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:21.179884  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:21.593870  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:21.594257  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:21.672694  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:21.679347  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:22.094706  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:22.094796  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:22.173211  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:22.179903  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:22.594472  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:22.594806  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:22.673896  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:22.679722  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:23.094632  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:23.094700  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:23.173935  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:23.181858  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:23.594146  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:23.594293  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:23.673429  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:23.685058  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:24.094680  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:24.094690  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:24.174012  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:24.179934  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:24.594408  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:24.595024  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:24.673238  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:24.680203  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:25.093348  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:25.094470  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:25.173322  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:25.181179  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:25.594115  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:25.594938  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:25.673583  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:25.679422  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:26.094560  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:26.094673  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:26.173810  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:26.180276  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:26.593811  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:26.594073  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:26.676332  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:26.680034  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:27.093005  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:27.093065  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:27.174815  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:27.179262  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:27.593547  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:27.593968  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:27.674025  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:27.679142  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:28.093957  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:28.094060  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:28.172651  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:28.179317  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:28.593503  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:28.594792  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:28.673412  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:28.680237  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:29.093876  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:29.094309  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:29.173178  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:29.179854  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:29.594521  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:29.594631  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:29.673591  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:29.680051  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:30.093537  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:30.094642  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:30.173318  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:30.180489  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:30.596012  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:30.598053  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:30.673804  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:30.679650  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:31.093925  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:31.094284  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:31.172722  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:31.179406  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:31.594413  389930 kapi.go:107] duration metric: took 47.005339132s to wait for kubernetes.io/minikube-addons=registry ...
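	(The kapi.go lines in this log are minikube repeatedly listing pods by label selector and logging their phase until each addon pod leaves Pending; the registry selector above, for example, took roughly 47s. A minimal Go sketch of that style of poll, using client-go against the default kubeconfig — the kube-system namespace and 500ms interval are illustrative assumptions, not minikube's actual settings:

	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the default kubeconfig (~/.kube/config); assumes the minikube context is active.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Poll pods matching one addon label selector until the first one reports Running.
		// Namespace and sleep interval are assumptions for illustration only.
		selector := "kubernetes.io/minikube-addons=registry"
		for {
			pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				panic(err)
			}
			if len(pods.Items) > 0 && pods.Items[0].Status.Phase == "Running" {
				fmt.Printf("pod %q is Running\n", selector)
				return
			}
			fmt.Printf("waiting for pod %q, still not Running\n", selector)
			time.Sleep(500 * time.Millisecond)
		}
	}

	In this failing run, the equivalent loops for the remaining selectors keep reporting Pending, which is what the rest of the log below shows.)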
	I1030 18:23:31.594473  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:31.673224  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:31.680242  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:32.093337  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:32.194782  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:32.195545  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:32.594498  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:32.673210  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:32.680467  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:33.095585  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:33.175791  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:33.181132  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:33.593373  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:33.693502  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:33.694815  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:34.093428  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:34.173603  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:34.179147  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:34.593421  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:34.673068  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:34.679760  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:35.093607  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:35.173378  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:35.180673  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:35.594407  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:35.672786  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:35.679448  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:36.093916  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:36.173812  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:36.179204  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:36.593558  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:36.673865  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:36.679993  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:37.094100  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:37.173398  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:37.180731  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:37.593635  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:37.673661  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:37.679363  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:38.093449  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:38.172889  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:38.180199  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:38.593670  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:38.673629  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:38.679494  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:39.093793  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:39.173529  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:39.179204  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:39.594358  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:39.673159  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:39.679968  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:40.094909  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:40.173170  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:40.180006  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:40.594746  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:40.673068  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:40.680448  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:41.093633  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:41.173200  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:41.180095  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:41.594547  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:41.673348  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:41.679788  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:42.094533  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:42.173027  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:42.179272  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:42.593664  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:42.673537  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:42.679754  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:43.094375  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:43.173615  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:43.180138  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:43.593449  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:43.673179  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:43.680290  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:44.093485  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:44.173511  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:44.180181  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:44.593750  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:44.675005  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:44.679600  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:45.093955  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:45.173399  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:45.180056  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:45.594095  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:45.673746  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:45.680137  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:46.092974  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:46.173700  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:46.179007  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:46.594855  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:46.673615  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:46.679637  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:47.094229  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:47.172665  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:47.179213  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:47.593129  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:47.673754  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:47.679316  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:48.093801  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:48.173501  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:48.178955  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:48.594585  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:48.673118  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:48.679815  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:49.094235  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:49.172965  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:49.179493  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:49.593729  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:49.673385  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:49.680777  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:50.094291  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:50.177722  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:50.179989  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:50.593948  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:50.694543  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:50.694794  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:51.093603  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:51.173472  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:51.180912  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:51.593913  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:51.673595  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:51.679164  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:52.093259  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:52.173043  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:52.179784  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:52.594570  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:52.673014  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:52.679950  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:53.094306  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:53.173035  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:53.179606  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:53.594125  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:53.673506  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:53.680061  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:54.094386  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:54.173381  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:54.180231  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:54.593300  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:54.672567  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:54.679256  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:55.093827  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:55.173733  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:55.179275  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:55.595288  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:55.674206  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:55.679456  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:56.093919  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:56.173328  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:56.180299  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:56.593461  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:56.672852  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:56.679756  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:57.093681  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:57.173620  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:57.179424  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:57.593912  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:57.673333  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:57.680332  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:58.093818  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:58.173394  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:58.180060  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:58.594281  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:58.672892  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:58.680303  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:59.093698  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:59.173288  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:59.179786  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:59.594131  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:59.673768  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:59.679758  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:00.094467  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:00.173272  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:00.179894  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:00.594040  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:00.673583  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:00.679341  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:01.094018  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:01.173467  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:01.180650  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:01.593821  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:01.673365  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:01.680011  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:02.094158  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:02.174149  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:02.180153  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:02.595432  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:02.695573  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:02.696244  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:03.094451  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:03.174737  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:03.179451  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:03.593866  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:03.674054  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:03.679827  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:04.094115  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:04.173478  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:04.180198  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:04.594090  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:04.673593  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:04.679250  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:05.094353  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:05.172803  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:05.179943  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:05.594096  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:05.673750  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:05.679746  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:06.093972  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:06.173843  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:06.179529  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:06.615729  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:06.673604  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:06.680702  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:07.093812  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:07.173599  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:07.179080  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:07.593404  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:07.673624  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:07.679350  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:08.093326  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:08.173387  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:08.179579  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:08.593688  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:08.673427  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:08.680200  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:09.093636  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:09.173487  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:09.180265  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:09.595037  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:09.673727  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:09.679754  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:10.094006  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:10.173570  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:10.179688  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:10.597969  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:10.676156  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:10.679352  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:11.094048  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:11.173259  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:11.180268  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:11.594513  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:11.673567  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:11.679065  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:12.094203  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:12.172468  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:12.180598  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:12.593626  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:12.673184  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:12.680109  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:13.094536  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:13.173177  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:13.179982  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:13.600465  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:13.674835  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:13.679519  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:14.094296  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:14.173514  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:14.180031  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:14.594409  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:14.674172  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:14.680001  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:15.094774  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:15.173547  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:15.179313  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:15.594436  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:15.677184  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:15.679534  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:16.094057  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:16.174052  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:16.179495  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:16.593755  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:16.673865  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:16.687367  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:17.093528  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:17.173218  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:17.180624  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:17.593975  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:17.673784  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:17.679472  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:18.093220  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:18.173453  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:18.180027  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:18.593267  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:18.685582  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:18.688347  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:19.093356  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:19.173107  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:19.180973  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:19.594839  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:19.673958  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:19.680651  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:20.093927  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:20.173540  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:20.180235  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:20.594184  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:20.673106  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:20.679587  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:21.272035  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:21.272988  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:21.273121  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:21.594877  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:21.673161  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:21.679202  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:22.093995  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:22.192896  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:22.193837  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:22.593187  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:22.672429  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:22.680247  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:23.093955  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:23.173117  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:23.179885  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:23.594077  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:23.673424  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:23.680603  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:24.093990  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:24.195378  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:24.195749  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:24.592999  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:24.673558  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:24.680121  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:25.094313  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:25.173515  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:25.179235  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:25.594548  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:25.673169  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:25.679942  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:26.095069  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:26.173651  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:26.179433  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:26.593173  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:26.674557  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:26.680365  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:27.093953  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:27.194133  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:27.194966  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:27.594461  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:27.672907  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:27.680225  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:28.093262  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:28.172549  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:28.179213  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:28.593588  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:28.673421  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:28.680745  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:29.094332  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:29.195356  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:29.196302  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:29.595102  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:29.673704  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:29.680491  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:30.093968  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:30.173164  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:30.180253  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:30.593982  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:30.673433  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:30.679530  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:31.097174  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:31.194294  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:31.195636  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:31.593982  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:31.694498  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:31.695342  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:32.096092  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:32.180338  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:32.258079  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:32.594665  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:32.673118  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:32.680749  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:33.094653  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:33.173187  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:33.180235  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:33.593028  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:33.673849  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:33.679745  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:34.093717  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:34.193799  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:34.195148  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:34.599179  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:34.696446  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:34.697679  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:35.097007  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:35.196951  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:35.198118  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:35.594192  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:35.673067  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:35.680274  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:36.093565  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:36.173639  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:36.179881  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:36.881593  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:36.881990  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:36.882423  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:37.098958  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:37.195958  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:37.196949  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:37.595421  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:37.674497  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:37.680037  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:38.094458  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:38.173304  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:38.179739  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:38.594120  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:38.676073  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:38.679864  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:39.094825  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:39.194139  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:39.195523  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:39.595192  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:39.672897  389930 kapi.go:107] duration metric: took 1m54.504397359s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1030 18:24:39.679388  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:40.094319  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:40.179995  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:40.594718  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:40.680403  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:41.095668  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:41.180189  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:41.594599  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:41.679963  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:42.094589  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:42.180100  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:42.593185  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:42.680714  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:43.094711  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:43.180978  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:43.594763  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:43.680837  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:44.094677  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:44.181181  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:44.593310  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:44.681202  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:45.093951  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:45.180476  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:45.594081  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:45.680811  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:46.094359  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:46.180579  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:46.593875  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:46.680975  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:47.094454  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:47.179798  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:47.594397  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:47.680406  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:48.093429  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:48.180873  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:48.594804  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:48.680699  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:49.094275  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:49.194240  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:49.593968  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:49.680637  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:50.095297  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:50.180995  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:50.593739  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:50.680477  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:51.093929  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:51.180674  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:51.593761  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:51.680046  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:52.093438  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:52.180854  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:52.594874  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:52.680910  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:53.094637  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:53.179699  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:53.593709  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:53.680243  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:54.093878  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:54.193344  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:54.594089  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:54.680748  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:55.094444  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:55.180010  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:55.594584  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:55.680697  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:56.094356  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:56.180240  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:56.594302  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:56.680834  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:57.094655  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:57.180457  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:57.593983  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:57.681035  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:58.094540  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:58.180966  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:58.594113  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:58.680890  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:59.094284  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:59.193734  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:59.594460  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:59.680225  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:00.096127  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:25:00.180877  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:00.594146  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:25:00.680164  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:01.093394  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:25:01.181049  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:01.594128  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:25:01.680709  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:02.094832  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:25:02.180069  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:02.593917  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:25:02.685615  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:03.095000  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:25:03.180249  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:03.593404  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:25:03.680224  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:04.124310  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:25:04.223096  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:04.594530  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:25:04.680995  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:05.096377  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:25:05.195014  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:05.594615  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:25:05.682827  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:06.094286  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:25:06.180987  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:06.594203  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:25:06.681056  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:07.093925  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:25:07.181643  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:07.594263  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:25:07.680837  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:08.094185  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:25:08.180684  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:08.594282  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:25:08.681430  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:09.094048  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:25:09.180799  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:09.593969  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:25:09.680179  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:10.093325  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:25:10.193770  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:10.593981  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:25:10.680548  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:11.094538  389930 kapi.go:107] duration metric: took 2m26.505393433s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1030 18:25:11.194099  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:11.680740  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:12.180310  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:12.680788  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:13.181506  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:13.680790  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:14.180667  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:14.681033  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:15.192632  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:15.680321  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:16.181575  389930 kapi.go:107] duration metric: took 2m29.50496643s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1030 18:25:16.183393  389930 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-819803 cluster.
	I1030 18:25:16.184880  389930 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1030 18:25:16.186226  389930 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1030 18:25:16.187921  389930 out.go:177] * Enabled addons: amd-gpu-device-plugin, cloud-spanner, ingress-dns, nvidia-device-plugin, storage-provisioner-rancher, inspektor-gadget, storage-provisioner, metrics-server, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1030 18:25:16.189134  389930 addons.go:510] duration metric: took 2m40.754794981s for enable addons: enabled=[amd-gpu-device-plugin cloud-spanner ingress-dns nvidia-device-plugin storage-provisioner-rancher inspektor-gadget storage-provisioner metrics-server yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1030 18:25:16.189188  389930 start.go:246] waiting for cluster config update ...
	I1030 18:25:16.189208  389930 start.go:255] writing updated cluster config ...
	I1030 18:25:16.189476  389930 ssh_runner.go:195] Run: rm -f paused
	I1030 18:25:16.241409  389930 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1030 18:25:16.243157  389930 out.go:177] * Done! kubectl is now configured to use "addons-819803" cluster and "default" namespace by default
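Note on the gcp-auth hint above: the addon output says pods can opt out of credential injection by carrying a label with the `gcp-auth-skip-secret` key. As a minimal illustrative sketch only (the pod and container names and the image are hypothetical, and the "true" value is an assumption; the log message only requires the key to be present), such a pod manifest could look like:

    apiVersion: v1
    kind: Pod
    metadata:
      name: example-no-gcp-auth        # hypothetical name, for illustration only
      labels:
        gcp-auth-skip-secret: "true"   # label key taken from the addon message; value assumed
    spec:
      containers:
      - name: app                      # hypothetical container name
        image: nginx                   # placeholder image

Per the addon output above, pods created without this label in the addons-819803 cluster receive the mounted GCP credentials, and pods that already existed when the addon was enabled pick them up only after being recreated or after rerunning addons enable with --refresh.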
	
	
	==> CRI-O <==
	Oct 30 18:28:22 addons-819803 crio[666]: time="2024-10-30 18:28:22.349191660Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c54d098f-b4e3-402f-be70-08a378e82eeb name=/runtime.v1.RuntimeService/Version
	Oct 30 18:28:22 addons-819803 crio[666]: time="2024-10-30 18:28:22.350339728Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f7864900-079a-4905-b3c0-a7e16e8fd579 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 18:28:22 addons-819803 crio[666]: time="2024-10-30 18:28:22.351483409Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730312902351458523,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587567,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f7864900-079a-4905-b3c0-a7e16e8fd579 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 18:28:22 addons-819803 crio[666]: time="2024-10-30 18:28:22.352018196Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=be74c2d1-cfba-4f53-bb9d-22181d75eb5b name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:28:22 addons-819803 crio[666]: time="2024-10-30 18:28:22.352071172Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=be74c2d1-cfba-4f53-bb9d-22181d75eb5b name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:28:22 addons-819803 crio[666]: time="2024-10-30 18:28:22.352457572Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8323c4d9fbaf0c66571632dba8d4881c87cfefd2ec57ca7062f064f81e7c5893,PodSandboxId:dd5e3d5f78cee776401f36c28bfe601ceed4bd2dddaa8d81f49644a6d19a27e4,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1730312762268600451,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 591e33e1-09d2-4f5f-a6aa-40bfd9a7ced3,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23b8c4da959667d294cba6436afb554b2174d4cc88063b285154eb5aa317466c,PodSandboxId:d6196190a1f0ed3cb0af52236e10c7ac2a9cd7fe17c789fc09c79df4406ed58a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730312724638127727,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b594df1b-adba-4e23-93cc-29d66c8cf9f1,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edbc230a9808db0529a97b23e10c791392f91e487052a16d1c9e011d18a68001,PodSandboxId:a747ea76a702d8b50af79c41a99ca8a9843ac6b929585d92b5428ce353a1e599,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1730312709804420617,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-ldznj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: dd4ea8d5-53b2-4184-9e06-e4a2b2ed1cb7,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:d8d39cfc9d9aecb7252739cf88a47668fe8a6e11d213f3e8f7a90cbc5b0a4b85,PodSandboxId:f0a461d3cd71eb98827de74994629ec64ce78f756262274be838fef4937482b0,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1730312670734611321,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-5tqhz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: cfda448f-fe2b-4686-91af-11fa015db368,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e960fa4437c475ad809c1b495b3c949cee58f6f50fad4ad9fb0f740bd66ca3a1,PodSandboxId:7eae22cf38b34ec91abffe9acd746ee050c256e36c4357cb6eb39f104466fc65,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1730312670605126069,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-hqldd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2577aa35-0151-41fd-b12b-ea7800bbba00,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e337d053d97a92d039405648417e9d4381847bcaf89593fd3271116bd08fa96c,PodSandboxId:634b9431071e538393779d3162cb51cac0c34513572937eeaab2ab49f637d1c3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1730312612750520239,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-trqq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b3ba0b-ec2c-4a8f-83d7-318abeb6f80a,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a923e1439650545e8eba44270ae4fc134b8069b8213e6fec6099edb1af4914b4,PodSandboxId:6e3d73e7493a957e845bbbec6311138324ff04d85de9bd900eff6182edd40c79,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Imag
e:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1730312577398411575,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-sdqnr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 087eef61-5115-41c9-aa53-29d2c8c23625,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41635f5b9f936adf90ce97863e90036509efe92e28230106de83872fcb52cd14,PodSandboxId:b5ed56b9d2636e6c5abbfc8a528af47128abe02ccf60a31b2f7329c7a9dbdddc,Metadata:&ContainerMetadata{Nam
e:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1730312573341424176,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a73fe2e4-a20e-4734-85d4-3da77152e4a1,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f46b0ca80854c9d1a66f9fca2789c69b6e2a
cd673897114ae279a484bcf1a86,PodSandboxId:c0244d7740a711982ed21498d613463d98fdd1ad118cd09cf6e5666e490c2b1e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730312562269541651,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38cd898a-c7f7-48d4-ac94-2b6cd4fd4b5f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d67f4b1b6f0d22044a227762b0d881ed07ca311ea1af4818d
97727e215999fa8,PodSandboxId:b5c24c6a457fb9a3566a63c75a2961635db9ce7e1511b05d82052e217c3565e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730312559683223270,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r6bct,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1dd60ad-70bb-40a7-8dfb-d6b9f0ab48ee,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aabc7e519d19dd85ae69ce724ad72cbeb649f9379e4c3ee94f07ff46313ec53,PodSandboxId:95ba2e67708c3bfd5dba3bf078266b1a35696ba0ac9f1695aaa20c561b078bcf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730312556523475889,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h64nt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f813bf3-f5de-4af3-87eb-4a429a334e7f,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e70c279e30dccf88f1158aa09582f2908e24606a3cd26e75ebda3f31afe708cb,PodSandboxId:3432d90eee1bf775fe75447be1dcfdcf466dd37469590155fc8cc5a0ac612c62,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730312545004348988,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-819803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e30a9531527f91cac7d80543a7c67b8b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:430e9b4f16ec17dd1e0c8f6f34aad5ddf7b5bb9c5ede50c4087533e9327fa8ac,PodSandboxId:638afc3dd036198e27a7af48810f35f6d215db95594ed6c687678d371c16be0c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730312544993211192,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-819803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e51bdee3c48682a1d27375eee86f91f4,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.
pod.terminationGracePeriod: 30,},},&Container{Id:3d74745cb948287a4c2cf27f2ba2eb0b19c2c042b7ce81a8795557cefc4bd271,PodSandboxId:4c3f08b7bad80293feeec80a225c62a7fea5a8d1d060f4ed0ce52f84d2d50627,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730312544962298197,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-819803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59aa5f01b68c2947293545aebe8e4550,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:805059b66c5778a497faa6df32264b173a98ee6caeb96db32e0b293cab94ae3b,PodSandboxId:51945c273e8d3f5c6cc9d342f266df358e56642fe17537ba73b84ad754de188a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730312544924884918,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-819803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e00f20a18d20e63cdeb94703c7aefb4a,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.ter
minationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=be74c2d1-cfba-4f53-bb9d-22181d75eb5b name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:28:22 addons-819803 crio[666]: time="2024-10-30 18:28:22.386546248Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d8d7c149-4775-4654-b043-abe0c8335559 name=/runtime.v1.RuntimeService/Version
	Oct 30 18:28:22 addons-819803 crio[666]: time="2024-10-30 18:28:22.386621012Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d8d7c149-4775-4654-b043-abe0c8335559 name=/runtime.v1.RuntimeService/Version
	Oct 30 18:28:22 addons-819803 crio[666]: time="2024-10-30 18:28:22.388446149Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4e6a4f93-8a84-4914-b82a-df8081b591b6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 18:28:22 addons-819803 crio[666]: time="2024-10-30 18:28:22.389998426Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730312902389938491,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587567,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4e6a4f93-8a84-4914-b82a-df8081b591b6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 18:28:22 addons-819803 crio[666]: time="2024-10-30 18:28:22.390662210Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a49cbc3d-c443-45be-b66e-94aa867ff320 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:28:22 addons-819803 crio[666]: time="2024-10-30 18:28:22.390748661Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a49cbc3d-c443-45be-b66e-94aa867ff320 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:28:22 addons-819803 crio[666]: time="2024-10-30 18:28:22.391241975Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8323c4d9fbaf0c66571632dba8d4881c87cfefd2ec57ca7062f064f81e7c5893,PodSandboxId:dd5e3d5f78cee776401f36c28bfe601ceed4bd2dddaa8d81f49644a6d19a27e4,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1730312762268600451,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 591e33e1-09d2-4f5f-a6aa-40bfd9a7ced3,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23b8c4da959667d294cba6436afb554b2174d4cc88063b285154eb5aa317466c,PodSandboxId:d6196190a1f0ed3cb0af52236e10c7ac2a9cd7fe17c789fc09c79df4406ed58a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730312724638127727,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b594df1b-adba-4e23-93cc-29d66c8cf9f1,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edbc230a9808db0529a97b23e10c791392f91e487052a16d1c9e011d18a68001,PodSandboxId:a747ea76a702d8b50af79c41a99ca8a9843ac6b929585d92b5428ce353a1e599,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1730312709804420617,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-ldznj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: dd4ea8d5-53b2-4184-9e06-e4a2b2ed1cb7,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:d8d39cfc9d9aecb7252739cf88a47668fe8a6e11d213f3e8f7a90cbc5b0a4b85,PodSandboxId:f0a461d3cd71eb98827de74994629ec64ce78f756262274be838fef4937482b0,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1730312670734611321,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-5tqhz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: cfda448f-fe2b-4686-91af-11fa015db368,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e960fa4437c475ad809c1b495b3c949cee58f6f50fad4ad9fb0f740bd66ca3a1,PodSandboxId:7eae22cf38b34ec91abffe9acd746ee050c256e36c4357cb6eb39f104466fc65,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1730312670605126069,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-hqldd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2577aa35-0151-41fd-b12b-ea7800bbba00,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e337d053d97a92d039405648417e9d4381847bcaf89593fd3271116bd08fa96c,PodSandboxId:634b9431071e538393779d3162cb51cac0c34513572937eeaab2ab49f637d1c3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1730312612750520239,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-trqq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b3ba0b-ec2c-4a8f-83d7-318abeb6f80a,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a923e1439650545e8eba44270ae4fc134b8069b8213e6fec6099edb1af4914b4,PodSandboxId:6e3d73e7493a957e845bbbec6311138324ff04d85de9bd900eff6182edd40c79,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Imag
e:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1730312577398411575,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-sdqnr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 087eef61-5115-41c9-aa53-29d2c8c23625,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41635f5b9f936adf90ce97863e90036509efe92e28230106de83872fcb52cd14,PodSandboxId:b5ed56b9d2636e6c5abbfc8a528af47128abe02ccf60a31b2f7329c7a9dbdddc,Metadata:&ContainerMetadata{Nam
e:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1730312573341424176,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a73fe2e4-a20e-4734-85d4-3da77152e4a1,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f46b0ca80854c9d1a66f9fca2789c69b6e2a
cd673897114ae279a484bcf1a86,PodSandboxId:c0244d7740a711982ed21498d613463d98fdd1ad118cd09cf6e5666e490c2b1e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730312562269541651,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38cd898a-c7f7-48d4-ac94-2b6cd4fd4b5f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d67f4b1b6f0d22044a227762b0d881ed07ca311ea1af4818d
97727e215999fa8,PodSandboxId:b5c24c6a457fb9a3566a63c75a2961635db9ce7e1511b05d82052e217c3565e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730312559683223270,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r6bct,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1dd60ad-70bb-40a7-8dfb-d6b9f0ab48ee,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aabc7e519d19dd85ae69ce724ad72cbeb649f9379e4c3ee94f07ff46313ec53,PodSandboxId:95ba2e67708c3bfd5dba3bf078266b1a35696ba0ac9f1695aaa20c561b078bcf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730312556523475889,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h64nt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f813bf3-f5de-4af3-87eb-4a429a334e7f,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e70c279e30dccf88f1158aa09582f2908e24606a3cd26e75ebda3f31afe708cb,PodSandboxId:3432d90eee1bf775fe75447be1dcfdcf466dd37469590155fc8cc5a0ac612c62,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730312545004348988,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-819803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e30a9531527f91cac7d80543a7c67b8b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:430e9b4f16ec17dd1e0c8f6f34aad5ddf7b5bb9c5ede50c4087533e9327fa8ac,PodSandboxId:638afc3dd036198e27a7af48810f35f6d215db95594ed6c687678d371c16be0c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730312544993211192,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-819803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e51bdee3c48682a1d27375eee86f91f4,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.
pod.terminationGracePeriod: 30,},},&Container{Id:3d74745cb948287a4c2cf27f2ba2eb0b19c2c042b7ce81a8795557cefc4bd271,PodSandboxId:4c3f08b7bad80293feeec80a225c62a7fea5a8d1d060f4ed0ce52f84d2d50627,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730312544962298197,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-819803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59aa5f01b68c2947293545aebe8e4550,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:805059b66c5778a497faa6df32264b173a98ee6caeb96db32e0b293cab94ae3b,PodSandboxId:51945c273e8d3f5c6cc9d342f266df358e56642fe17537ba73b84ad754de188a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730312544924884918,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-819803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e00f20a18d20e63cdeb94703c7aefb4a,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.ter
minationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a49cbc3d-c443-45be-b66e-94aa867ff320 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:28:22 addons-819803 crio[666]: time="2024-10-30 18:28:22.426644406Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0c311e69-b3dd-4043-a5f1-b98154264edd name=/runtime.v1.RuntimeService/Version
	Oct 30 18:28:22 addons-819803 crio[666]: time="2024-10-30 18:28:22.426740176Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0c311e69-b3dd-4043-a5f1-b98154264edd name=/runtime.v1.RuntimeService/Version
	Oct 30 18:28:22 addons-819803 crio[666]: time="2024-10-30 18:28:22.428622522Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e1b09d7f-6055-4e2f-beaa-efc2c831cd57 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 18:28:22 addons-819803 crio[666]: time="2024-10-30 18:28:22.431044662Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730312902431017729,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587567,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e1b09d7f-6055-4e2f-beaa-efc2c831cd57 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 18:28:22 addons-819803 crio[666]: time="2024-10-30 18:28:22.431559803Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ca31a227-b61e-42a3-b409-a5a0406a361e name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:28:22 addons-819803 crio[666]: time="2024-10-30 18:28:22.431634147Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ca31a227-b61e-42a3-b409-a5a0406a361e name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:28:22 addons-819803 crio[666]: time="2024-10-30 18:28:22.432023687Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8323c4d9fbaf0c66571632dba8d4881c87cfefd2ec57ca7062f064f81e7c5893,PodSandboxId:dd5e3d5f78cee776401f36c28bfe601ceed4bd2dddaa8d81f49644a6d19a27e4,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1730312762268600451,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 591e33e1-09d2-4f5f-a6aa-40bfd9a7ced3,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23b8c4da959667d294cba6436afb554b2174d4cc88063b285154eb5aa317466c,PodSandboxId:d6196190a1f0ed3cb0af52236e10c7ac2a9cd7fe17c789fc09c79df4406ed58a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730312724638127727,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b594df1b-adba-4e23-93cc-29d66c8cf9f1,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edbc230a9808db0529a97b23e10c791392f91e487052a16d1c9e011d18a68001,PodSandboxId:a747ea76a702d8b50af79c41a99ca8a9843ac6b929585d92b5428ce353a1e599,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1730312709804420617,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-ldznj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: dd4ea8d5-53b2-4184-9e06-e4a2b2ed1cb7,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:d8d39cfc9d9aecb7252739cf88a47668fe8a6e11d213f3e8f7a90cbc5b0a4b85,PodSandboxId:f0a461d3cd71eb98827de74994629ec64ce78f756262274be838fef4937482b0,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1730312670734611321,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-5tqhz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: cfda448f-fe2b-4686-91af-11fa015db368,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e960fa4437c475ad809c1b495b3c949cee58f6f50fad4ad9fb0f740bd66ca3a1,PodSandboxId:7eae22cf38b34ec91abffe9acd746ee050c256e36c4357cb6eb39f104466fc65,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1730312670605126069,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-hqldd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2577aa35-0151-41fd-b12b-ea7800bbba00,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e337d053d97a92d039405648417e9d4381847bcaf89593fd3271116bd08fa96c,PodSandboxId:634b9431071e538393779d3162cb51cac0c34513572937eeaab2ab49f637d1c3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1730312612750520239,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-trqq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b3ba0b-ec2c-4a8f-83d7-318abeb6f80a,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a923e1439650545e8eba44270ae4fc134b8069b8213e6fec6099edb1af4914b4,PodSandboxId:6e3d73e7493a957e845bbbec6311138324ff04d85de9bd900eff6182edd40c79,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Imag
e:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1730312577398411575,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-sdqnr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 087eef61-5115-41c9-aa53-29d2c8c23625,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41635f5b9f936adf90ce97863e90036509efe92e28230106de83872fcb52cd14,PodSandboxId:b5ed56b9d2636e6c5abbfc8a528af47128abe02ccf60a31b2f7329c7a9dbdddc,Metadata:&ContainerMetadata{Nam
e:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1730312573341424176,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a73fe2e4-a20e-4734-85d4-3da77152e4a1,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f46b0ca80854c9d1a66f9fca2789c69b6e2a
cd673897114ae279a484bcf1a86,PodSandboxId:c0244d7740a711982ed21498d613463d98fdd1ad118cd09cf6e5666e490c2b1e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730312562269541651,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38cd898a-c7f7-48d4-ac94-2b6cd4fd4b5f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d67f4b1b6f0d22044a227762b0d881ed07ca311ea1af4818d
97727e215999fa8,PodSandboxId:b5c24c6a457fb9a3566a63c75a2961635db9ce7e1511b05d82052e217c3565e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730312559683223270,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r6bct,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1dd60ad-70bb-40a7-8dfb-d6b9f0ab48ee,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aabc7e519d19dd85ae69ce724ad72cbeb649f9379e4c3ee94f07ff46313ec53,PodSandboxId:95ba2e67708c3bfd5dba3bf078266b1a35696ba0ac9f1695aaa20c561b078bcf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730312556523475889,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h64nt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f813bf3-f5de-4af3-87eb-4a429a334e7f,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e70c279e30dccf88f1158aa09582f2908e24606a3cd26e75ebda3f31afe708cb,PodSandboxId:3432d90eee1bf775fe75447be1dcfdcf466dd37469590155fc8cc5a0ac612c62,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730312545004348988,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-819803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e30a9531527f91cac7d80543a7c67b8b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:430e9b4f16ec17dd1e0c8f6f34aad5ddf7b5bb9c5ede50c4087533e9327fa8ac,PodSandboxId:638afc3dd036198e27a7af48810f35f6d215db95594ed6c687678d371c16be0c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730312544993211192,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-819803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e51bdee3c48682a1d27375eee86f91f4,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.
pod.terminationGracePeriod: 30,},},&Container{Id:3d74745cb948287a4c2cf27f2ba2eb0b19c2c042b7ce81a8795557cefc4bd271,PodSandboxId:4c3f08b7bad80293feeec80a225c62a7fea5a8d1d060f4ed0ce52f84d2d50627,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730312544962298197,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-819803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59aa5f01b68c2947293545aebe8e4550,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:805059b66c5778a497faa6df32264b173a98ee6caeb96db32e0b293cab94ae3b,PodSandboxId:51945c273e8d3f5c6cc9d342f266df358e56642fe17537ba73b84ad754de188a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730312544924884918,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-819803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e00f20a18d20e63cdeb94703c7aefb4a,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.ter
minationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ca31a227-b61e-42a3-b409-a5a0406a361e name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:28:22 addons-819803 crio[666]: time="2024-10-30 18:28:22.449804730Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b344108c-0101-4384-8fe7-ef1fd2a90349 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 30 18:28:22 addons-819803 crio[666]: time="2024-10-30 18:28:22.450329539Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:20740f6d93f54f8438244936b7a2473d3009679b2036672f630e2cab7e143bc0,Metadata:&PodSandboxMetadata{Name:hello-world-app-55bf9c44b4-srpgj,Uid:a0d95ce8-668d-4d45-a042-299981601dff,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730312901393232794,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-srpgj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a0d95ce8-668d-4d45-a042-299981601dff,pod-template-hash: 55bf9c44b4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-30T18:28:21.083674341Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dd5e3d5f78cee776401f36c28bfe601ceed4bd2dddaa8d81f49644a6d19a27e4,Metadata:&PodSandboxMetadata{Name:nginx,Uid:591e33e1-09d2-4f5f-a6aa-40bfd9a7ced3,Namespace:default,Attempt:0,}
,State:SANDBOX_READY,CreatedAt:1730312758853181314,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 591e33e1-09d2-4f5f-a6aa-40bfd9a7ced3,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-30T18:25:58.540542131Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d6196190a1f0ed3cb0af52236e10c7ac2a9cd7fe17c789fc09c79df4406ed58a,Metadata:&PodSandboxMetadata{Name:busybox,Uid:b594df1b-adba-4e23-93cc-29d66c8cf9f1,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730312718953039005,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b594df1b-adba-4e23-93cc-29d66c8cf9f1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-30T18:25:18.639987389Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a747ea76a702d8b50a
f79c41a99ca8a9843ac6b929585d92b5428ce353a1e599,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-5f85ff4588-ldznj,Uid:dd4ea8d5-53b2-4184-9e06-e4a2b2ed1cb7,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730312702453684091,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-ldznj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: dd4ea8d5-53b2-4184-9e06-e4a2b2ed1cb7,pod-template-hash: 5f85ff4588,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-30T18:22:44.486410243Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c0244d7740a711982ed21498d613463d98fdd1ad118cd09cf6e5666e490c2b1e,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:38cd898a-c7f7-48d4-ac94-2b6cd4fd4b5f,Namespace:kube-system,Attempt:0,},State:SANDBOX_REA
DY,CreatedAt:1730312561628914405,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38cd898a-c7f7-48d4-ac94-2b6cd4fd4b5f,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":
\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-10-30T18:22:41.302519773Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:634b9431071e538393779d3162cb51cac0c34513572937eeaab2ab49f637d1c3,Metadata:&PodSandboxMetadata{Name:metrics-server-84c5f94fbc-trqq2,Uid:07b3ba0b-ec2c-4a8f-83d7-318abeb6f80a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730312561379339716,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-84c5f94fbc-trqq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b3ba0b-ec2c-4a8f-83d7-318abeb6f80a,k8s-app: metrics-server,pod-template-hash: 84c5f94fbc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-30T18:22:41.044603109Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b5ed56b9d2636e6c5abbfc8a528af47128abe02ccf60a31b2f7329c7a9dbdddc,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:a73fe2e4-a20e-4734-85d4-3da77152e4a1,Namespace:
kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730312559963705481,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a73fe2e4-a20e-4734-85d4-3da77152e4a1,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingres
s-dns\",\"ports\":[{\"containerPort\":53,\"protocol\":\"UDP\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\"}}\n,kubernetes.io/config.seen: 2024-10-30T18:22:39.590041470Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6e3d73e7493a957e845bbbec6311138324ff04d85de9bd900eff6182edd40c79,Metadata:&PodSandboxMetadata{Name:amd-gpu-device-plugin-sdqnr,Uid:087eef61-5115-41c9-aa53-29d2c8c23625,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730312558276996606,Labels:map[string]string{controller-revision-hash: 59cf7d9b45,io.kubernetes.container.name: POD,io.kubernetes.pod.name: amd-gpu-device-plugin-sdqnr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 087eef61-5115-41c9-aa53-29d2c8c23625,k8s-app: amd-gpu-device-plugin,name: amd-gpu-device-plugin,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-30T18:22:37.957682003Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b5c24c6a457fb9
a3566a63c75a2961635db9ce7e1511b05d82052e217c3565e6,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-r6bct,Uid:a1dd60ad-70bb-40a7-8dfb-d6b9f0ab48ee,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730312556279989771,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-r6bct,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1dd60ad-70bb-40a7-8dfb-d6b9f0ab48ee,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-30T18:22:35.668523695Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:95ba2e67708c3bfd5dba3bf078266b1a35696ba0ac9f1695aaa20c561b078bcf,Metadata:&PodSandboxMetadata{Name:kube-proxy-h64nt,Uid:6f813bf3-f5de-4af3-87eb-4a429a334e7f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730312555991894284,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-h6
4nt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f813bf3-f5de-4af3-87eb-4a429a334e7f,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-30T18:22:35.071718015Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:638afc3dd036198e27a7af48810f35f6d215db95594ed6c687678d371c16be0c,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-819803,Uid:e51bdee3c48682a1d27375eee86f91f4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730312544789050403,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-819803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e51bdee3c48682a1d27375eee86f91f4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e51bdee3c48682a1d27375eee86f91f4,kubernetes.io/config.seen: 2024-10-30T18:22:24.098500308Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSa
ndbox{Id:4c3f08b7bad80293feeec80a225c62a7fea5a8d1d060f4ed0ce52f84d2d50627,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-819803,Uid:59aa5f01b68c2947293545aebe8e4550,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730312544783506793,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-819803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59aa5f01b68c2947293545aebe8e4550,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 59aa5f01b68c2947293545aebe8e4550,kubernetes.io/config.seen: 2024-10-30T18:22:24.098499424Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:51945c273e8d3f5c6cc9d342f266df358e56642fe17537ba73b84ad754de188a,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-819803,Uid:e00f20a18d20e63cdeb94703c7aefb4a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730312544771568854,Labels:map[strin
g]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-819803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e00f20a18d20e63cdeb94703c7aefb4a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.211:8443,kubernetes.io/config.hash: e00f20a18d20e63cdeb94703c7aefb4a,kubernetes.io/config.seen: 2024-10-30T18:22:24.098498292Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3432d90eee1bf775fe75447be1dcfdcf466dd37469590155fc8cc5a0ac612c62,Metadata:&PodSandboxMetadata{Name:etcd-addons-819803,Uid:e30a9531527f91cac7d80543a7c67b8b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730312544764389755,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-819803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e30a9531527f91cac7d80543a7c67b8b,tier: control-plane,},Annot
ations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.211:2379,kubernetes.io/config.hash: e30a9531527f91cac7d80543a7c67b8b,kubernetes.io/config.seen: 2024-10-30T18:22:24.098494900Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=b344108c-0101-4384-8fe7-ef1fd2a90349 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 30 18:28:22 addons-819803 crio[666]: time="2024-10-30 18:28:22.450995734Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1b29513a-2a40-4415-b759-0303ed67b051 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:28:22 addons-819803 crio[666]: time="2024-10-30 18:28:22.451061959Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1b29513a-2a40-4415-b759-0303ed67b051 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:28:22 addons-819803 crio[666]: time="2024-10-30 18:28:22.451527440Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8323c4d9fbaf0c66571632dba8d4881c87cfefd2ec57ca7062f064f81e7c5893,PodSandboxId:dd5e3d5f78cee776401f36c28bfe601ceed4bd2dddaa8d81f49644a6d19a27e4,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1730312762268600451,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 591e33e1-09d2-4f5f-a6aa-40bfd9a7ced3,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23b8c4da959667d294cba6436afb554b2174d4cc88063b285154eb5aa317466c,PodSandboxId:d6196190a1f0ed3cb0af52236e10c7ac2a9cd7fe17c789fc09c79df4406ed58a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730312724638127727,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b594df1b-adba-4e23-93cc-29d66c8cf9f1,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edbc230a9808db0529a97b23e10c791392f91e487052a16d1c9e011d18a68001,PodSandboxId:a747ea76a702d8b50af79c41a99ca8a9843ac6b929585d92b5428ce353a1e599,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1730312709804420617,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-ldznj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: dd4ea8d5-53b2-4184-9e06-e4a2b2ed1cb7,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:e337d053d97a92d039405648417e9d4381847bcaf89593fd3271116bd08fa96c,PodSandboxId:634b9431071e538393779d3162cb51cac0c34513572937eeaab2ab49f637d1c3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9c
faaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1730312612750520239,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-trqq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b3ba0b-ec2c-4a8f-83d7-318abeb6f80a,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a923e1439650545e8eba44270ae4fc134b8069b8213e6fec6099edb1af4914b4,PodSandboxId:6e3d73e7493a957e845bbbec6311138324ff04d85de9bd900eff6182edd40c79,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274
e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1730312577398411575,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-sdqnr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 087eef61-5115-41c9-aa53-29d2c8c23625,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41635f5b9f936adf90ce97863e90036509efe92e28230106de83872fcb52cd14,PodSandboxId:b5ed56b9d2636e6c5abbfc8a528af47128abe02ccf60a31b2f7329c7a9dbdddc,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-mini
kube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1730312573341424176,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a73fe2e4-a20e-4734-85d4-3da77152e4a1,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f46b0ca80854c9d1a66f9fca2789c69b6e2acd673897114ae279a484bcf1a86,PodSandboxId:c0244d7740a711982ed21498d613463d
98fdd1ad118cd09cf6e5666e490c2b1e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730312562269541651,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38cd898a-c7f7-48d4-ac94-2b6cd4fd4b5f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d67f4b1b6f0d22044a227762b0d881ed07ca311ea1af4818d97727e215999fa8,PodSandboxId:b5c24c6a457fb9a3566a63c75a2961635db9ce7e1511
b05d82052e217c3565e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730312559683223270,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r6bct,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1dd60ad-70bb-40a7-8dfb-d6b9f0ab48ee,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePol
icy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aabc7e519d19dd85ae69ce724ad72cbeb649f9379e4c3ee94f07ff46313ec53,PodSandboxId:95ba2e67708c3bfd5dba3bf078266b1a35696ba0ac9f1695aaa20c561b078bcf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730312556523475889,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h64nt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f813bf3-f5de-4af3-87eb-4a429a334e7f,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:e70c279e30dccf88f1158aa09582f2908e24606a3cd26e75ebda3f31afe708cb,PodSandboxId:3432d90eee1bf775fe75447be1dcfdcf466dd37469590155fc8cc5a0ac612c62,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730312545004348988,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-819803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e30a9531527f91cac7d80543a7c67b8b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:430e9b4
f16ec17dd1e0c8f6f34aad5ddf7b5bb9c5ede50c4087533e9327fa8ac,PodSandboxId:638afc3dd036198e27a7af48810f35f6d215db95594ed6c687678d371c16be0c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730312544993211192,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-819803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e51bdee3c48682a1d27375eee86f91f4,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d74745cb948287a4c2cf27f
2ba2eb0b19c2c042b7ce81a8795557cefc4bd271,PodSandboxId:4c3f08b7bad80293feeec80a225c62a7fea5a8d1d060f4ed0ce52f84d2d50627,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730312544962298197,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-819803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59aa5f01b68c2947293545aebe8e4550,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:805059b66c5778
a497faa6df32264b173a98ee6caeb96db32e0b293cab94ae3b,PodSandboxId:51945c273e8d3f5c6cc9d342f266df358e56642fe17537ba73b84ad754de188a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730312544924884918,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-819803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e00f20a18d20e63cdeb94703c7aefb4a,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74"
id=1b29513a-2a40-4415-b759-0303ed67b051 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8323c4d9fbaf0       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                              2 minutes ago       Running             nginx                     0                   dd5e3d5f78cee       nginx
	23b8c4da95966       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago       Running             busybox                   0                   d6196190a1f0e       busybox
	edbc230a9808d       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago       Running             controller                0                   a747ea76a702d       ingress-nginx-controller-5f85ff4588-ldznj
	d8d39cfc9d9ae       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago       Exited              patch                     0                   f0a461d3cd71e       ingress-nginx-admission-patch-5tqhz
	e960fa4437c47       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago       Exited              create                    0                   7eae22cf38b34       ingress-nginx-admission-create-hqldd
	e337d053d97a9       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        4 minutes ago       Running             metrics-server            0                   634b9431071e5       metrics-server-84c5f94fbc-trqq2
	a923e14396505       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     5 minutes ago       Running             amd-gpu-device-plugin     0                   6e3d73e7493a9       amd-gpu-device-plugin-sdqnr
	41635f5b9f936       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             5 minutes ago       Running             minikube-ingress-dns      0                   b5ed56b9d2636       kube-ingress-dns-minikube
	1f46b0ca80854       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   c0244d7740a71       storage-provisioner
	d67f4b1b6f0d2       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             5 minutes ago       Running             coredns                   0                   b5c24c6a457fb       coredns-7c65d6cfc9-r6bct
	8aabc7e519d19       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                             5 minutes ago       Running             kube-proxy                0                   95ba2e67708c3       kube-proxy-h64nt
	e70c279e30dcc       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             5 minutes ago       Running             etcd                      0                   3432d90eee1bf       etcd-addons-819803
	430e9b4f16ec1       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                             5 minutes ago       Running             kube-scheduler            0                   638afc3dd0361       kube-scheduler-addons-819803
	3d74745cb9482       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                             5 minutes ago       Running             kube-controller-manager   0                   4c3f08b7bad80       kube-controller-manager-addons-819803
	805059b66c577       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                             5 minutes ago       Running             kube-apiserver            0                   51945c273e8d3       kube-apiserver-addons-819803
	
	
	==> coredns [d67f4b1b6f0d22044a227762b0d881ed07ca311ea1af4818d97727e215999fa8] <==
	[INFO] 10.244.0.8:50938 - 31605 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000508441s
	[INFO] 10.244.0.8:50938 - 51844 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000087721s
	[INFO] 10.244.0.8:50938 - 47325 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000054642s
	[INFO] 10.244.0.8:50938 - 46040 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000087277s
	[INFO] 10.244.0.8:50938 - 3926 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000224092s
	[INFO] 10.244.0.8:50938 - 24218 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000087031s
	[INFO] 10.244.0.8:50938 - 34827 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000196308s
	[INFO] 10.244.0.8:57107 - 7061 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000053883s
	[INFO] 10.244.0.8:57107 - 6720 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000030962s
	[INFO] 10.244.0.8:39206 - 53093 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000114146s
	[INFO] 10.244.0.8:39206 - 52872 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000039074s
	[INFO] 10.244.0.8:40038 - 16250 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000033182s
	[INFO] 10.244.0.8:40038 - 16527 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000065113s
	[INFO] 10.244.0.8:46217 - 21236 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000034964s
	[INFO] 10.244.0.8:46217 - 20794 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000037358s
	[INFO] 10.244.0.23:33031 - 36217 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000605344s
	[INFO] 10.244.0.23:58783 - 39311 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000159162s
	[INFO] 10.244.0.23:37168 - 23567 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000112203s
	[INFO] 10.244.0.23:52500 - 36290 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000061048s
	[INFO] 10.244.0.23:41802 - 17032 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000107391s
	[INFO] 10.244.0.23:35204 - 24147 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00016304s
	[INFO] 10.244.0.23:52880 - 32779 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004092344s
	[INFO] 10.244.0.23:54825 - 48371 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 230 0.006010414s
	[INFO] 10.244.0.26:55513 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000300522s
	[INFO] 10.244.0.26:38656 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000193553s
	
	
	==> describe nodes <==
	Name:               addons-819803
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-819803
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0
	                    minikube.k8s.io/name=addons-819803
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_30T18_22_31_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-819803
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Oct 2024 18:22:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-819803
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Oct 2024 18:28:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Oct 2024 18:26:35 +0000   Wed, 30 Oct 2024 18:22:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Oct 2024 18:26:35 +0000   Wed, 30 Oct 2024 18:22:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Oct 2024 18:26:35 +0000   Wed, 30 Oct 2024 18:22:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Oct 2024 18:26:35 +0000   Wed, 30 Oct 2024 18:22:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.211
	  Hostname:    addons-819803
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 3384241bc3144ca39ea65062097c3a72
	  System UUID:                3384241b-c314-4ca3-9ea6-5062097c3a72
	  Boot ID:                    e76ddacb-724b-468c-9414-b0b4a3bd3a72
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m4s
	  default                     hello-world-app-55bf9c44b4-srpgj             0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  ingress-nginx               ingress-nginx-controller-5f85ff4588-ldznj    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m38s
	  kube-system                 amd-gpu-device-plugin-sdqnr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m45s
	  kube-system                 coredns-7c65d6cfc9-r6bct                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m47s
	  kube-system                 etcd-addons-819803                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m53s
	  kube-system                 kube-apiserver-addons-819803                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 kube-controller-manager-addons-819803        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m43s
	  kube-system                 kube-proxy-h64nt                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m47s
	  kube-system                 kube-scheduler-addons-819803                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 metrics-server-84c5f94fbc-trqq2              100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         5m42s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m45s                  kube-proxy       
	  Normal  Starting                 5m58s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m58s (x8 over 5m58s)  kubelet          Node addons-819803 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m58s (x8 over 5m58s)  kubelet          Node addons-819803 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m58s (x7 over 5m58s)  kubelet          Node addons-819803 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m52s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m52s                  kubelet          Node addons-819803 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m52s                  kubelet          Node addons-819803 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m52s                  kubelet          Node addons-819803 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m51s                  kubelet          Node addons-819803 status is now: NodeReady
	  Normal  RegisteredNode           5m48s                  node-controller  Node addons-819803 event: Registered Node addons-819803 in Controller
	
	
	==> dmesg <==
	[  +0.185798] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.054874] kauditd_printk_skb: 110 callbacks suppressed
	[  +5.020576] kauditd_printk_skb: 151 callbacks suppressed
	[  +7.562173] kauditd_printk_skb: 68 callbacks suppressed
	[Oct30 18:23] kauditd_printk_skb: 2 callbacks suppressed
	[Oct30 18:24] kauditd_printk_skb: 2 callbacks suppressed
	[ +12.072000] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.013814] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.006199] kauditd_printk_skb: 23 callbacks suppressed
	[  +6.634532] kauditd_printk_skb: 43 callbacks suppressed
	[Oct30 18:25] kauditd_printk_skb: 15 callbacks suppressed
	[ +12.654005] kauditd_printk_skb: 9 callbacks suppressed
	[ +18.771720] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.535362] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.312718] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.474003] kauditd_printk_skb: 20 callbacks suppressed
	[Oct30 18:26] kauditd_printk_skb: 42 callbacks suppressed
	[  +5.908246] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.193428] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.279822] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.724164] kauditd_printk_skb: 6 callbacks suppressed
	[ +14.848615] kauditd_printk_skb: 25 callbacks suppressed
	[  +8.862857] kauditd_printk_skb: 7 callbacks suppressed
	[Oct30 18:27] kauditd_printk_skb: 49 callbacks suppressed
	[Oct30 18:28] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [e70c279e30dccf88f1158aa09582f2908e24606a3cd26e75ebda3f31afe708cb] <==
	{"level":"info","ts":"2024-10-30T18:24:36.862254Z","caller":"traceutil/trace.go:171","msg":"trace[2114230265] linearizableReadLoop","detail":"{readStateIndex:1201; appliedIndex:1200; }","duration":"285.484408ms","start":"2024-10-30T18:24:36.576757Z","end":"2024-10-30T18:24:36.862241Z","steps":["trace[2114230265] 'read index received'  (duration: 285.287877ms)","trace[2114230265] 'applied index is now lower than readState.Index'  (duration: 194.427µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-30T18:24:36.862616Z","caller":"traceutil/trace.go:171","msg":"trace[685996133] transaction","detail":"{read_only:false; response_revision:1157; number_of_response:1; }","duration":"411.616884ms","start":"2024-10-30T18:24:36.450988Z","end":"2024-10-30T18:24:36.862605Z","steps":["trace[685996133] 'process raft request'  (duration: 411.097015ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-30T18:24:36.862744Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"206.881505ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-30T18:24:36.862786Z","caller":"traceutil/trace.go:171","msg":"trace[666217286] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1157; }","duration":"206.919387ms","start":"2024-10-30T18:24:36.655858Z","end":"2024-10-30T18:24:36.862778Z","steps":["trace[666217286] 'agreement among raft nodes before linearized reading'  (duration: 206.872113ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-30T18:24:36.862862Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-30T18:24:36.450974Z","time spent":"411.755285ms","remote":"127.0.0.1:52196","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":844,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/nvidia-device-plugin-daemonset-s2tw8.18034e1ab96fa504\" mod_revision:928 > success:<request_put:<key:\"/registry/events/kube-system/nvidia-device-plugin-daemonset-s2tw8.18034e1ab96fa504\" value_size:744 lease:2079980087949946493 >> failure:<request_range:<key:\"/registry/events/kube-system/nvidia-device-plugin-daemonset-s2tw8.18034e1ab96fa504\" > >"}
	{"level":"warn","ts":"2024-10-30T18:24:36.862924Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"199.192764ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-30T18:24:36.862960Z","caller":"traceutil/trace.go:171","msg":"trace[13747596] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1157; }","duration":"199.227938ms","start":"2024-10-30T18:24:36.663726Z","end":"2024-10-30T18:24:36.862954Z","steps":["trace[13747596] 'agreement among raft nodes before linearized reading'  (duration: 199.183913ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-30T18:24:36.862698Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"285.915829ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-30T18:24:36.863081Z","caller":"traceutil/trace.go:171","msg":"trace[916201317] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1157; }","duration":"286.321038ms","start":"2024-10-30T18:24:36.576753Z","end":"2024-10-30T18:24:36.863074Z","steps":["trace[916201317] 'agreement among raft nodes before linearized reading'  (duration: 285.784948ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-30T18:25:14.126413Z","caller":"traceutil/trace.go:171","msg":"trace[1573345284] transaction","detail":"{read_only:false; response_revision:1248; number_of_response:1; }","duration":"122.488475ms","start":"2024-10-30T18:25:14.003898Z","end":"2024-10-30T18:25:14.126387Z","steps":["trace[1573345284] 'process raft request'  (duration: 122.401753ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-30T18:25:49.128483Z","caller":"traceutil/trace.go:171","msg":"trace[1088917934] transaction","detail":"{read_only:false; response_revision:1416; number_of_response:1; }","duration":"254.328792ms","start":"2024-10-30T18:25:48.874129Z","end":"2024-10-30T18:25:49.128458Z","steps":["trace[1088917934] 'process raft request'  (duration: 254.240768ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-30T18:26:25.545344Z","caller":"traceutil/trace.go:171","msg":"trace[1172046373] transaction","detail":"{read_only:false; response_revision:1651; number_of_response:1; }","duration":"343.457009ms","start":"2024-10-30T18:26:25.201863Z","end":"2024-10-30T18:26:25.545320Z","steps":["trace[1172046373] 'process raft request'  (duration: 343.328222ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-30T18:26:25.545686Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-30T18:26:25.201848Z","time spent":"343.672014ms","remote":"127.0.0.1:52384","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":538,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1645 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:451 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"info","ts":"2024-10-30T18:26:25.546088Z","caller":"traceutil/trace.go:171","msg":"trace[1067557042] linearizableReadLoop","detail":"{readStateIndex:1725; appliedIndex:1725; }","duration":"198.691834ms","start":"2024-10-30T18:26:25.347378Z","end":"2024-10-30T18:26:25.546069Z","steps":["trace[1067557042] 'read index received'  (duration: 198.689653ms)","trace[1067557042] 'applied index is now lower than readState.Index'  (duration: 1.712µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-30T18:26:25.547301Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"199.909199ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-30T18:26:25.547409Z","caller":"traceutil/trace.go:171","msg":"trace[971830471] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1651; }","duration":"200.020443ms","start":"2024-10-30T18:26:25.347374Z","end":"2024-10-30T18:26:25.547394Z","steps":["trace[971830471] 'agreement among raft nodes before linearized reading'  (duration: 199.881341ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-30T18:26:25.553031Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.999427ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-30T18:26:25.553084Z","caller":"traceutil/trace.go:171","msg":"trace[634688648] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1652; }","duration":"145.063243ms","start":"2024-10-30T18:26:25.408012Z","end":"2024-10-30T18:26:25.553075Z","steps":["trace[634688648] 'agreement among raft nodes before linearized reading'  (duration: 144.971173ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-30T18:26:25.553408Z","caller":"traceutil/trace.go:171","msg":"trace[1537023752] transaction","detail":"{read_only:false; response_revision:1652; number_of_response:1; }","duration":"110.018954ms","start":"2024-10-30T18:26:25.443379Z","end":"2024-10-30T18:26:25.553398Z","steps":["trace[1537023752] 'process raft request'  (duration: 109.519049ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-30T18:26:56.202570Z","caller":"traceutil/trace.go:171","msg":"trace[1987309311] linearizableReadLoop","detail":"{readStateIndex:1874; appliedIndex:1873; }","duration":"218.528743ms","start":"2024-10-30T18:26:55.984020Z","end":"2024-10-30T18:26:56.202548Z","steps":["trace[1987309311] 'read index received'  (duration: 215.098744ms)","trace[1987309311] 'applied index is now lower than readState.Index'  (duration: 3.42894ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-30T18:26:56.202775Z","caller":"traceutil/trace.go:171","msg":"trace[1355177454] transaction","detail":"{read_only:false; response_revision:1792; number_of_response:1; }","duration":"257.231128ms","start":"2024-10-30T18:26:55.945535Z","end":"2024-10-30T18:26:56.202766Z","steps":["trace[1355177454] 'process raft request'  (duration: 253.67595ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-30T18:26:56.202955Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"218.926139ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-30T18:26:56.202993Z","caller":"traceutil/trace.go:171","msg":"trace[1030047382] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1792; }","duration":"218.994955ms","start":"2024-10-30T18:26:55.983992Z","end":"2024-10-30T18:26:56.202987Z","steps":["trace[1030047382] 'agreement among raft nodes before linearized reading'  (duration: 218.912831ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-30T18:26:56.203190Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.334501ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshots/default/new-snapshot-demo\" ","response":"range_response_count:1 size:1698"}
	{"level":"info","ts":"2024-10-30T18:26:56.203227Z","caller":"traceutil/trace.go:171","msg":"trace[231693263] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshots/default/new-snapshot-demo; range_end:; response_count:1; response_revision:1792; }","duration":"126.429466ms","start":"2024-10-30T18:26:56.076792Z","end":"2024-10-30T18:26:56.203221Z","steps":["trace[231693263] 'agreement among raft nodes before linearized reading'  (duration: 126.281409ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:28:22 up 6 min,  0 users,  load average: 0.52, 1.18, 0.67
	Linux addons-819803 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [805059b66c5778a497faa6df32264b173a98ee6caeb96db32e0b293cab94ae3b] <==
	E1030 18:24:41.395695       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.7.211:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.7.211:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.7.211:443: connect: connection refused" logger="UnhandledError"
	I1030 18:24:41.467843       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1030 18:25:31.810771       1 conn.go:339] Error on socket receive: read tcp 192.168.39.211:8443->192.168.39.1:51372: use of closed network connection
	E1030 18:25:32.054856       1 conn.go:339] Error on socket receive: read tcp 192.168.39.211:8443->192.168.39.1:51412: use of closed network connection
	I1030 18:25:41.257654       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.103.228.34"}
	I1030 18:25:52.928553       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1030 18:25:53.966628       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1030 18:25:58.419383       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1030 18:25:58.584375       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.85.39"}
	I1030 18:26:33.409720       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1030 18:26:47.590784       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1030 18:26:56.810697       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1030 18:26:56.810774       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1030 18:26:56.844847       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1030 18:26:56.845012       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1030 18:26:56.882009       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1030 18:26:56.882069       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1030 18:26:56.890526       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1030 18:26:56.891695       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1030 18:26:57.001448       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1030 18:26:57.001520       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1030 18:26:57.890836       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1030 18:26:58.004401       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1030 18:26:58.014103       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1030 18:28:21.270798       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.236.65"}
	
	
	==> kube-controller-manager [3d74745cb948287a4c2cf27f2ba2eb0b19c2c042b7ce81a8795557cefc4bd271] <==
	E1030 18:27:06.633380       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1030 18:27:09.582782       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-5b584cc74" duration="3.314µs"
	W1030 18:27:13.969068       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1030 18:27:13.969249       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1030 18:27:14.495965       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1030 18:27:14.496080       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1030 18:27:17.338464       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1030 18:27:17.338526       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1030 18:27:19.930518       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
	W1030 18:27:32.974819       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1030 18:27:32.974876       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1030 18:27:39.135743       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1030 18:27:39.135876       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1030 18:27:40.807072       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1030 18:27:40.807173       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1030 18:27:53.431967       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1030 18:27:53.432073       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1030 18:28:07.649558       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1030 18:28:07.649610       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1030 18:28:11.107808       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1030 18:28:11.107842       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1030 18:28:21.077101       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="33.638941ms"
	I1030 18:28:21.106274       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="28.997979ms"
	I1030 18:28:21.106346       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="30.941µs"
	I1030 18:28:21.108190       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="29.313µs"
	
	
	==> kube-proxy [8aabc7e519d19dd85ae69ce724ad72cbeb649f9379e4c3ee94f07ff46313ec53] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1030 18:22:37.146444       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1030 18:22:37.166670       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.211"]
	E1030 18:22:37.166774       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1030 18:22:37.281324       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1030 18:22:37.281424       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1030 18:22:37.281459       1 server_linux.go:169] "Using iptables Proxier"
	I1030 18:22:37.283860       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1030 18:22:37.284181       1 server.go:483] "Version info" version="v1.31.2"
	I1030 18:22:37.284399       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1030 18:22:37.285532       1 config.go:199] "Starting service config controller"
	I1030 18:22:37.285569       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1030 18:22:37.285597       1 config.go:105] "Starting endpoint slice config controller"
	I1030 18:22:37.285601       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1030 18:22:37.286087       1 config.go:328] "Starting node config controller"
	I1030 18:22:37.286100       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1030 18:22:37.387264       1 shared_informer.go:320] Caches are synced for node config
	I1030 18:22:37.387341       1 shared_informer.go:320] Caches are synced for service config
	I1030 18:22:37.387371       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [430e9b4f16ec17dd1e0c8f6f34aad5ddf7b5bb9c5ede50c4087533e9327fa8ac] <==
	W1030 18:22:27.802248       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1030 18:22:27.802276       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1030 18:22:27.802316       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1030 18:22:27.802343       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1030 18:22:27.802450       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1030 18:22:27.803261       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1030 18:22:28.678432       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1030 18:22:28.678490       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1030 18:22:28.726055       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1030 18:22:28.726203       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1030 18:22:28.742669       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1030 18:22:28.742772       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1030 18:22:28.765397       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1030 18:22:28.765501       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1030 18:22:28.860238       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1030 18:22:28.860289       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1030 18:22:28.888181       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1030 18:22:28.888279       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1030 18:22:28.928590       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1030 18:22:28.928646       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1030 18:22:28.968766       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1030 18:22:28.968820       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1030 18:22:29.101911       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1030 18:22:29.101960       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1030 18:22:31.090200       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 30 18:28:21 addons-819803 kubelet[1205]: E1030 18:28:21.083858    1205 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ae3e815f-7258-4b40-a1db-dfd46db7197a" containerName="cloud-spanner-emulator"
	Oct 30 18:28:21 addons-819803 kubelet[1205]: E1030 18:28:21.083864    1205 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="122041b3-674e-42ec-a5a8-ec4a2f43cbdf" containerName="csi-provisioner"
	Oct 30 18:28:21 addons-819803 kubelet[1205]: E1030 18:28:21.083871    1205 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4b723440-2ccc-45fa-b457-c705dec6e7b5" containerName="local-path-provisioner"
	Oct 30 18:28:21 addons-819803 kubelet[1205]: E1030 18:28:21.083877    1205 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="122041b3-674e-42ec-a5a8-ec4a2f43cbdf" containerName="node-driver-registrar"
	Oct 30 18:28:21 addons-819803 kubelet[1205]: E1030 18:28:21.083885    1205 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c0ffdb47-736c-4a9f-a9b6-d99bf84b26cd" containerName="volume-snapshot-controller"
	Oct 30 18:28:21 addons-819803 kubelet[1205]: E1030 18:28:21.083890    1205 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="47448865-4a61-44a4-aed7-421ff4e7d130" containerName="task-pv-container"
	Oct 30 18:28:21 addons-819803 kubelet[1205]: E1030 18:28:21.083900    1205 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="122041b3-674e-42ec-a5a8-ec4a2f43cbdf" containerName="liveness-probe"
	Oct 30 18:28:21 addons-819803 kubelet[1205]: E1030 18:28:21.083907    1205 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="042a6627-5f58-4a7c-8adc-393f4a23de62" containerName="csi-resizer"
	Oct 30 18:28:21 addons-819803 kubelet[1205]: E1030 18:28:21.083913    1205 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="603a5497-a36a-4123-ad83-8159ef7c6494" containerName="csi-attacher"
	Oct 30 18:28:21 addons-819803 kubelet[1205]: E1030 18:28:21.083918    1205 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="122041b3-674e-42ec-a5a8-ec4a2f43cbdf" containerName="hostpath"
	Oct 30 18:28:21 addons-819803 kubelet[1205]: E1030 18:28:21.083927    1205 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="122041b3-674e-42ec-a5a8-ec4a2f43cbdf" containerName="csi-snapshotter"
	Oct 30 18:28:21 addons-819803 kubelet[1205]: I1030 18:28:21.083981    1205 memory_manager.go:354] "RemoveStaleState removing state" podUID="122041b3-674e-42ec-a5a8-ec4a2f43cbdf" containerName="node-driver-registrar"
	Oct 30 18:28:21 addons-819803 kubelet[1205]: I1030 18:28:21.083991    1205 memory_manager.go:354] "RemoveStaleState removing state" podUID="122041b3-674e-42ec-a5a8-ec4a2f43cbdf" containerName="liveness-probe"
	Oct 30 18:28:21 addons-819803 kubelet[1205]: I1030 18:28:21.083996    1205 memory_manager.go:354] "RemoveStaleState removing state" podUID="122041b3-674e-42ec-a5a8-ec4a2f43cbdf" containerName="csi-snapshotter"
	Oct 30 18:28:21 addons-819803 kubelet[1205]: I1030 18:28:21.084003    1205 memory_manager.go:354] "RemoveStaleState removing state" podUID="47448865-4a61-44a4-aed7-421ff4e7d130" containerName="task-pv-container"
	Oct 30 18:28:21 addons-819803 kubelet[1205]: I1030 18:28:21.084009    1205 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ef57b7b-170b-4404-8af9-36d355a9be09" containerName="volume-snapshot-controller"
	Oct 30 18:28:21 addons-819803 kubelet[1205]: I1030 18:28:21.084014    1205 memory_manager.go:354] "RemoveStaleState removing state" podUID="122041b3-674e-42ec-a5a8-ec4a2f43cbdf" containerName="csi-external-health-monitor-controller"
	Oct 30 18:28:21 addons-819803 kubelet[1205]: I1030 18:28:21.084019    1205 memory_manager.go:354] "RemoveStaleState removing state" podUID="122041b3-674e-42ec-a5a8-ec4a2f43cbdf" containerName="hostpath"
	Oct 30 18:28:21 addons-819803 kubelet[1205]: I1030 18:28:21.084025    1205 memory_manager.go:354] "RemoveStaleState removing state" podUID="122041b3-674e-42ec-a5a8-ec4a2f43cbdf" containerName="csi-provisioner"
	Oct 30 18:28:21 addons-819803 kubelet[1205]: I1030 18:28:21.084034    1205 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b723440-2ccc-45fa-b457-c705dec6e7b5" containerName="local-path-provisioner"
	Oct 30 18:28:21 addons-819803 kubelet[1205]: I1030 18:28:21.084044    1205 memory_manager.go:354] "RemoveStaleState removing state" podUID="042a6627-5f58-4a7c-8adc-393f4a23de62" containerName="csi-resizer"
	Oct 30 18:28:21 addons-819803 kubelet[1205]: I1030 18:28:21.084054    1205 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0ffdb47-736c-4a9f-a9b6-d99bf84b26cd" containerName="volume-snapshot-controller"
	Oct 30 18:28:21 addons-819803 kubelet[1205]: I1030 18:28:21.084067    1205 memory_manager.go:354] "RemoveStaleState removing state" podUID="603a5497-a36a-4123-ad83-8159ef7c6494" containerName="csi-attacher"
	Oct 30 18:28:21 addons-819803 kubelet[1205]: I1030 18:28:21.084073    1205 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae3e815f-7258-4b40-a1db-dfd46db7197a" containerName="cloud-spanner-emulator"
	Oct 30 18:28:21 addons-819803 kubelet[1205]: I1030 18:28:21.159261    1205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zxqt\" (UniqueName: \"kubernetes.io/projected/a0d95ce8-668d-4d45-a042-299981601dff-kube-api-access-8zxqt\") pod \"hello-world-app-55bf9c44b4-srpgj\" (UID: \"a0d95ce8-668d-4d45-a042-299981601dff\") " pod="default/hello-world-app-55bf9c44b4-srpgj"
	
	
	==> storage-provisioner [1f46b0ca80854c9d1a66f9fca2789c69b6e2acd673897114ae279a484bcf1a86] <==
	I1030 18:22:43.328375       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1030 18:22:43.448434       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1030 18:22:43.448514       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1030 18:22:43.523347       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1030 18:22:43.525048       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-819803_034d30ff-96f9-417a-8e53-a0e7c92aa4b7!
	I1030 18:22:43.538012       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"36e5771e-0220-43a1-9ab6-cb578de568ee", APIVersion:"v1", ResourceVersion:"692", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-819803_034d30ff-96f9-417a-8e53-a0e7c92aa4b7 became leader
	I1030 18:22:43.625480       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-819803_034d30ff-96f9-417a-8e53-a0e7c92aa4b7!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-819803 -n addons-819803
helpers_test.go:261: (dbg) Run:  kubectl --context addons-819803 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-55bf9c44b4-srpgj ingress-nginx-admission-create-hqldd ingress-nginx-admission-patch-5tqhz
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-819803 describe pod hello-world-app-55bf9c44b4-srpgj ingress-nginx-admission-create-hqldd ingress-nginx-admission-patch-5tqhz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-819803 describe pod hello-world-app-55bf9c44b4-srpgj ingress-nginx-admission-create-hqldd ingress-nginx-admission-patch-5tqhz: exit status 1 (66.106606ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-55bf9c44b4-srpgj
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-819803/192.168.39.211
	Start Time:       Wed, 30 Oct 2024 18:28:21 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=55bf9c44b4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-55bf9c44b4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8zxqt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-8zxqt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-55bf9c44b4-srpgj to addons-819803
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-hqldd" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-5tqhz" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-819803 describe pod hello-world-app-55bf9c44b4-srpgj ingress-nginx-admission-create-hqldd ingress-nginx-admission-patch-5tqhz: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-819803 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-819803 addons disable ingress-dns --alsologtostderr -v=1: (1.085247366s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-819803 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-819803 addons disable ingress --alsologtostderr -v=1: (7.852271557s)
--- FAIL: TestAddons/parallel/Ingress (154.38s)

                                                
                                    
TestAddons/parallel/MetricsServer (364.17s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 4.204818ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-trqq2" [07b3ba0b-ec2c-4a8f-83d7-318abeb6f80a] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003709457s
addons_test.go:402: (dbg) Run:  kubectl --context addons-819803 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-819803 top pods -n kube-system: exit status 1 (82.1716ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-sdqnr, age: 3m9.560912858s

                                                
                                                
** /stderr **
I1030 18:25:46.564118  389144 retry.go:31] will retry after 1.520500521s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-819803 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-819803 top pods -n kube-system: exit status 1 (80.004377ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-sdqnr, age: 3m11.163602143s

                                                
                                                
** /stderr **
I1030 18:25:48.165995  389144 retry.go:31] will retry after 4.501527022s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-819803 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-819803 top pods -n kube-system: exit status 1 (78.358583ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-sdqnr, age: 3m15.744672066s

                                                
                                                
** /stderr **
I1030 18:25:52.746869  389144 retry.go:31] will retry after 7.473519205s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-819803 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-819803 top pods -n kube-system: exit status 1 (104.484977ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-sdqnr, age: 3m23.32278719s

                                                
                                                
** /stderr **
I1030 18:26:00.326000  389144 retry.go:31] will retry after 11.019857119s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-819803 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-819803 top pods -n kube-system: exit status 1 (123.879428ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-sdqnr, age: 3m34.468128809s

                                                
                                                
** /stderr **
I1030 18:26:11.470451  389144 retry.go:31] will retry after 18.649179601s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-819803 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-819803 top pods -n kube-system: exit status 1 (66.92514ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-sdqnr, age: 3m53.185326997s

                                                
                                                
** /stderr **
I1030 18:26:30.187513  389144 retry.go:31] will retry after 20.725817185s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-819803 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-819803 top pods -n kube-system: exit status 1 (66.458591ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-sdqnr, age: 4m13.978551208s

                                                
                                                
** /stderr **
I1030 18:26:50.980915  389144 retry.go:31] will retry after 50.641831251s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-819803 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-819803 top pods -n kube-system: exit status 1 (67.243252ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-sdqnr, age: 5m4.690445799s

                                                
                                                
** /stderr **
I1030 18:27:41.692729  389144 retry.go:31] will retry after 1m1.938966477s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-819803 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-819803 top pods -n kube-system: exit status 1 (65.876758ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-sdqnr, age: 6m6.696093659s

                                                
                                                
** /stderr **
I1030 18:28:43.698598  389144 retry.go:31] will retry after 1m25.12810776s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-819803 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-819803 top pods -n kube-system: exit status 1 (63.381625ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-sdqnr, age: 7m31.888036139s

                                                
                                                
** /stderr **
I1030 18:30:08.890470  389144 retry.go:31] will retry after 38.003090958s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-819803 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-819803 top pods -n kube-system: exit status 1 (65.879911ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-sdqnr, age: 8m9.957911012s

                                                
                                                
** /stderr **
I1030 18:30:46.960700  389144 retry.go:31] will retry after 55.029242936s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-819803 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-819803 top pods -n kube-system: exit status 1 (66.268134ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-sdqnr, age: 9m5.057124263s

                                                
                                                
** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
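The trace above shows the test helper polling "kubectl top pods" with progressively longer waits before giving up. Purely as an illustration of that backoff pattern (this is not minikube's actual retry.go; the function name and the fixed doubling are assumptions for the sketch), a minimal Go version might look like:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// retryTopPods polls `kubectl top pods -n kube-system` until it succeeds or
// the attempts are exhausted, roughly doubling the wait between attempts.
// Illustrative sketch only; names and constants are assumptions.
func retryTopPods(kubeContext string, attempts int, initialWait time.Duration) error {
	wait := initialWait
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"top", "pods", "-n", "kube-system").CombinedOutput()
		if err == nil {
			fmt.Printf("metrics available:\n%s", out)
			return nil
		}
		lastErr = err
		fmt.Printf("will retry after %s: %v\n", wait, err)
		time.Sleep(wait)
		wait *= 2 // back off before the next attempt
	}
	return fmt.Errorf("metrics never became available: %w", lastErr)
}

func main() {
	if err := retryTopPods("addons-819803", 5, 2*time.Second); err != nil {
		fmt.Println(err)
	}
}

Note that the waits in the trace above are not a strict doubling (they also shrink near the end), so the fixed doubling in this sketch is a simplification of whatever randomized backoff the helper actually uses.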
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-819803 -n addons-819803
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-819803 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-819803 logs -n 25: (1.215442925s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-765166                                                                     | download-only-765166 | jenkins | v1.34.0 | 30 Oct 24 18:21 UTC | 30 Oct 24 18:21 UTC |
	| delete  | -p download-only-293078                                                                     | download-only-293078 | jenkins | v1.34.0 | 30 Oct 24 18:21 UTC | 30 Oct 24 18:21 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-605542 | jenkins | v1.34.0 | 30 Oct 24 18:21 UTC |                     |
	|         | binary-mirror-605542                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:43099                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-605542                                                                     | binary-mirror-605542 | jenkins | v1.34.0 | 30 Oct 24 18:21 UTC | 30 Oct 24 18:21 UTC |
	| addons  | disable dashboard -p                                                                        | addons-819803        | jenkins | v1.34.0 | 30 Oct 24 18:21 UTC |                     |
	|         | addons-819803                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-819803        | jenkins | v1.34.0 | 30 Oct 24 18:21 UTC |                     |
	|         | addons-819803                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-819803 --wait=true                                                                | addons-819803        | jenkins | v1.34.0 | 30 Oct 24 18:21 UTC | 30 Oct 24 18:25 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-819803 addons disable                                                                | addons-819803        | jenkins | v1.34.0 | 30 Oct 24 18:25 UTC | 30 Oct 24 18:25 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-819803 addons disable                                                                | addons-819803        | jenkins | v1.34.0 | 30 Oct 24 18:25 UTC | 30 Oct 24 18:25 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-819803        | jenkins | v1.34.0 | 30 Oct 24 18:25 UTC | 30 Oct 24 18:25 UTC |
	|         | -p addons-819803                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-819803 addons                                                                        | addons-819803        | jenkins | v1.34.0 | 30 Oct 24 18:25 UTC | 30 Oct 24 18:25 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-819803 addons                                                                        | addons-819803        | jenkins | v1.34.0 | 30 Oct 24 18:25 UTC | 30 Oct 24 18:25 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-819803 addons disable                                                                | addons-819803        | jenkins | v1.34.0 | 30 Oct 24 18:25 UTC | 30 Oct 24 18:26 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-819803 ip                                                                            | addons-819803        | jenkins | v1.34.0 | 30 Oct 24 18:25 UTC | 30 Oct 24 18:25 UTC |
	| addons  | addons-819803 addons disable                                                                | addons-819803        | jenkins | v1.34.0 | 30 Oct 24 18:25 UTC | 30 Oct 24 18:26 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-819803 addons disable                                                                | addons-819803        | jenkins | v1.34.0 | 30 Oct 24 18:26 UTC | 30 Oct 24 18:26 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-819803 ssh curl -s                                                                   | addons-819803        | jenkins | v1.34.0 | 30 Oct 24 18:26 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ssh     | addons-819803 ssh cat                                                                       | addons-819803        | jenkins | v1.34.0 | 30 Oct 24 18:26 UTC | 30 Oct 24 18:26 UTC |
	|         | /opt/local-path-provisioner/pvc-bc29ddce-63c6-4328-8e8f-fb3484c4de83_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-819803 addons disable                                                                | addons-819803        | jenkins | v1.34.0 | 30 Oct 24 18:26 UTC | 30 Oct 24 18:27 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-819803 addons                                                                        | addons-819803        | jenkins | v1.34.0 | 30 Oct 24 18:26 UTC | 30 Oct 24 18:26 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-819803 addons                                                                        | addons-819803        | jenkins | v1.34.0 | 30 Oct 24 18:26 UTC | 30 Oct 24 18:27 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-819803 addons                                                                        | addons-819803        | jenkins | v1.34.0 | 30 Oct 24 18:27 UTC | 30 Oct 24 18:27 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-819803 ip                                                                            | addons-819803        | jenkins | v1.34.0 | 30 Oct 24 18:28 UTC | 30 Oct 24 18:28 UTC |
	| addons  | addons-819803 addons disable                                                                | addons-819803        | jenkins | v1.34.0 | 30 Oct 24 18:28 UTC | 30 Oct 24 18:28 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-819803 addons disable                                                                | addons-819803        | jenkins | v1.34.0 | 30 Oct 24 18:28 UTC | 30 Oct 24 18:28 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/30 18:21:46
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1030 18:21:46.377146  389930 out.go:345] Setting OutFile to fd 1 ...
	I1030 18:21:46.377251  389930 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 18:21:46.377259  389930 out.go:358] Setting ErrFile to fd 2...
	I1030 18:21:46.377263  389930 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 18:21:46.377433  389930 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19883-381834/.minikube/bin
	I1030 18:21:46.378040  389930 out.go:352] Setting JSON to false
	I1030 18:21:46.378963  389930 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":7449,"bootTime":1730305057,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1030 18:21:46.379079  389930 start.go:139] virtualization: kvm guest
	I1030 18:21:46.381456  389930 out.go:177] * [addons-819803] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1030 18:21:46.382850  389930 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 18:21:46.382858  389930 notify.go:220] Checking for updates...
	I1030 18:21:46.385485  389930 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 18:21:46.387091  389930 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 18:21:46.388369  389930 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 18:21:46.389574  389930 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1030 18:21:46.390796  389930 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 18:21:46.392083  389930 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 18:21:46.423263  389930 out.go:177] * Using the kvm2 driver based on user configuration
	I1030 18:21:46.424520  389930 start.go:297] selected driver: kvm2
	I1030 18:21:46.424533  389930 start.go:901] validating driver "kvm2" against <nil>
	I1030 18:21:46.424547  389930 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 18:21:46.425307  389930 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 18:21:46.425405  389930 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19883-381834/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1030 18:21:46.439927  389930 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1030 18:21:46.439984  389930 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1030 18:21:46.440231  389930 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 18:21:46.440268  389930 cni.go:84] Creating CNI manager for ""
	I1030 18:21:46.440323  389930 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 18:21:46.440334  389930 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1030 18:21:46.440388  389930 start.go:340] cluster config:
	{Name:addons-819803 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-819803 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 18:21:46.440499  389930 iso.go:125] acquiring lock: {Name:mk4a5115b605a7a5e9069193daa664d721189792 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 18:21:46.442337  389930 out.go:177] * Starting "addons-819803" primary control-plane node in "addons-819803" cluster
	I1030 18:21:46.443613  389930 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 18:21:46.443648  389930 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1030 18:21:46.443660  389930 cache.go:56] Caching tarball of preloaded images
	I1030 18:21:46.443734  389930 preload.go:172] Found /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1030 18:21:46.443745  389930 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1030 18:21:46.444053  389930 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/config.json ...
	I1030 18:21:46.444078  389930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/config.json: {Name:mk55690a6762df711e62dd40075acaa4a8fe5327 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:21:46.444222  389930 start.go:360] acquireMachinesLock for addons-819803: {Name:mk35e25f53fa8cfadb39ca0ecdccfc2b3fbe845b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 18:21:46.444287  389930 start.go:364] duration metric: took 48.42µs to acquireMachinesLock for "addons-819803"
	I1030 18:21:46.444311  389930 start.go:93] Provisioning new machine with config: &{Name:addons-819803 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-819803 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 18:21:46.444381  389930 start.go:125] createHost starting for "" (driver="kvm2")
	I1030 18:21:46.446035  389930 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1030 18:21:46.446177  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:21:46.446229  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:21:46.460089  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39277
	I1030 18:21:46.460588  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:21:46.461261  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:21:46.461290  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:21:46.461637  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:21:46.461807  389930 main.go:141] libmachine: (addons-819803) Calling .GetMachineName
	I1030 18:21:46.462004  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:21:46.462146  389930 start.go:159] libmachine.API.Create for "addons-819803" (driver="kvm2")
	I1030 18:21:46.462174  389930 client.go:168] LocalClient.Create starting
	I1030 18:21:46.462210  389930 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem
	I1030 18:21:46.523366  389930 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem
	I1030 18:21:46.675982  389930 main.go:141] libmachine: Running pre-create checks...
	I1030 18:21:46.676009  389930 main.go:141] libmachine: (addons-819803) Calling .PreCreateCheck
	I1030 18:21:46.676528  389930 main.go:141] libmachine: (addons-819803) Calling .GetConfigRaw
	I1030 18:21:46.677026  389930 main.go:141] libmachine: Creating machine...
	I1030 18:21:46.677042  389930 main.go:141] libmachine: (addons-819803) Calling .Create
	I1030 18:21:46.677217  389930 main.go:141] libmachine: (addons-819803) Creating KVM machine...
	I1030 18:21:46.678312  389930 main.go:141] libmachine: (addons-819803) DBG | found existing default KVM network
	I1030 18:21:46.679140  389930 main.go:141] libmachine: (addons-819803) DBG | I1030 18:21:46.678984  389952 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211f0}
	I1030 18:21:46.679208  389930 main.go:141] libmachine: (addons-819803) DBG | created network xml: 
	I1030 18:21:46.679227  389930 main.go:141] libmachine: (addons-819803) DBG | <network>
	I1030 18:21:46.679237  389930 main.go:141] libmachine: (addons-819803) DBG |   <name>mk-addons-819803</name>
	I1030 18:21:46.679245  389930 main.go:141] libmachine: (addons-819803) DBG |   <dns enable='no'/>
	I1030 18:21:46.679256  389930 main.go:141] libmachine: (addons-819803) DBG |   
	I1030 18:21:46.679265  389930 main.go:141] libmachine: (addons-819803) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1030 18:21:46.679277  389930 main.go:141] libmachine: (addons-819803) DBG |     <dhcp>
	I1030 18:21:46.679286  389930 main.go:141] libmachine: (addons-819803) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1030 18:21:46.679309  389930 main.go:141] libmachine: (addons-819803) DBG |     </dhcp>
	I1030 18:21:46.679321  389930 main.go:141] libmachine: (addons-819803) DBG |   </ip>
	I1030 18:21:46.679327  389930 main.go:141] libmachine: (addons-819803) DBG |   
	I1030 18:21:46.679332  389930 main.go:141] libmachine: (addons-819803) DBG | </network>
	I1030 18:21:46.679338  389930 main.go:141] libmachine: (addons-819803) DBG | 
	I1030 18:21:46.685075  389930 main.go:141] libmachine: (addons-819803) DBG | trying to create private KVM network mk-addons-819803 192.168.39.0/24...
	I1030 18:21:46.748185  389930 main.go:141] libmachine: (addons-819803) DBG | private KVM network mk-addons-819803 192.168.39.0/24 created
	I1030 18:21:46.748222  389930 main.go:141] libmachine: (addons-819803) Setting up store path in /home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803 ...
	I1030 18:21:46.748257  389930 main.go:141] libmachine: (addons-819803) DBG | I1030 18:21:46.748149  389952 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 18:21:46.748282  389930 main.go:141] libmachine: (addons-819803) Building disk image from file:///home/jenkins/minikube-integration/19883-381834/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1030 18:21:46.748314  389930 main.go:141] libmachine: (addons-819803) Downloading /home/jenkins/minikube-integration/19883-381834/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19883-381834/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1030 18:21:47.042122  389930 main.go:141] libmachine: (addons-819803) DBG | I1030 18:21:47.041994  389952 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa...
	I1030 18:21:47.276920  389930 main.go:141] libmachine: (addons-819803) DBG | I1030 18:21:47.276746  389952 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/addons-819803.rawdisk...
	I1030 18:21:47.276957  389930 main.go:141] libmachine: (addons-819803) DBG | Writing magic tar header
	I1030 18:21:47.276966  389930 main.go:141] libmachine: (addons-819803) DBG | Writing SSH key tar header
	I1030 18:21:47.276973  389930 main.go:141] libmachine: (addons-819803) DBG | I1030 18:21:47.276871  389952 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803 ...
	I1030 18:21:47.276986  389930 main.go:141] libmachine: (addons-819803) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803
	I1030 18:21:47.277038  389930 main.go:141] libmachine: (addons-819803) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube/machines
	I1030 18:21:47.277059  389930 main.go:141] libmachine: (addons-819803) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803 (perms=drwx------)
	I1030 18:21:47.277066  389930 main.go:141] libmachine: (addons-819803) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 18:21:47.277077  389930 main.go:141] libmachine: (addons-819803) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834
	I1030 18:21:47.277085  389930 main.go:141] libmachine: (addons-819803) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1030 18:21:47.277111  389930 main.go:141] libmachine: (addons-819803) DBG | Checking permissions on dir: /home/jenkins
	I1030 18:21:47.277121  389930 main.go:141] libmachine: (addons-819803) DBG | Checking permissions on dir: /home
	I1030 18:21:47.277129  389930 main.go:141] libmachine: (addons-819803) DBG | Skipping /home - not owner
	I1030 18:21:47.277159  389930 main.go:141] libmachine: (addons-819803) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube/machines (perms=drwxr-xr-x)
	I1030 18:21:47.277181  389930 main.go:141] libmachine: (addons-819803) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube (perms=drwxr-xr-x)
	I1030 18:21:47.277224  389930 main.go:141] libmachine: (addons-819803) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834 (perms=drwxrwxr-x)
	I1030 18:21:47.277251  389930 main.go:141] libmachine: (addons-819803) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1030 18:21:47.277271  389930 main.go:141] libmachine: (addons-819803) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1030 18:21:47.277290  389930 main.go:141] libmachine: (addons-819803) Creating domain...
	I1030 18:21:47.277973  389930 main.go:141] libmachine: (addons-819803) define libvirt domain using xml: 
	I1030 18:21:47.277991  389930 main.go:141] libmachine: (addons-819803) <domain type='kvm'>
	I1030 18:21:47.278000  389930 main.go:141] libmachine: (addons-819803)   <name>addons-819803</name>
	I1030 18:21:47.278008  389930 main.go:141] libmachine: (addons-819803)   <memory unit='MiB'>4000</memory>
	I1030 18:21:47.278029  389930 main.go:141] libmachine: (addons-819803)   <vcpu>2</vcpu>
	I1030 18:21:47.278040  389930 main.go:141] libmachine: (addons-819803)   <features>
	I1030 18:21:47.278049  389930 main.go:141] libmachine: (addons-819803)     <acpi/>
	I1030 18:21:47.278055  389930 main.go:141] libmachine: (addons-819803)     <apic/>
	I1030 18:21:47.278078  389930 main.go:141] libmachine: (addons-819803)     <pae/>
	I1030 18:21:47.278093  389930 main.go:141] libmachine: (addons-819803)     
	I1030 18:21:47.278126  389930 main.go:141] libmachine: (addons-819803)   </features>
	I1030 18:21:47.278145  389930 main.go:141] libmachine: (addons-819803)   <cpu mode='host-passthrough'>
	I1030 18:21:47.278152  389930 main.go:141] libmachine: (addons-819803)   
	I1030 18:21:47.278171  389930 main.go:141] libmachine: (addons-819803)   </cpu>
	I1030 18:21:47.278180  389930 main.go:141] libmachine: (addons-819803)   <os>
	I1030 18:21:47.278185  389930 main.go:141] libmachine: (addons-819803)     <type>hvm</type>
	I1030 18:21:47.278191  389930 main.go:141] libmachine: (addons-819803)     <boot dev='cdrom'/>
	I1030 18:21:47.278196  389930 main.go:141] libmachine: (addons-819803)     <boot dev='hd'/>
	I1030 18:21:47.278202  389930 main.go:141] libmachine: (addons-819803)     <bootmenu enable='no'/>
	I1030 18:21:47.278206  389930 main.go:141] libmachine: (addons-819803)   </os>
	I1030 18:21:47.278213  389930 main.go:141] libmachine: (addons-819803)   <devices>
	I1030 18:21:47.278223  389930 main.go:141] libmachine: (addons-819803)     <disk type='file' device='cdrom'>
	I1030 18:21:47.278233  389930 main.go:141] libmachine: (addons-819803)       <source file='/home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/boot2docker.iso'/>
	I1030 18:21:47.278242  389930 main.go:141] libmachine: (addons-819803)       <target dev='hdc' bus='scsi'/>
	I1030 18:21:47.278251  389930 main.go:141] libmachine: (addons-819803)       <readonly/>
	I1030 18:21:47.278258  389930 main.go:141] libmachine: (addons-819803)     </disk>
	I1030 18:21:47.278263  389930 main.go:141] libmachine: (addons-819803)     <disk type='file' device='disk'>
	I1030 18:21:47.278271  389930 main.go:141] libmachine: (addons-819803)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1030 18:21:47.278279  389930 main.go:141] libmachine: (addons-819803)       <source file='/home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/addons-819803.rawdisk'/>
	I1030 18:21:47.278286  389930 main.go:141] libmachine: (addons-819803)       <target dev='hda' bus='virtio'/>
	I1030 18:21:47.278306  389930 main.go:141] libmachine: (addons-819803)     </disk>
	I1030 18:21:47.278323  389930 main.go:141] libmachine: (addons-819803)     <interface type='network'>
	I1030 18:21:47.278335  389930 main.go:141] libmachine: (addons-819803)       <source network='mk-addons-819803'/>
	I1030 18:21:47.278346  389930 main.go:141] libmachine: (addons-819803)       <model type='virtio'/>
	I1030 18:21:47.278358  389930 main.go:141] libmachine: (addons-819803)     </interface>
	I1030 18:21:47.278368  389930 main.go:141] libmachine: (addons-819803)     <interface type='network'>
	I1030 18:21:47.278378  389930 main.go:141] libmachine: (addons-819803)       <source network='default'/>
	I1030 18:21:47.278388  389930 main.go:141] libmachine: (addons-819803)       <model type='virtio'/>
	I1030 18:21:47.278401  389930 main.go:141] libmachine: (addons-819803)     </interface>
	I1030 18:21:47.278415  389930 main.go:141] libmachine: (addons-819803)     <serial type='pty'>
	I1030 18:21:47.278427  389930 main.go:141] libmachine: (addons-819803)       <target port='0'/>
	I1030 18:21:47.278437  389930 main.go:141] libmachine: (addons-819803)     </serial>
	I1030 18:21:47.278454  389930 main.go:141] libmachine: (addons-819803)     <console type='pty'>
	I1030 18:21:47.278475  389930 main.go:141] libmachine: (addons-819803)       <target type='serial' port='0'/>
	I1030 18:21:47.278511  389930 main.go:141] libmachine: (addons-819803)     </console>
	I1030 18:21:47.278530  389930 main.go:141] libmachine: (addons-819803)     <rng model='virtio'>
	I1030 18:21:47.278543  389930 main.go:141] libmachine: (addons-819803)       <backend model='random'>/dev/random</backend>
	I1030 18:21:47.278558  389930 main.go:141] libmachine: (addons-819803)     </rng>
	I1030 18:21:47.278568  389930 main.go:141] libmachine: (addons-819803)     
	I1030 18:21:47.278583  389930 main.go:141] libmachine: (addons-819803)     
	I1030 18:21:47.278595  389930 main.go:141] libmachine: (addons-819803)   </devices>
	I1030 18:21:47.278601  389930 main.go:141] libmachine: (addons-819803) </domain>
	I1030 18:21:47.278608  389930 main.go:141] libmachine: (addons-819803) 
	I1030 18:21:47.284437  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:22:34:18 in network default
	I1030 18:21:47.284918  389930 main.go:141] libmachine: (addons-819803) Ensuring networks are active...
	I1030 18:21:47.284933  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:21:47.285572  389930 main.go:141] libmachine: (addons-819803) Ensuring network default is active
	I1030 18:21:47.285846  389930 main.go:141] libmachine: (addons-819803) Ensuring network mk-addons-819803 is active
	I1030 18:21:47.286306  389930 main.go:141] libmachine: (addons-819803) Getting domain xml...
	I1030 18:21:47.286972  389930 main.go:141] libmachine: (addons-819803) Creating domain...
	I1030 18:21:48.671057  389930 main.go:141] libmachine: (addons-819803) Waiting to get IP...
	I1030 18:21:48.671853  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:21:48.672339  389930 main.go:141] libmachine: (addons-819803) DBG | unable to find current IP address of domain addons-819803 in network mk-addons-819803
	I1030 18:21:48.672421  389930 main.go:141] libmachine: (addons-819803) DBG | I1030 18:21:48.672347  389952 retry.go:31] will retry after 291.069623ms: waiting for machine to come up
	I1030 18:21:48.965081  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:21:48.965507  389930 main.go:141] libmachine: (addons-819803) DBG | unable to find current IP address of domain addons-819803 in network mk-addons-819803
	I1030 18:21:48.965537  389930 main.go:141] libmachine: (addons-819803) DBG | I1030 18:21:48.965457  389952 retry.go:31] will retry after 354.585457ms: waiting for machine to come up
	I1030 18:21:49.322206  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:21:49.322606  389930 main.go:141] libmachine: (addons-819803) DBG | unable to find current IP address of domain addons-819803 in network mk-addons-819803
	I1030 18:21:49.322635  389930 main.go:141] libmachine: (addons-819803) DBG | I1030 18:21:49.322558  389952 retry.go:31] will retry after 482.031018ms: waiting for machine to come up
	I1030 18:21:49.805727  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:21:49.806155  389930 main.go:141] libmachine: (addons-819803) DBG | unable to find current IP address of domain addons-819803 in network mk-addons-819803
	I1030 18:21:49.806184  389930 main.go:141] libmachine: (addons-819803) DBG | I1030 18:21:49.806114  389952 retry.go:31] will retry after 603.123075ms: waiting for machine to come up
	I1030 18:21:50.411008  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:21:50.411349  389930 main.go:141] libmachine: (addons-819803) DBG | unable to find current IP address of domain addons-819803 in network mk-addons-819803
	I1030 18:21:50.411371  389930 main.go:141] libmachine: (addons-819803) DBG | I1030 18:21:50.411301  389952 retry.go:31] will retry after 466.752397ms: waiting for machine to come up
	I1030 18:21:50.880001  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:21:50.880397  389930 main.go:141] libmachine: (addons-819803) DBG | unable to find current IP address of domain addons-819803 in network mk-addons-819803
	I1030 18:21:50.880441  389930 main.go:141] libmachine: (addons-819803) DBG | I1030 18:21:50.880351  389952 retry.go:31] will retry after 619.924687ms: waiting for machine to come up
	I1030 18:21:51.501985  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:21:51.502439  389930 main.go:141] libmachine: (addons-819803) DBG | unable to find current IP address of domain addons-819803 in network mk-addons-819803
	I1030 18:21:51.502502  389930 main.go:141] libmachine: (addons-819803) DBG | I1030 18:21:51.502372  389952 retry.go:31] will retry after 1.045044225s: waiting for machine to come up
	I1030 18:21:52.549198  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:21:52.549616  389930 main.go:141] libmachine: (addons-819803) DBG | unable to find current IP address of domain addons-819803 in network mk-addons-819803
	I1030 18:21:52.549644  389930 main.go:141] libmachine: (addons-819803) DBG | I1030 18:21:52.549572  389952 retry.go:31] will retry after 1.370089219s: waiting for machine to come up
	I1030 18:21:53.922267  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:21:53.922659  389930 main.go:141] libmachine: (addons-819803) DBG | unable to find current IP address of domain addons-819803 in network mk-addons-819803
	I1030 18:21:53.922681  389930 main.go:141] libmachine: (addons-819803) DBG | I1030 18:21:53.922639  389952 retry.go:31] will retry after 1.236302299s: waiting for machine to come up
	I1030 18:21:55.161330  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:21:55.161760  389930 main.go:141] libmachine: (addons-819803) DBG | unable to find current IP address of domain addons-819803 in network mk-addons-819803
	I1030 18:21:55.161789  389930 main.go:141] libmachine: (addons-819803) DBG | I1030 18:21:55.161723  389952 retry.go:31] will retry after 2.307993642s: waiting for machine to come up
	I1030 18:21:57.471490  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:21:57.471917  389930 main.go:141] libmachine: (addons-819803) DBG | unable to find current IP address of domain addons-819803 in network mk-addons-819803
	I1030 18:21:57.471952  389930 main.go:141] libmachine: (addons-819803) DBG | I1030 18:21:57.471858  389952 retry.go:31] will retry after 2.168747245s: waiting for machine to come up
	I1030 18:21:59.643105  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:21:59.643461  389930 main.go:141] libmachine: (addons-819803) DBG | unable to find current IP address of domain addons-819803 in network mk-addons-819803
	I1030 18:21:59.643490  389930 main.go:141] libmachine: (addons-819803) DBG | I1030 18:21:59.643409  389952 retry.go:31] will retry after 2.480578318s: waiting for machine to come up
	I1030 18:22:02.125197  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:02.125611  389930 main.go:141] libmachine: (addons-819803) DBG | unable to find current IP address of domain addons-819803 in network mk-addons-819803
	I1030 18:22:02.125639  389930 main.go:141] libmachine: (addons-819803) DBG | I1030 18:22:02.125545  389952 retry.go:31] will retry after 2.851771618s: waiting for machine to come up
	I1030 18:22:04.980556  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:04.980952  389930 main.go:141] libmachine: (addons-819803) DBG | unable to find current IP address of domain addons-819803 in network mk-addons-819803
	I1030 18:22:04.980981  389930 main.go:141] libmachine: (addons-819803) DBG | I1030 18:22:04.980894  389952 retry.go:31] will retry after 4.668600476s: waiting for machine to come up
	I1030 18:22:09.653442  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:09.653962  389930 main.go:141] libmachine: (addons-819803) Found IP for machine: 192.168.39.211
	I1030 18:22:09.653987  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has current primary IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:09.653993  389930 main.go:141] libmachine: (addons-819803) Reserving static IP address...
	I1030 18:22:09.654337  389930 main.go:141] libmachine: (addons-819803) DBG | unable to find host DHCP lease matching {name: "addons-819803", mac: "52:54:00:c8:a4:df", ip: "192.168.39.211"} in network mk-addons-819803
	I1030 18:22:09.725392  389930 main.go:141] libmachine: (addons-819803) DBG | Getting to WaitForSSH function...
	I1030 18:22:09.725426  389930 main.go:141] libmachine: (addons-819803) Reserved static IP address: 192.168.39.211
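The "will retry after …" lines above come from a poll-until-ready loop around the libvirt DHCP lease lookup, sleeping for a randomized, growing delay between attempts. A minimal Go sketch of that pattern follows; the function name waitForIP and the backoff constants are illustrative assumptions, not minikube's actual retry package.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address or the deadline passes,
// sleeping for a randomized, growing backoff after each failure, mirroring
// the "will retry after ..." messages in the log above.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		// Add a little jitter, then grow the base delay for the next round.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if backoff < 5*time.Second {
			backoff = backoff * 3 / 2
		}
	}
	return "", errors.New("timed out waiting for an IP address")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no DHCP lease yet")
		}
		return "192.168.39.211", nil
	}, time.Minute)
	fmt.Println(ip, err)
}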
	I1030 18:22:09.725439  389930 main.go:141] libmachine: (addons-819803) Waiting for SSH to be available...
	I1030 18:22:09.727953  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:09.728252  389930 main.go:141] libmachine: (addons-819803) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803
	I1030 18:22:09.728281  389930 main.go:141] libmachine: (addons-819803) DBG | unable to find defined IP address of network mk-addons-819803 interface with MAC address 52:54:00:c8:a4:df
	I1030 18:22:09.728397  389930 main.go:141] libmachine: (addons-819803) DBG | Using SSH client type: external
	I1030 18:22:09.728426  389930 main.go:141] libmachine: (addons-819803) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa (-rw-------)
	I1030 18:22:09.728457  389930 main.go:141] libmachine: (addons-819803) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 18:22:09.728471  389930 main.go:141] libmachine: (addons-819803) DBG | About to run SSH command:
	I1030 18:22:09.728483  389930 main.go:141] libmachine: (addons-819803) DBG | exit 0
	I1030 18:22:09.732017  389930 main.go:141] libmachine: (addons-819803) DBG | SSH cmd err, output: exit status 255: 
	I1030 18:22:09.732039  389930 main.go:141] libmachine: (addons-819803) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1030 18:22:09.732051  389930 main.go:141] libmachine: (addons-819803) DBG | command : exit 0
	I1030 18:22:09.732058  389930 main.go:141] libmachine: (addons-819803) DBG | err     : exit status 255
	I1030 18:22:09.732067  389930 main.go:141] libmachine: (addons-819803) DBG | output  : 
	I1030 18:22:12.732735  389930 main.go:141] libmachine: (addons-819803) DBG | Getting to WaitForSSH function...
	I1030 18:22:12.735243  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:12.735643  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:12.735669  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:12.735775  389930 main.go:141] libmachine: (addons-819803) DBG | Using SSH client type: external
	I1030 18:22:12.735800  389930 main.go:141] libmachine: (addons-819803) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa (-rw-------)
	I1030 18:22:12.736246  389930 main.go:141] libmachine: (addons-819803) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.211 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 18:22:12.736272  389930 main.go:141] libmachine: (addons-819803) DBG | About to run SSH command:
	I1030 18:22:12.736286  389930 main.go:141] libmachine: (addons-819803) DBG | exit 0
	I1030 18:22:12.858596  389930 main.go:141] libmachine: (addons-819803) DBG | SSH cmd err, output: <nil>: 
	I1030 18:22:12.858865  389930 main.go:141] libmachine: (addons-819803) KVM machine creation complete!
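The WaitForSSH step above shells out to /usr/bin/ssh with the options shown in the log and runs "exit 0"; a zero exit status means the guest's SSH daemon is reachable (the first attempt failed with status 255 because the lease was not yet published). A rough Go sketch of that probe; the key path placeholder and the retry interval are assumptions, while the ssh flags are taken from the log.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs `exit 0` over ssh and reports whether the command succeeded.
func sshReady(host, keyPath string) bool {
	args := []string{
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@" + host,
		"exit 0",
	}
	return exec.Command("/usr/bin/ssh", args...).Run() == nil
}

func main() {
	for i := 0; i < 20; i++ {
		if sshReady("192.168.39.211", "/path/to/id_rsa") { // placeholder key path
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(3 * time.Second) // the log shows a ~3s gap between attempts
	}
	fmt.Println("gave up waiting for SSH")
}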
	I1030 18:22:12.859186  389930 main.go:141] libmachine: (addons-819803) Calling .GetConfigRaw
	I1030 18:22:12.859816  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:12.860040  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:12.860205  389930 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1030 18:22:12.860220  389930 main.go:141] libmachine: (addons-819803) Calling .GetState
	I1030 18:22:12.861368  389930 main.go:141] libmachine: Detecting operating system of created instance...
	I1030 18:22:12.861383  389930 main.go:141] libmachine: Waiting for SSH to be available...
	I1030 18:22:12.861388  389930 main.go:141] libmachine: Getting to WaitForSSH function...
	I1030 18:22:12.861393  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:12.863559  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:12.863931  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:12.863973  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:12.864089  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:12.864251  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:12.864381  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:12.864476  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:12.864579  389930 main.go:141] libmachine: Using SSH client type: native
	I1030 18:22:12.864814  389930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1030 18:22:12.864828  389930 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1030 18:22:12.965675  389930 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 18:22:12.965698  389930 main.go:141] libmachine: Detecting the provisioner...
	I1030 18:22:12.965706  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:12.968089  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:12.968420  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:12.968449  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:12.968568  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:12.968771  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:12.968900  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:12.968996  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:12.969102  389930 main.go:141] libmachine: Using SSH client type: native
	I1030 18:22:12.969320  389930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1030 18:22:12.969341  389930 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1030 18:22:13.071004  389930 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1030 18:22:13.071095  389930 main.go:141] libmachine: found compatible host: buildroot
	I1030 18:22:13.071108  389930 main.go:141] libmachine: Provisioning with buildroot...
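Provisioner detection above amounts to running cat /etc/os-release over SSH and matching the output against known distributions (here Buildroot). A small sketch of that parse, under the simplifying assumption that only the ID field is read:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// detectProvisioner extracts the ID field from /etc/os-release output, as
// captured in the log above. Real detection also inspects versions; this
// sketch keeps only the distro ID.
func detectProvisioner(osRelease string) string {
	sc := bufio.NewScanner(strings.NewReader(osRelease))
	for sc.Scan() {
		if v, ok := strings.CutPrefix(strings.TrimSpace(sc.Text()), "ID="); ok {
			return strings.Trim(v, `"`)
		}
	}
	return "unknown"
}

func main() {
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	fmt.Println("provisioner:", detectProvisioner(out)) // buildroot
}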
	I1030 18:22:13.071119  389930 main.go:141] libmachine: (addons-819803) Calling .GetMachineName
	I1030 18:22:13.071379  389930 buildroot.go:166] provisioning hostname "addons-819803"
	I1030 18:22:13.071421  389930 main.go:141] libmachine: (addons-819803) Calling .GetMachineName
	I1030 18:22:13.071609  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:13.074178  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:13.074540  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:13.074570  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:13.074705  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:13.074900  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:13.075046  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:13.075164  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:13.075284  389930 main.go:141] libmachine: Using SSH client type: native
	I1030 18:22:13.075492  389930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1030 18:22:13.075507  389930 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-819803 && echo "addons-819803" | sudo tee /etc/hostname
	I1030 18:22:13.187982  389930 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-819803
	
	I1030 18:22:13.188031  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:13.190507  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:13.190890  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:13.190928  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:13.191100  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:13.191282  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:13.191452  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:13.191571  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:13.191715  389930 main.go:141] libmachine: Using SSH client type: native
	I1030 18:22:13.191885  389930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1030 18:22:13.191899  389930 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-819803' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-819803/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-819803' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 18:22:13.303237  389930 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 18:22:13.303273  389930 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 18:22:13.303312  389930 buildroot.go:174] setting up certificates
	I1030 18:22:13.303326  389930 provision.go:84] configureAuth start
	I1030 18:22:13.303340  389930 main.go:141] libmachine: (addons-819803) Calling .GetMachineName
	I1030 18:22:13.303633  389930 main.go:141] libmachine: (addons-819803) Calling .GetIP
	I1030 18:22:13.306026  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:13.306337  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:13.306357  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:13.306534  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:13.308382  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:13.308738  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:13.308756  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:13.308874  389930 provision.go:143] copyHostCerts
	I1030 18:22:13.308961  389930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 18:22:13.309139  389930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 18:22:13.309218  389930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 18:22:13.309285  389930 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.addons-819803 san=[127.0.0.1 192.168.39.211 addons-819803 localhost minikube]
	I1030 18:22:13.496268  389930 provision.go:177] copyRemoteCerts
	I1030 18:22:13.496353  389930 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 18:22:13.496390  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:13.499024  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:13.499309  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:13.499342  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:13.499476  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:13.499644  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:13.499817  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:13.499930  389930 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa Username:docker}
	I1030 18:22:13.580725  389930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 18:22:13.604274  389930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1030 18:22:13.626842  389930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1030 18:22:13.649577  389930 provision.go:87] duration metric: took 346.237404ms to configureAuth
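configureAuth generates a TLS server certificate whose subject alternative names are the ones listed in the log (127.0.0.1, 192.168.39.211, addons-819803, localhost, minikube) and then copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A minimal Go sketch of building such a SAN certificate; note that minikube signs server.pem with its own CA, whereas this sketch self-signs purely to stay short.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// SANs taken from the provision.go line above.
	dnsNames := []string{"addons-819803", "localhost", "minikube"}
	ipSANs := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.211")}

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-819803"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     dnsNames,
		IPAddresses:  ipSANs,
	}
	// Self-signed here for brevity; minikube uses its CA cert/key as the parent.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	fmt.Printf("server.pem (%d bytes of PEM)\n%s", len(certPEM), certPEM)
}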
	I1030 18:22:13.649603  389930 buildroot.go:189] setting minikube options for container-runtime
	I1030 18:22:13.649785  389930 config.go:182] Loaded profile config "addons-819803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:22:13.649870  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:13.652722  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:13.653054  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:13.653079  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:13.653250  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:13.653443  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:13.653587  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:13.653712  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:13.653876  389930 main.go:141] libmachine: Using SSH client type: native
	I1030 18:22:13.654043  389930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1030 18:22:13.654058  389930 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 18:22:13.873355  389930 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 18:22:13.873380  389930 main.go:141] libmachine: Checking connection to Docker...
	I1030 18:22:13.873388  389930 main.go:141] libmachine: (addons-819803) Calling .GetURL
	I1030 18:22:13.874717  389930 main.go:141] libmachine: (addons-819803) DBG | Using libvirt version 6000000
	I1030 18:22:13.876865  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:13.877164  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:13.877195  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:13.877373  389930 main.go:141] libmachine: Docker is up and running!
	I1030 18:22:13.877386  389930 main.go:141] libmachine: Reticulating splines...
	I1030 18:22:13.877394  389930 client.go:171] duration metric: took 27.415210037s to LocalClient.Create
	I1030 18:22:13.877420  389930 start.go:167] duration metric: took 27.415274417s to libmachine.API.Create "addons-819803"
	I1030 18:22:13.877434  389930 start.go:293] postStartSetup for "addons-819803" (driver="kvm2")
	I1030 18:22:13.877451  389930 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 18:22:13.877473  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:13.877703  389930 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 18:22:13.877732  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:13.879805  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:13.880115  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:13.880135  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:13.880303  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:13.880475  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:13.880648  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:13.880796  389930 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa Username:docker}
	I1030 18:22:13.961195  389930 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 18:22:13.965134  389930 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 18:22:13.965159  389930 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 18:22:13.965250  389930 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 18:22:13.965278  389930 start.go:296] duration metric: took 87.833483ms for postStartSetup
	I1030 18:22:13.965332  389930 main.go:141] libmachine: (addons-819803) Calling .GetConfigRaw
	I1030 18:22:13.965897  389930 main.go:141] libmachine: (addons-819803) Calling .GetIP
	I1030 18:22:13.968361  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:13.968649  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:13.968685  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:13.968910  389930 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/config.json ...
	I1030 18:22:13.969086  389930 start.go:128] duration metric: took 27.524693623s to createHost
	I1030 18:22:13.969113  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:13.971111  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:13.971374  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:13.971401  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:13.971537  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:13.971729  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:13.971876  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:13.972026  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:13.972170  389930 main.go:141] libmachine: Using SSH client type: native
	I1030 18:22:13.972335  389930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1030 18:22:13.972351  389930 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 18:22:14.075274  389930 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730312534.054770540
	
	I1030 18:22:14.075302  389930 fix.go:216] guest clock: 1730312534.054770540
	I1030 18:22:14.075310  389930 fix.go:229] Guest: 2024-10-30 18:22:14.05477054 +0000 UTC Remote: 2024-10-30 18:22:13.969098834 +0000 UTC m=+27.629342568 (delta=85.671706ms)
	I1030 18:22:14.075349  389930 fix.go:200] guest clock delta is within tolerance: 85.671706ms
	I1030 18:22:14.075355  389930 start.go:83] releasing machines lock for "addons-819803", held for 27.631058158s
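The fix.go lines above compare the guest clock (read with date +%s.%N over SSH) against the host clock and accept the machine when the difference stays inside a tolerance; here the delta was about 85.7ms. A tiny sketch of that check, where the 2s tolerance is an assumption rather than minikube's actual constant:

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest clock is close enough to the host clock.
func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}

func main() {
	// Values taken from the log above: guest clock 1730312534.054770540, delta ~85.671706ms.
	guest := time.Unix(1730312534, 54770540)
	host := guest.Add(-85671706 * time.Nanosecond)
	fmt.Println("within tolerance:", withinTolerance(guest, host, 2*time.Second))
}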
	I1030 18:22:14.075375  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:14.075687  389930 main.go:141] libmachine: (addons-819803) Calling .GetIP
	I1030 18:22:14.077973  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:14.078275  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:14.078307  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:14.078506  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:14.079025  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:14.079210  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:14.079317  389930 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 18:22:14.079386  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:14.079433  389930 ssh_runner.go:195] Run: cat /version.json
	I1030 18:22:14.079459  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:14.081762  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:14.081780  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:14.082059  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:14.082087  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:14.082112  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:14.082133  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:14.082234  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:14.082396  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:14.082407  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:14.082600  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:14.082645  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:14.082789  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:14.082805  389930 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa Username:docker}
	I1030 18:22:14.082918  389930 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa Username:docker}
	I1030 18:22:14.183992  389930 ssh_runner.go:195] Run: systemctl --version
	I1030 18:22:14.189783  389930 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 18:22:14.347846  389930 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 18:22:14.353576  389930 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 18:22:14.353651  389930 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 18:22:14.372746  389930 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 18:22:14.372775  389930 start.go:495] detecting cgroup driver to use...
	I1030 18:22:14.372850  389930 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 18:22:14.392610  389930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 18:22:14.408830  389930 docker.go:217] disabling cri-docker service (if available) ...
	I1030 18:22:14.408885  389930 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 18:22:14.423904  389930 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 18:22:14.439462  389930 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 18:22:14.581365  389930 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 18:22:14.737305  389930 docker.go:233] disabling docker service ...
	I1030 18:22:14.737373  389930 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 18:22:14.751615  389930 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 18:22:14.764338  389930 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 18:22:14.903828  389930 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 18:22:15.034446  389930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 18:22:15.051942  389930 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 18:22:15.069743  389930 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1030 18:22:15.069811  389930 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:22:15.080015  389930 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 18:22:15.080076  389930 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:22:15.090345  389930 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:22:15.100700  389930 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:22:15.110937  389930 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 18:22:15.121495  389930 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:22:15.131703  389930 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:22:15.148129  389930 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:22:15.158324  389930 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 18:22:15.167752  389930 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 18:22:15.167817  389930 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 18:22:15.180379  389930 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 18:22:15.189438  389930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 18:22:15.315918  389930 ssh_runner.go:195] Run: sudo systemctl restart crio
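The sed commands above edit the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf in place before the service is restarted. After they run, the relevant keys end up roughly as shown below; the section headers are assumed from CRI-O's standard configuration layout, while the values are the ones set in the log.

[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]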
	I1030 18:22:15.406680  389930 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 18:22:15.406771  389930 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 18:22:15.412035  389930 start.go:563] Will wait 60s for crictl version
	I1030 18:22:15.412093  389930 ssh_runner.go:195] Run: which crictl
	I1030 18:22:15.415689  389930 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 18:22:15.453281  389930 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1030 18:22:15.453368  389930 ssh_runner.go:195] Run: crio --version
	I1030 18:22:15.481364  389930 ssh_runner.go:195] Run: crio --version
	I1030 18:22:15.510591  389930 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1030 18:22:15.511809  389930 main.go:141] libmachine: (addons-819803) Calling .GetIP
	I1030 18:22:15.513933  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:15.514259  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:15.514292  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:15.514468  389930 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1030 18:22:15.518335  389930 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 18:22:15.530311  389930 kubeadm.go:883] updating cluster {Name:addons-819803 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-819803 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1030 18:22:15.530433  389930 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 18:22:15.530476  389930 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 18:22:15.561495  389930 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1030 18:22:15.561560  389930 ssh_runner.go:195] Run: which lz4
	I1030 18:22:15.565386  389930 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1030 18:22:15.569388  389930 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1030 18:22:15.569422  389930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1030 18:22:16.815091  389930 crio.go:462] duration metric: took 1.249736286s to copy over tarball
	I1030 18:22:16.815165  389930 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1030 18:22:18.895499  389930 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.080282089s)
	I1030 18:22:18.895540  389930 crio.go:469] duration metric: took 2.080418147s to extract the tarball
	I1030 18:22:18.895550  389930 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1030 18:22:18.934730  389930 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 18:22:18.976819  389930 crio.go:514] all images are preloaded for cri-o runtime.
	I1030 18:22:18.976846  389930 cache_images.go:84] Images are preloaded, skipping loading
	I1030 18:22:18.976854  389930 kubeadm.go:934] updating node { 192.168.39.211 8443 v1.31.2 crio true true} ...
	I1030 18:22:18.976961  389930 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-819803 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-819803 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1030 18:22:18.977032  389930 ssh_runner.go:195] Run: crio config
	I1030 18:22:19.022630  389930 cni.go:84] Creating CNI manager for ""
	I1030 18:22:19.022657  389930 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 18:22:19.022669  389930 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1030 18:22:19.022692  389930 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.211 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-819803 NodeName:addons-819803 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.211"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.211 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1030 18:22:19.022831  389930 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.211
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-819803"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.211"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.211"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1030 18:22:19.022894  389930 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1030 18:22:19.033139  389930 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 18:22:19.033217  389930 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1030 18:22:19.042777  389930 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1030 18:22:19.059625  389930 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 18:22:19.076398  389930 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1030 18:22:19.093021  389930 ssh_runner.go:195] Run: grep 192.168.39.211	control-plane.minikube.internal$ /etc/hosts
	I1030 18:22:19.096821  389930 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.211	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 18:22:19.109239  389930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 18:22:19.241397  389930 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 18:22:19.258667  389930 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803 for IP: 192.168.39.211
	I1030 18:22:19.258692  389930 certs.go:194] generating shared ca certs ...
	I1030 18:22:19.258759  389930 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:22:19.258916  389930 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 18:22:19.421313  389930 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt ...
	I1030 18:22:19.421346  389930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt: {Name:mke1baa90fdf9d472688c9dce1a8cbdb9429180e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:22:19.421528  389930 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key ...
	I1030 18:22:19.421545  389930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key: {Name:mk39960ca0f7a604b923049b394a9dd190b5c799 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:22:19.421651  389930 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 18:22:19.800363  389930 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt ...
	I1030 18:22:19.800397  389930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt: {Name:mka82047fcbc281c8dafed47ca47ee10ed435e17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:22:19.800557  389930 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key ...
	I1030 18:22:19.800568  389930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key: {Name:mke52f05795eacc13cef93d9a2f97c8ed2e5e1b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:22:19.800670  389930 certs.go:256] generating profile certs ...
	I1030 18:22:19.800748  389930 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.key
	I1030 18:22:19.800775  389930 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.crt with IP's: []
	I1030 18:22:20.012094  389930 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.crt ...
	I1030 18:22:20.012131  389930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.crt: {Name:mk3e1026a414d0eb9a393c91985864dd02c29ca8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:22:20.012311  389930 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.key ...
	I1030 18:22:20.012322  389930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.key: {Name:mkd713bb8408388ccf35cdd7458b0248691df4e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:22:20.012388  389930 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/apiserver.key.9be9c35d
	I1030 18:22:20.012407  389930 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/apiserver.crt.9be9c35d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.211]
	I1030 18:22:20.357052  389930 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/apiserver.crt.9be9c35d ...
	I1030 18:22:20.357087  389930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/apiserver.crt.9be9c35d: {Name:mk90611d055450c0bc560328b67b2a4f1f1d82a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:22:20.357281  389930 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/apiserver.key.9be9c35d ...
	I1030 18:22:20.357300  389930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/apiserver.key.9be9c35d: {Name:mkd39c4a1d2503f9ae6c571127f659970cb32617 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:22:20.357397  389930 certs.go:381] copying /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/apiserver.crt.9be9c35d -> /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/apiserver.crt
	I1030 18:22:20.357473  389930 certs.go:385] copying /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/apiserver.key.9be9c35d -> /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/apiserver.key
	I1030 18:22:20.357523  389930 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/proxy-client.key
	I1030 18:22:20.357541  389930 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/proxy-client.crt with IP's: []
	I1030 18:22:20.480676  389930 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/proxy-client.crt ...
	I1030 18:22:20.480707  389930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/proxy-client.crt: {Name:mk9b3153de6421c1963e00f41cbac3c9cb610755 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:22:20.480887  389930 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/proxy-client.key ...
	I1030 18:22:20.480905  389930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/proxy-client.key: {Name:mk49d90387976d01cb4e13a1c6fccd22f8262080 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:22:20.481118  389930 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 18:22:20.481155  389930 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 18:22:20.481180  389930 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 18:22:20.481204  389930 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 18:22:20.481879  389930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 18:22:20.511951  389930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 18:22:20.537422  389930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 18:22:20.561582  389930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 18:22:20.585219  389930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1030 18:22:20.608875  389930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1030 18:22:20.632409  389930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 18:22:20.655836  389930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1030 18:22:20.679169  389930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 18:22:20.702822  389930 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1030 18:22:20.719706  389930 ssh_runner.go:195] Run: openssl version
	I1030 18:22:20.725585  389930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 18:22:20.736618  389930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:22:20.741441  389930 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:22:20.741549  389930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:22:20.748232  389930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 18:22:20.758668  389930 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 18:22:20.762652  389930 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1030 18:22:20.762703  389930 kubeadm.go:392] StartCluster: {Name:addons-819803 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-819803 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 18:22:20.762779  389930 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1030 18:22:20.762821  389930 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 18:22:20.796772  389930 cri.go:89] found id: ""
	I1030 18:22:20.796848  389930 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1030 18:22:20.806869  389930 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 18:22:20.816229  389930 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 18:22:20.829281  389930 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 18:22:20.829307  389930 kubeadm.go:157] found existing configuration files:
	
	I1030 18:22:20.829362  389930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 18:22:20.838753  389930 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 18:22:20.838816  389930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 18:22:20.849914  389930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 18:22:20.862499  389930 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 18:22:20.862581  389930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 18:22:20.874201  389930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 18:22:20.885993  389930 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 18:22:20.886066  389930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 18:22:20.900423  389930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 18:22:20.909291  389930 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 18:22:20.909350  389930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1030 18:22:20.918408  389930 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1030 18:22:21.073373  389930 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1030 18:22:31.121385  389930 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1030 18:22:31.121483  389930 kubeadm.go:310] [preflight] Running pre-flight checks
	I1030 18:22:31.121581  389930 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1030 18:22:31.121714  389930 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1030 18:22:31.121794  389930 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1030 18:22:31.121888  389930 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1030 18:22:31.123478  389930 out.go:235]   - Generating certificates and keys ...
	I1030 18:22:31.123546  389930 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1030 18:22:31.123600  389930 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1030 18:22:31.123665  389930 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1030 18:22:31.123729  389930 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1030 18:22:31.123824  389930 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1030 18:22:31.123900  389930 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1030 18:22:31.123953  389930 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1030 18:22:31.124113  389930 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-819803 localhost] and IPs [192.168.39.211 127.0.0.1 ::1]
	I1030 18:22:31.124190  389930 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1030 18:22:31.124330  389930 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-819803 localhost] and IPs [192.168.39.211 127.0.0.1 ::1]
	I1030 18:22:31.124386  389930 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1030 18:22:31.124490  389930 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1030 18:22:31.124565  389930 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1030 18:22:31.124656  389930 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1030 18:22:31.124737  389930 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1030 18:22:31.124821  389930 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1030 18:22:31.124868  389930 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1030 18:22:31.124921  389930 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1030 18:22:31.124988  389930 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1030 18:22:31.125065  389930 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1030 18:22:31.125139  389930 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1030 18:22:31.126918  389930 out.go:235]   - Booting up control plane ...
	I1030 18:22:31.127007  389930 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1030 18:22:31.127096  389930 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1030 18:22:31.127158  389930 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1030 18:22:31.127263  389930 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1030 18:22:31.127350  389930 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1030 18:22:31.127389  389930 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1030 18:22:31.127503  389930 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1030 18:22:31.127588  389930 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1030 18:22:31.127645  389930 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.206883ms
	I1030 18:22:31.127707  389930 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1030 18:22:31.127760  389930 kubeadm.go:310] [api-check] The API server is healthy after 5.50200891s
	I1030 18:22:31.127873  389930 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1030 18:22:31.128002  389930 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1030 18:22:31.128071  389930 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1030 18:22:31.128233  389930 kubeadm.go:310] [mark-control-plane] Marking the node addons-819803 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1030 18:22:31.128290  389930 kubeadm.go:310] [bootstrap-token] Using token: g3koph.ks9ytu5c0ykdojb9
	I1030 18:22:31.129703  389930 out.go:235]   - Configuring RBAC rules ...
	I1030 18:22:31.129790  389930 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1030 18:22:31.129859  389930 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1030 18:22:31.130010  389930 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1030 18:22:31.130186  389930 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1030 18:22:31.130322  389930 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1030 18:22:31.130396  389930 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1030 18:22:31.130561  389930 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1030 18:22:31.130604  389930 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1030 18:22:31.130651  389930 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1030 18:22:31.130664  389930 kubeadm.go:310] 
	I1030 18:22:31.130718  389930 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1030 18:22:31.130724  389930 kubeadm.go:310] 
	I1030 18:22:31.130794  389930 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1030 18:22:31.130800  389930 kubeadm.go:310] 
	I1030 18:22:31.130824  389930 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1030 18:22:31.130879  389930 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1030 18:22:31.130922  389930 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1030 18:22:31.130927  389930 kubeadm.go:310] 
	I1030 18:22:31.130977  389930 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1030 18:22:31.130986  389930 kubeadm.go:310] 
	I1030 18:22:31.131035  389930 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1030 18:22:31.131044  389930 kubeadm.go:310] 
	I1030 18:22:31.131091  389930 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1030 18:22:31.131153  389930 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1030 18:22:31.131216  389930 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1030 18:22:31.131223  389930 kubeadm.go:310] 
	I1030 18:22:31.131297  389930 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1030 18:22:31.131366  389930 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1030 18:22:31.131372  389930 kubeadm.go:310] 
	I1030 18:22:31.131442  389930 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token g3koph.ks9ytu5c0ykdojb9 \
	I1030 18:22:31.131544  389930 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b2143b9018fc79be6f35b8981f64b262448bd09f7227405c8dafa29481e81491 \
	I1030 18:22:31.131566  389930 kubeadm.go:310] 	--control-plane 
	I1030 18:22:31.131575  389930 kubeadm.go:310] 
	I1030 18:22:31.131703  389930 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1030 18:22:31.131717  389930 kubeadm.go:310] 
	I1030 18:22:31.131835  389930 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token g3koph.ks9ytu5c0ykdojb9 \
	I1030 18:22:31.131981  389930 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b2143b9018fc79be6f35b8981f64b262448bd09f7227405c8dafa29481e81491 
	I1030 18:22:31.131994  389930 cni.go:84] Creating CNI manager for ""
	I1030 18:22:31.132001  389930 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 18:22:31.133334  389930 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1030 18:22:31.134658  389930 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1030 18:22:31.149352  389930 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1030 18:22:31.174669  389930 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1030 18:22:31.174794  389930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:22:31.174830  389930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-819803 minikube.k8s.io/updated_at=2024_10_30T18_22_31_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0 minikube.k8s.io/name=addons-819803 minikube.k8s.io/primary=true
	I1030 18:22:31.204031  389930 ops.go:34] apiserver oom_adj: -16
	I1030 18:22:31.324787  389930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:22:31.825803  389930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:22:32.325287  389930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:22:32.825519  389930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:22:33.324911  389930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:22:33.824938  389930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:22:34.325322  389930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:22:34.825825  389930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:22:35.325760  389930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:22:35.433263  389930 kubeadm.go:1113] duration metric: took 4.258529499s to wait for elevateKubeSystemPrivileges
	I1030 18:22:35.433313  389930 kubeadm.go:394] duration metric: took 14.670614783s to StartCluster
	I1030 18:22:35.433340  389930 settings.go:142] acquiring lock: {Name:mk3778f95b8c1775512fd8244c173f81fde2f8a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:22:35.433493  389930 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 18:22:35.434032  389930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/kubeconfig: {Name:mkad9ac9beb9742fc1a420e6d4412db8b65962e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:22:35.434256  389930 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1030 18:22:35.434301  389930 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 18:22:35.434334  389930 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1030 18:22:35.434463  389930 addons.go:69] Setting yakd=true in profile "addons-819803"
	I1030 18:22:35.434481  389930 addons.go:69] Setting cloud-spanner=true in profile "addons-819803"
	I1030 18:22:35.434477  389930 addons.go:69] Setting metrics-server=true in profile "addons-819803"
	I1030 18:22:35.434502  389930 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-819803"
	I1030 18:22:35.434513  389930 addons.go:234] Setting addon yakd=true in "addons-819803"
	I1030 18:22:35.434518  389930 addons.go:234] Setting addon cloud-spanner=true in "addons-819803"
	I1030 18:22:35.434521  389930 addons.go:234] Setting addon metrics-server=true in "addons-819803"
	I1030 18:22:35.434524  389930 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-819803"
	I1030 18:22:35.434517  389930 addons.go:69] Setting ingress=true in profile "addons-819803"
	I1030 18:22:35.434548  389930 addons.go:234] Setting addon ingress=true in "addons-819803"
	I1030 18:22:35.434555  389930 host.go:66] Checking if "addons-819803" exists ...
	I1030 18:22:35.434556  389930 host.go:66] Checking if "addons-819803" exists ...
	I1030 18:22:35.434561  389930 addons.go:69] Setting ingress-dns=true in profile "addons-819803"
	I1030 18:22:35.434561  389930 config.go:182] Loaded profile config "addons-819803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:22:35.434564  389930 addons.go:69] Setting default-storageclass=true in profile "addons-819803"
	I1030 18:22:35.434572  389930 addons.go:234] Setting addon ingress-dns=true in "addons-819803"
	I1030 18:22:35.434583  389930 host.go:66] Checking if "addons-819803" exists ...
	I1030 18:22:35.434582  389930 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-819803"
	I1030 18:22:35.434599  389930 host.go:66] Checking if "addons-819803" exists ...
	I1030 18:22:35.434604  389930 addons.go:69] Setting storage-provisioner=true in profile "addons-819803"
	I1030 18:22:35.434615  389930 addons.go:234] Setting addon storage-provisioner=true in "addons-819803"
	I1030 18:22:35.434636  389930 host.go:66] Checking if "addons-819803" exists ...
	I1030 18:22:35.434556  389930 host.go:66] Checking if "addons-819803" exists ...
	I1030 18:22:35.434693  389930 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-819803"
	I1030 18:22:35.434706  389930 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-819803"
	I1030 18:22:35.434968  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.435002  389930 addons.go:69] Setting inspektor-gadget=true in profile "addons-819803"
	I1030 18:22:35.435008  389930 addons.go:69] Setting volcano=true in profile "addons-819803"
	I1030 18:22:35.435016  389930 addons.go:234] Setting addon inspektor-gadget=true in "addons-819803"
	I1030 18:22:35.435022  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.435028  389930 addons.go:69] Setting volumesnapshots=true in profile "addons-819803"
	I1030 18:22:35.435035  389930 host.go:66] Checking if "addons-819803" exists ...
	I1030 18:22:35.435036  389930 addons.go:234] Setting addon volumesnapshots=true in "addons-819803"
	I1030 18:22:35.435042  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.435050  389930 addons.go:69] Setting gcp-auth=true in profile "addons-819803"
	I1030 18:22:35.435053  389930 host.go:66] Checking if "addons-819803" exists ...
	I1030 18:22:35.435052  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.435064  389930 addons.go:69] Setting registry=true in profile "addons-819803"
	I1030 18:22:35.435072  389930 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-819803"
	I1030 18:22:35.435078  389930 addons.go:234] Setting addon registry=true in "addons-819803"
	I1030 18:22:35.435085  389930 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-819803"
	I1030 18:22:35.435088  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.435102  389930 host.go:66] Checking if "addons-819803" exists ...
	I1030 18:22:35.435104  389930 host.go:66] Checking if "addons-819803" exists ...
	I1030 18:22:35.435270  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.435277  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.435052  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.435301  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.435304  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.435045  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.435024  389930 addons.go:234] Setting addon volcano=true in "addons-819803"
	I1030 18:22:35.435065  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.434461  389930 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-819803"
	I1030 18:22:35.435421  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.435066  389930 mustload.go:65] Loading cluster: addons-819803
	I1030 18:22:35.435446  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.435451  389930 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-819803"
	I1030 18:22:35.435453  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.435470  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.435474  389930 host.go:66] Checking if "addons-819803" exists ...
	I1030 18:22:35.435489  389930 host.go:66] Checking if "addons-819803" exists ...
	I1030 18:22:35.435074  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.435427  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.435522  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.435529  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.435371  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.435643  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.435672  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.435011  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.435842  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.434556  389930 host.go:66] Checking if "addons-819803" exists ...
	I1030 18:22:35.435874  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.435917  389930 config.go:182] Loaded profile config "addons-819803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:22:35.435932  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.435951  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.436140  389930 out.go:177] * Verifying Kubernetes components...
	I1030 18:22:35.437892  389930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 18:22:35.451309  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44815
	I1030 18:22:35.451368  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42879
	I1030 18:22:35.453601  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46125
	I1030 18:22:35.462832  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.462887  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.462972  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.463001  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.463551  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.463686  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.463753  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.464363  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.464385  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.464558  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.464571  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.464703  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.464717  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.464781  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.465247  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.465357  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.465408  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.465940  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.465971  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.466133  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.466738  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.466777  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.485973  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42213
	I1030 18:22:35.486582  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.487276  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.487297  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.487704  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.488285  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.488313  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.492645  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42839
	I1030 18:22:35.493108  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.493776  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.493794  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.494203  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.494763  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.494810  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.498141  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35343
	I1030 18:22:35.498749  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.498908  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39767
	I1030 18:22:35.499059  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33585
	I1030 18:22:35.499493  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.499511  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.499947  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.499962  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.500031  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.500237  389930 main.go:141] libmachine: (addons-819803) Calling .GetState
	I1030 18:22:35.501018  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.501037  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.501227  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.501243  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.501514  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37347
	I1030 18:22:35.501593  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43815
	I1030 18:22:35.501620  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.502176  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.502201  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.502258  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.502299  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.502616  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.502997  389930 main.go:141] libmachine: (addons-819803) Calling .GetState
	I1030 18:22:35.503071  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.503087  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.503139  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.503152  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.503748  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.503812  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.504373  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.504413  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.504715  389930 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-819803"
	I1030 18:22:35.504766  389930 host.go:66] Checking if "addons-819803" exists ...
	I1030 18:22:35.505142  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.505163  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.505226  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.505258  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.508394  389930 host.go:66] Checking if "addons-819803" exists ...
	I1030 18:22:35.508760  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.508802  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.509523  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42133
	I1030 18:22:35.518589  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42211
	I1030 18:22:35.518805  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.519262  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.519843  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.519861  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.520269  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.520841  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.520886  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.521564  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.521582  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.522378  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.522899  389930 main.go:141] libmachine: (addons-819803) Calling .GetState
	I1030 18:22:35.525649  389930 addons.go:234] Setting addon default-storageclass=true in "addons-819803"
	I1030 18:22:35.525695  389930 host.go:66] Checking if "addons-819803" exists ...
	I1030 18:22:35.526069  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.526104  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.530233  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35459
	I1030 18:22:35.532361  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40975
	I1030 18:22:35.532490  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33581
	I1030 18:22:35.532715  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.533144  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.533729  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.533748  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.533980  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34671
	I1030 18:22:35.534311  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.534415  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35937
	I1030 18:22:35.534935  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.535082  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.535093  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.535591  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.535609  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.535669  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.536059  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.536109  389930 main.go:141] libmachine: (addons-819803) Calling .GetState
	I1030 18:22:35.536296  389930 main.go:141] libmachine: (addons-819803) Calling .GetState
	I1030 18:22:35.537061  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.538333  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.538380  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.538606  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.538959  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.538979  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.539176  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45115
	I1030 18:22:35.539402  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.539786  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:35.540242  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.540276  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.540577  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.540591  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.540620  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.541079  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.541202  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.541225  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.541275  389930 main.go:141] libmachine: (addons-819803) Calling .GetState
	I1030 18:22:35.541702  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.541728  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:35.542298  389930 main.go:141] libmachine: (addons-819803) Calling .GetState
	I1030 18:22:35.542833  389930 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1030 18:22:35.543350  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:35.544084  389930 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1030 18:22:35.544122  389930 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1030 18:22:35.544491  389930 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1030 18:22:35.544516  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:35.544558  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:35.544876  389930 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1030 18:22:35.545701  389930 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1030 18:22:35.545719  389930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1030 18:22:35.545737  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:35.546557  389930 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1030 18:22:35.546577  389930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1030 18:22:35.546650  389930 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 18:22:35.546855  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:35.548517  389930 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 18:22:35.548536  389930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1030 18:22:35.548554  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:35.548814  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.550798  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:35.550834  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.552339  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40989
	I1030 18:22:35.552540  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.552920  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.552988  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.553045  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42535
	I1030 18:22:35.553192  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:35.553213  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.553691  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:35.553714  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.553739  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.553773  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.553784  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.553803  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.553862  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:35.554360  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.554386  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.554459  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.554515  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:35.554556  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:35.554787  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.554791  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:35.554840  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:35.554991  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33115
	I1030 18:22:35.555005  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:35.555072  389930 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa Username:docker}
	I1030 18:22:35.555532  389930 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa Username:docker}
	I1030 18:22:35.555550  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:35.555584  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.555688  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:35.555731  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:35.555876  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:35.555949  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.556023  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:35.556067  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:35.556194  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:35.556310  389930 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa Username:docker}
	I1030 18:22:35.556682  389930 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa Username:docker}
	I1030 18:22:35.557573  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.557589  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.558162  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.558395  389930 main.go:141] libmachine: (addons-819803) Calling .GetState
	I1030 18:22:35.559021  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.559062  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.559868  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.559907  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.560163  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:35.562510  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45921
	I1030 18:22:35.562878  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.563668  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.563689  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.563805  389930 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1030 18:22:35.564280  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.566609  389930 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1030 18:22:35.566630  389930 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1030 18:22:35.566652  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:35.566780  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41045
	I1030 18:22:35.566908  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33159
	I1030 18:22:35.566978  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43027
	I1030 18:22:35.567356  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.567443  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.567955  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.567977  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.567998  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.568403  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.568422  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.568551  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.568563  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.568904  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.568965  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.569619  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.569661  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.569878  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.569943  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38761
	I1030 18:22:35.570506  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:35.570540  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:35.570887  389930 main.go:141] libmachine: (addons-819803) Calling .GetState
	I1030 18:22:35.571446  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38573
	I1030 18:22:35.571722  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42009
	I1030 18:22:35.571841  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.572052  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.572240  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.572339  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.572489  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.572505  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.572561  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:35.572610  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:35.572625  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.572794  389930 main.go:141] libmachine: (addons-819803) Calling .GetState
	I1030 18:22:35.572860  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:35.573063  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:35.573403  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:35.573556  389930 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa Username:docker}
	I1030 18:22:35.574081  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.574424  389930 main.go:141] libmachine: (addons-819803) Calling .GetState
	I1030 18:22:35.574535  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:35.574670  389930 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1030 18:22:35.575587  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.575615  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.576085  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.576203  389930 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1030 18:22:35.576226  389930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1030 18:22:35.576246  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:35.576334  389930 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1030 18:22:35.576524  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.576541  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.576595  389930 main.go:141] libmachine: (addons-819803) Calling .GetState
	I1030 18:22:35.577638  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.577932  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:35.579131  389930 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1030 18:22:35.579837  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.579870  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:35.580727  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:35.580750  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.580977  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:35.581160  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:35.581320  389930 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1030 18:22:35.581331  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:35.581554  389930 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa Username:docker}
	I1030 18:22:35.581601  389930 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1030 18:22:35.583082  389930 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1030 18:22:35.583111  389930 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1030 18:22:35.583132  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:35.583762  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:35.584400  389930 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1030 18:22:35.585201  389930 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1030 18:22:35.585346  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43911
	I1030 18:22:35.586143  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.586272  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39835
	I1030 18:22:35.586283  389930 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1030 18:22:35.586400  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.586407  389930 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1030 18:22:35.586419  389930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1030 18:22:35.586437  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:35.587120  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:35.587161  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.587180  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:35.587196  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.587225  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.587259  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.587444  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:35.587635  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:35.587698  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.587727  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.587742  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.587871  389930 main.go:141] libmachine: (addons-819803) Calling .GetState
	I1030 18:22:35.587935  389930 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa Username:docker}
	I1030 18:22:35.588372  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.588527  389930 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1030 18:22:35.588746  389930 main.go:141] libmachine: (addons-819803) Calling .GetState
	I1030 18:22:35.590435  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.590635  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:35.590887  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:35.590946  389930 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1030 18:22:35.592013  389930 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1030 18:22:35.592078  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36831
	I1030 18:22:35.592145  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:35.592214  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:35.590924  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.593583  389930 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1030 18:22:35.593641  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:35.593599  389930 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1030 18:22:35.594311  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:35.594549  389930 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa Username:docker}
	I1030 18:22:35.595018  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.595032  389930 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1030 18:22:35.595063  389930 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1030 18:22:35.595069  389930 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1030 18:22:35.595090  389930 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1030 18:22:35.595102  389930 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1030 18:22:35.595116  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:35.595090  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:35.596376  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.596393  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.597022  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.597191  389930 main.go:141] libmachine: (addons-819803) Calling .GetState
	I1030 18:22:35.597692  389930 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1030 18:22:35.598868  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.599162  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.599207  389930 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1030 18:22:35.599227  389930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1030 18:22:35.599249  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:35.599271  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:35.599306  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.599461  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:35.599650  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:35.599673  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:35.599689  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.599798  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:35.599861  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:35.599981  389930 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa Username:docker}
	I1030 18:22:35.600025  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:35.600178  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:35.600320  389930 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa Username:docker}
	I1030 18:22:35.600687  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:35.602326  389930 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1030 18:22:35.602593  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.602996  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:35.603031  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.603140  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:35.603309  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:35.603425  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:35.603563  389930 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa Username:docker}
	I1030 18:22:35.604780  389930 out.go:177]   - Using image docker.io/registry:2.8.3
	I1030 18:22:35.606269  389930 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1030 18:22:35.606290  389930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1030 18:22:35.606304  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:35.610628  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.610650  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:35.610656  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:35.610659  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41189
	I1030 18:22:35.610674  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	W1030 18:22:35.610825  389930 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:33980->192.168.39.211:22: read: connection reset by peer
	I1030 18:22:35.610860  389930 retry.go:31] will retry after 253.092561ms: ssh: handshake failed: read tcp 192.168.39.1:33980->192.168.39.211:22: read: connection reset by peer
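The handshake failure just above is treated as transient: the SSH dial is retried after a short randomized delay instead of failing the whole addon deployment. A minimal sketch of that retry shape in Go, assuming a generic dial callback and illustrative delays (this is not minikube's own retry code, only the pattern the log reflects):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryTransient calls fn up to attempts times, sleeping a small randomized
// delay between tries, and returns the last error if every attempt fails.
func retryTransient(attempts int, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := time.Duration(200+rand.Intn(300)) * time.Millisecond
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retryTransient(3, func() error {
		calls++
		if calls < 2 {
			// First dial fails the way the log shows; later ones succeed.
			return errors.New("ssh: handshake failed: connection reset by peer")
		}
		return nil
	})
	fmt.Println("result:", err)
}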
	I1030 18:22:35.610831  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:35.610932  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42779
	I1030 18:22:35.611120  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:35.611254  389930 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa Username:docker}
	I1030 18:22:35.611371  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.611495  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.611890  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.611918  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.612221  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.612251  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.612295  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.612465  389930 main.go:141] libmachine: (addons-819803) Calling .GetState
	I1030 18:22:35.612604  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.612767  389930 main.go:141] libmachine: (addons-819803) Calling .GetState
	I1030 18:22:35.614187  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:35.614508  389930 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1030 18:22:35.614525  389930 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1030 18:22:35.614543  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:35.614573  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:35.615191  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:35.615215  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:35.615456  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:35.615470  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:35.615478  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:35.615485  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:35.617278  389930 main.go:141] libmachine: (addons-819803) DBG | Closing plugin on server side
	I1030 18:22:35.617282  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:35.617286  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36749
	I1030 18:22:35.617298  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	W1030 18:22:35.617397  389930 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1030 18:22:35.617875  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:35.618411  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:35.618428  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:35.619404  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:35.619617  389930 main.go:141] libmachine: (addons-819803) Calling .GetState
	I1030 18:22:35.621173  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.621393  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:35.621605  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:35.621625  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.621810  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:35.621988  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:35.622118  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:35.622245  389930 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa Username:docker}
	I1030 18:22:35.622876  389930 out.go:177]   - Using image docker.io/busybox:stable
	I1030 18:22:35.623966  389930 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1030 18:22:35.625274  389930 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1030 18:22:35.625287  389930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1030 18:22:35.625302  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	W1030 18:22:35.626300  389930 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:33994->192.168.39.211:22: read: connection reset by peer
	I1030 18:22:35.626330  389930 retry.go:31] will retry after 374.534654ms: ssh: handshake failed: read tcp 192.168.39.1:33994->192.168.39.211:22: read: connection reset by peer
	I1030 18:22:35.627948  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.628263  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:35.628282  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:35.628425  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:35.628602  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:35.628727  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:35.628870  389930 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa Username:docker}
	I1030 18:22:35.957970  389930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1030 18:22:36.004755  389930 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1030 18:22:36.004784  389930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14451 bytes)
	I1030 18:22:36.027108  389930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1030 18:22:36.038102  389930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1030 18:22:36.040156  389930 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1030 18:22:36.040175  389930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1030 18:22:36.075024  389930 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 18:22:36.075077  389930 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
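The bash pipeline above fetches the coredns ConfigMap, uses sed to splice a hosts block (mapping host.minikube.internal to the gateway address 192.168.39.1) ahead of the forward directive, and replaces the ConfigMap. A minimal client-go sketch of the same edit; the namespace, ConfigMap name, and gateway IP are taken from the command above, while the kubeconfig path and everything else is illustrative rather than minikube's own code:

package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from a kubeconfig (path is illustrative).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	cm, err := client.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Splice a hosts block in front of the forward directive, as the sed
	// expression in the log does.
	hosts := "        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }\n"
	corefile := cm.Data["Corefile"]
	if !strings.Contains(corefile, "host.minikube.internal") {
		corefile = strings.Replace(corefile,
			"        forward . /etc/resolv.conf",
			hosts+"        forward . /etc/resolv.conf", 1)
		cm.Data["Corefile"] = corefile
		if _, err := client.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
	fmt.Println("coredns Corefile updated")
}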
	I1030 18:22:36.098385  389930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1030 18:22:36.100387  389930 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1030 18:22:36.100405  389930 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1030 18:22:36.122900  389930 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1030 18:22:36.122922  389930 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1030 18:22:36.151720  389930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1030 18:22:36.153900  389930 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1030 18:22:36.153928  389930 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1030 18:22:36.164545  389930 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1030 18:22:36.164566  389930 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1030 18:22:36.181052  389930 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1030 18:22:36.181075  389930 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1030 18:22:36.196331  389930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1030 18:22:36.202330  389930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 18:22:36.273990  389930 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1030 18:22:36.274016  389930 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1030 18:22:36.281266  389930 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1030 18:22:36.281285  389930 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1030 18:22:36.367552  389930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1030 18:22:36.376116  389930 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1030 18:22:36.376140  389930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1030 18:22:36.409552  389930 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1030 18:22:36.409585  389930 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1030 18:22:36.411121  389930 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1030 18:22:36.411141  389930 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1030 18:22:36.432216  389930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1030 18:22:36.504001  389930 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1030 18:22:36.504035  389930 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1030 18:22:36.569445  389930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1030 18:22:36.579732  389930 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1030 18:22:36.579758  389930 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1030 18:22:36.608843  389930 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1030 18:22:36.608877  389930 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1030 18:22:36.723100  389930 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1030 18:22:36.723138  389930 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1030 18:22:36.728098  389930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1030 18:22:36.763069  389930 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1030 18:22:36.763095  389930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1030 18:22:36.899820  389930 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1030 18:22:36.899858  389930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1030 18:22:36.901580  389930 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1030 18:22:36.901600  389930 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1030 18:22:37.025047  389930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1030 18:22:37.098218  389930 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1030 18:22:37.098272  389930 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1030 18:22:37.120310  389930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1030 18:22:37.442933  389930 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1030 18:22:37.442961  389930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1030 18:22:37.764559  389930 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1030 18:22:37.764596  389930 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1030 18:22:37.939580  389930 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.9815629s)
	I1030 18:22:37.939644  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:37.939654  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:37.940053  389930 main.go:141] libmachine: (addons-819803) DBG | Closing plugin on server side
	I1030 18:22:37.940101  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:37.940113  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:37.940185  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:37.940205  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:37.940632  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:37.940683  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:38.082795  389930 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1030 18:22:38.082828  389930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1030 18:22:38.379668  389930 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1030 18:22:38.379692  389930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1030 18:22:38.800575  389930 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1030 18:22:38.800609  389930 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1030 18:22:39.074620  389930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1030 18:22:39.168695  389930 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.141549585s)
	I1030 18:22:39.168762  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:39.168779  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:39.169155  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:39.169179  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:39.169191  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:39.169206  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:39.169476  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:39.169505  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:40.412843  389930 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.374694496s)
	I1030 18:22:40.412913  389930 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.337814371s)
	I1030 18:22:40.412852  389930 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.337789586s)
	I1030 18:22:40.412941  389930 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1030 18:22:40.412925  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:40.413013  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:40.413055  389930 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.314639099s)
	I1030 18:22:40.413077  389930 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.261328806s)
	I1030 18:22:40.413094  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:40.413115  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:40.413096  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:40.413205  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:40.413552  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:40.413569  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:40.413580  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:40.413579  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:40.413588  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:40.413589  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:40.413609  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:40.413615  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:40.413552  389930 main.go:141] libmachine: (addons-819803) DBG | Closing plugin on server side
	I1030 18:22:40.413672  389930 main.go:141] libmachine: (addons-819803) DBG | Closing plugin on server side
	I1030 18:22:40.413861  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:40.413873  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:40.414075  389930 node_ready.go:35] waiting up to 6m0s for node "addons-819803" to be "Ready" ...
	I1030 18:22:40.414111  389930 main.go:141] libmachine: (addons-819803) DBG | Closing plugin on server side
	I1030 18:22:40.414151  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:40.414158  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:40.414243  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:40.414252  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:40.414265  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:40.414272  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:40.414511  389930 main.go:141] libmachine: (addons-819803) DBG | Closing plugin on server side
	I1030 18:22:40.414556  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:40.414563  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:40.433985  389930 node_ready.go:49] node "addons-819803" has status "Ready":"True"
	I1030 18:22:40.434011  389930 node_ready.go:38] duration metric: took 19.91247ms for node "addons-819803" to be "Ready" ...
	I1030 18:22:40.434021  389930 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 18:22:40.467116  389930 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-sdqnr" in "kube-system" namespace to be "Ready" ...
	I1030 18:22:40.496238  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:40.496261  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:40.496595  389930 main.go:141] libmachine: (addons-819803) DBG | Closing plugin on server side
	I1030 18:22:40.496650  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:40.496664  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:40.954473  389930 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-819803" context rescaled to 1 replicas
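The line above reports the coredns deployment being scaled from its kubeadm default down to one replica on this single-node cluster. A minimal client-go sketch of the same rescale through the scale subresource; the clientset is passed in rather than constructed, and only the namespace and deployment name from the log are real:

package addons

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// rescaleCoreDNS sets the kube-system/coredns deployment to the given replica
// count via the scale subresource, mirroring the rescale reported in the log.
func rescaleCoreDNS(ctx context.Context, client kubernetes.Interface, replicas int32) error {
	deps := client.AppsV1().Deployments("kube-system")
	scale, err := deps.GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = replicas
	_, err = deps.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}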
	I1030 18:22:41.549413  389930 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.353028673s)
	I1030 18:22:41.549442  389930 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.347078515s)
	I1030 18:22:41.549468  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:41.549480  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:41.549491  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:41.549506  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:41.549850  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:41.549872  389930 main.go:141] libmachine: (addons-819803) DBG | Closing plugin on server side
	I1030 18:22:41.549872  389930 main.go:141] libmachine: (addons-819803) DBG | Closing plugin on server side
	I1030 18:22:41.549916  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:41.549931  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:41.549943  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:41.549972  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:41.549989  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:41.550013  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:41.550027  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:41.550245  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:41.550278  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:41.550294  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:41.550306  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:42.475696  389930 pod_ready.go:103] pod "amd-gpu-device-plugin-sdqnr" in "kube-system" namespace has status "Ready":"False"
	I1030 18:22:42.590643  389930 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1030 18:22:42.590685  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:42.593365  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:42.593708  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:42.593743  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:42.593870  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:42.594095  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:42.594274  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:42.594420  389930 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa Username:docker}
	I1030 18:22:43.072875  389930 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1030 18:22:43.252963  389930 addons.go:234] Setting addon gcp-auth=true in "addons-819803"
	I1030 18:22:43.253027  389930 host.go:66] Checking if "addons-819803" exists ...
	I1030 18:22:43.253346  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:43.253377  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:43.269120  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39909
	I1030 18:22:43.269658  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:43.270232  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:43.270252  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:43.270675  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:43.271270  389930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:22:43.271305  389930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:22:43.286031  389930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37087
	I1030 18:22:43.286520  389930 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:22:43.286965  389930 main.go:141] libmachine: Using API Version  1
	I1030 18:22:43.286986  389930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:22:43.287331  389930 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:22:43.287518  389930 main.go:141] libmachine: (addons-819803) Calling .GetState
	I1030 18:22:43.288995  389930 main.go:141] libmachine: (addons-819803) Calling .DriverName
	I1030 18:22:43.289223  389930 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1030 18:22:43.289251  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHHostname
	I1030 18:22:43.291926  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:43.292328  389930 main.go:141] libmachine: (addons-819803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a4:df", ip: ""} in network mk-addons-819803: {Iface:virbr1 ExpiryTime:2024-10-30 19:22:01 +0000 UTC Type:0 Mac:52:54:00:c8:a4:df Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-819803 Clientid:01:52:54:00:c8:a4:df}
	I1030 18:22:43.292368  389930 main.go:141] libmachine: (addons-819803) DBG | domain addons-819803 has defined IP address 192.168.39.211 and MAC address 52:54:00:c8:a4:df in network mk-addons-819803
	I1030 18:22:43.292563  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHPort
	I1030 18:22:43.292749  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHKeyPath
	I1030 18:22:43.292900  389930 main.go:141] libmachine: (addons-819803) Calling .GetSSHUsername
	I1030 18:22:43.293043  389930 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/addons-819803/id_rsa Username:docker}
	I1030 18:22:44.518725  389930 pod_ready.go:103] pod "amd-gpu-device-plugin-sdqnr" in "kube-system" namespace has status "Ready":"False"
	I1030 18:22:44.580864  389930 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.21326567s)
	I1030 18:22:44.580933  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:44.580948  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:44.580878  389930 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.148631513s)
	I1030 18:22:44.580976  389930 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.011496062s)
	I1030 18:22:44.581012  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:44.581033  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:44.581032  389930 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.852909794s)
	I1030 18:22:44.581066  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:44.581012  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:44.581085  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:44.581096  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:44.581124  389930 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.556043232s)
	W1030 18:22:44.581160  389930 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1030 18:22:44.581184  389930 retry.go:31] will retry after 340.663709ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
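The failure above is the usual ordering race when a CustomResourceDefinition and a resource of that kind are applied in the same batch: the VolumeSnapshotClass CRD was created moments earlier in the same kubectl apply and was not yet established, so the "csi-hostpath-snapclass" object could not be mapped. minikube copes by retrying, and the apply --force re-run visible a few lines below succeeds once the CRDs are registered. For illustration only (context name taken from this log, local manifest copy assumed, not a prescribed fix), the race can also be avoided by waiting for the CRD before applying the dependent manifest:

    kubectl --context addons-819803 wait --for=condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl --context addons-819803 apply -f csi-hostpath-snapshotclass.yaml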
	I1030 18:22:44.581191  389930 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.460837207s)
	I1030 18:22:44.581225  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:44.581238  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:44.581364  389930 main.go:141] libmachine: (addons-819803) DBG | Closing plugin on server side
	I1030 18:22:44.581371  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:44.581384  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:44.581394  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:44.581404  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:44.581411  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:44.581420  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:44.581428  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:44.581487  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:44.581528  389930 main.go:141] libmachine: (addons-819803) DBG | Closing plugin on server side
	I1030 18:22:44.581550  389930 main.go:141] libmachine: (addons-819803) DBG | Closing plugin on server side
	I1030 18:22:44.581565  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:44.581577  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:44.581583  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:44.581586  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:44.581592  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:44.581600  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:44.581606  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:44.581611  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:44.581852  389930 main.go:141] libmachine: (addons-819803) DBG | Closing plugin on server side
	I1030 18:22:44.581876  389930 main.go:141] libmachine: (addons-819803) DBG | Closing plugin on server side
	I1030 18:22:44.581875  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:44.581887  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:44.581895  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:44.581898  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:44.581903  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:44.581904  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:44.583363  389930 main.go:141] libmachine: (addons-819803) DBG | Closing plugin on server side
	I1030 18:22:44.583397  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:44.583403  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:44.583533  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:44.583544  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:44.583555  389930 addons.go:475] Verifying addon metrics-server=true in "addons-819803"
	I1030 18:22:44.583594  389930 main.go:141] libmachine: (addons-819803) DBG | Closing plugin on server side
	I1030 18:22:44.583618  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:44.583624  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:44.583632  389930 addons.go:475] Verifying addon registry=true in "addons-819803"
	I1030 18:22:44.584036  389930 main.go:141] libmachine: (addons-819803) DBG | Closing plugin on server side
	I1030 18:22:44.584061  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:44.584072  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:44.584081  389930 addons.go:475] Verifying addon ingress=true in "addons-819803"
	I1030 18:22:44.585450  389930 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-819803 service yakd-dashboard -n yakd-dashboard
	
	I1030 18:22:44.586414  389930 out.go:177] * Verifying registry addon...
	I1030 18:22:44.586426  389930 out.go:177] * Verifying ingress addon...
	I1030 18:22:44.589069  389930 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1030 18:22:44.589144  389930 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1030 18:22:44.597152  389930 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1030 18:22:44.597170  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:44.598474  389930 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1030 18:22:44.598548  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
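The kapi.go:96 lines that dominate the rest of this log are minikube's readiness poll: roughly every half second (visible in the timestamps) it re-lists the pods matching each label selector and records their phase until all of them report Ready. The same state can be inspected by hand; a minimal sketch using the selectors and namespaces from the log:

    kubectl --context addons-819803 get pods -n kube-system -l kubernetes.io/minikube-addons=registry
    kubectl --context addons-819803 get pods -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx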
	I1030 18:22:44.615580  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:44.615601  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:44.615905  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:44.615927  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:44.921991  389930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1030 18:22:45.114245  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:45.114680  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:45.163735  389930 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.089050407s)
	I1030 18:22:45.163812  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:45.163835  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:45.163755  389930 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.874504711s)
	I1030 18:22:45.164138  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:45.164155  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:45.164165  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:45.164172  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:45.164443  389930 main.go:141] libmachine: (addons-819803) DBG | Closing plugin on server side
	I1030 18:22:45.164479  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:45.164491  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:45.164508  389930 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-819803"
	I1030 18:22:45.166224  389930 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1030 18:22:45.166230  389930 out.go:177] * Verifying csi-hostpath-driver addon...
	I1030 18:22:45.167861  389930 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1030 18:22:45.168498  389930 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1030 18:22:45.169270  389930 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1030 18:22:45.169287  389930 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1030 18:22:45.203200  389930 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1030 18:22:45.203228  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:45.214184  389930 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1030 18:22:45.214221  389930 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1030 18:22:45.244889  389930 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1030 18:22:45.244916  389930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1030 18:22:45.270741  389930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1030 18:22:45.597581  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:45.597645  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:45.672682  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:46.094186  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:46.095345  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:46.195098  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:46.284240  389930 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.362174451s)
	I1030 18:22:46.284320  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:46.284339  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:46.284604  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:46.284624  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:46.284634  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:46.284642  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:46.284910  389930 main.go:141] libmachine: (addons-819803) DBG | Closing plugin on server side
	I1030 18:22:46.284957  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:46.284966  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:46.602691  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:46.602834  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:46.671342  389930 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.400556193s)
	I1030 18:22:46.671404  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:46.671422  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:46.671740  389930 main.go:141] libmachine: (addons-819803) DBG | Closing plugin on server side
	I1030 18:22:46.671808  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:46.671826  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:46.671834  389930 main.go:141] libmachine: Making call to close driver server
	I1030 18:22:46.671845  389930 main.go:141] libmachine: (addons-819803) Calling .Close
	I1030 18:22:46.672099  389930 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:22:46.672116  389930 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:22:46.673028  389930 addons.go:475] Verifying addon gcp-auth=true in "addons-819803"
	I1030 18:22:46.674562  389930 out.go:177] * Verifying gcp-auth addon...
	I1030 18:22:46.676602  389930 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1030 18:22:46.697310  389930 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1030 18:22:46.697332  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:46.698624  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:46.973078  389930 pod_ready.go:103] pod "amd-gpu-device-plugin-sdqnr" in "kube-system" namespace has status "Ready":"False"
	I1030 18:22:47.098020  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:47.098098  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:47.199593  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:47.200466  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:47.593521  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:47.593757  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:47.673750  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:47.681010  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:48.093405  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:48.093691  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:48.174901  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:48.179374  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:48.595041  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:48.595096  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:48.673532  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:48.679975  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:48.974841  389930 pod_ready.go:103] pod "amd-gpu-device-plugin-sdqnr" in "kube-system" namespace has status "Ready":"False"
	I1030 18:22:49.094262  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:49.094992  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:49.173045  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:49.181121  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:49.597507  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:49.597602  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:49.673194  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:49.680081  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:50.094188  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:50.094680  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:50.172980  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:50.179849  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:50.594013  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:50.594618  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:50.672731  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:50.679741  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:51.095039  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:51.095526  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:51.172864  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:51.180106  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:51.473414  389930 pod_ready.go:103] pod "amd-gpu-device-plugin-sdqnr" in "kube-system" namespace has status "Ready":"False"
	I1030 18:22:51.593705  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:51.594022  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:51.674027  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:51.680209  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:52.094236  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:52.095131  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:52.173677  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:52.179981  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:52.593985  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:52.594446  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:52.673335  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:52.679568  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:53.093716  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:53.095090  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:53.173239  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:53.179905  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:53.594023  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:53.594730  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:53.694328  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:53.694947  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:53.973465  389930 pod_ready.go:103] pod "amd-gpu-device-plugin-sdqnr" in "kube-system" namespace has status "Ready":"False"
	I1030 18:22:54.094344  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:54.095162  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:54.173995  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:54.180540  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:54.593546  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:54.594046  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:54.673610  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:54.679415  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:55.093651  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:55.094909  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:55.173566  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:55.179242  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:55.594295  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:55.594676  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:55.673651  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:55.678982  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:55.973664  389930 pod_ready.go:103] pod "amd-gpu-device-plugin-sdqnr" in "kube-system" namespace has status "Ready":"False"
	I1030 18:22:56.093445  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:56.093483  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:56.173533  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:56.180737  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:56.593340  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:56.593576  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:56.676764  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:56.681704  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:57.093672  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:57.094163  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:57.174648  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:57.180289  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:57.593437  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:57.593501  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:57.693387  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:57.694620  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:57.972725  389930 pod_ready.go:93] pod "amd-gpu-device-plugin-sdqnr" in "kube-system" namespace has status "Ready":"True"
	I1030 18:22:57.972748  389930 pod_ready.go:82] duration metric: took 17.505602553s for pod "amd-gpu-device-plugin-sdqnr" in "kube-system" namespace to be "Ready" ...
	I1030 18:22:57.972759  389930 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-m6svs" in "kube-system" namespace to be "Ready" ...
	I1030 18:22:57.974286  389930 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-m6svs" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-m6svs" not found
	I1030 18:22:57.974306  389930 pod_ready.go:82] duration metric: took 1.541544ms for pod "coredns-7c65d6cfc9-m6svs" in "kube-system" namespace to be "Ready" ...
	E1030 18:22:57.974316  389930 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-m6svs" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-m6svs" not found
	I1030 18:22:57.974322  389930 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-r6bct" in "kube-system" namespace to be "Ready" ...
	I1030 18:22:57.978255  389930 pod_ready.go:93] pod "coredns-7c65d6cfc9-r6bct" in "kube-system" namespace has status "Ready":"True"
	I1030 18:22:57.978271  389930 pod_ready.go:82] duration metric: took 3.943929ms for pod "coredns-7c65d6cfc9-r6bct" in "kube-system" namespace to be "Ready" ...
	I1030 18:22:57.978280  389930 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-819803" in "kube-system" namespace to be "Ready" ...
	I1030 18:22:57.981937  389930 pod_ready.go:93] pod "etcd-addons-819803" in "kube-system" namespace has status "Ready":"True"
	I1030 18:22:57.981951  389930 pod_ready.go:82] duration metric: took 3.666223ms for pod "etcd-addons-819803" in "kube-system" namespace to be "Ready" ...
	I1030 18:22:57.981964  389930 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-819803" in "kube-system" namespace to be "Ready" ...
	I1030 18:22:57.986192  389930 pod_ready.go:93] pod "kube-apiserver-addons-819803" in "kube-system" namespace has status "Ready":"True"
	I1030 18:22:57.986209  389930 pod_ready.go:82] duration metric: took 4.239262ms for pod "kube-apiserver-addons-819803" in "kube-system" namespace to be "Ready" ...
	I1030 18:22:57.986217  389930 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-819803" in "kube-system" namespace to be "Ready" ...
	I1030 18:22:58.093895  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:58.094769  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:58.171398  389930 pod_ready.go:93] pod "kube-controller-manager-addons-819803" in "kube-system" namespace has status "Ready":"True"
	I1030 18:22:58.171422  389930 pod_ready.go:82] duration metric: took 185.199113ms for pod "kube-controller-manager-addons-819803" in "kube-system" namespace to be "Ready" ...
	I1030 18:22:58.171436  389930 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-h64nt" in "kube-system" namespace to be "Ready" ...
	I1030 18:22:58.173620  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:58.178990  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:58.571482  389930 pod_ready.go:93] pod "kube-proxy-h64nt" in "kube-system" namespace has status "Ready":"True"
	I1030 18:22:58.571506  389930 pod_ready.go:82] duration metric: took 400.064383ms for pod "kube-proxy-h64nt" in "kube-system" namespace to be "Ready" ...
	I1030 18:22:58.571517  389930 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-819803" in "kube-system" namespace to be "Ready" ...
	I1030 18:22:58.592738  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:58.593069  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:58.674050  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:58.679466  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:58.972198  389930 pod_ready.go:93] pod "kube-scheduler-addons-819803" in "kube-system" namespace has status "Ready":"True"
	I1030 18:22:58.972222  389930 pod_ready.go:82] duration metric: took 400.698693ms for pod "kube-scheduler-addons-819803" in "kube-system" namespace to be "Ready" ...
	I1030 18:22:58.972236  389930 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-s2tw8" in "kube-system" namespace to be "Ready" ...
	I1030 18:22:59.093501  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:59.093937  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:59.172423  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:22:59.180556  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:59.594124  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:22:59.594585  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:22:59.695702  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:22:59.696372  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:00.093673  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:00.094093  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:00.173252  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:00.179707  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:00.593534  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:00.593828  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:00.673787  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:00.679058  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:00.978609  389930 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-s2tw8" in "kube-system" namespace has status "Ready":"False"
	I1030 18:23:01.095181  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:01.095575  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:01.173367  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:01.180152  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:01.594323  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:01.594395  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:01.695847  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:01.697078  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:02.093975  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:02.094505  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:02.172803  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:02.179625  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:02.594454  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:02.594618  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:02.673194  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:02.680227  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:02.979206  389930 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-s2tw8" in "kube-system" namespace has status "Ready":"False"
	I1030 18:23:03.095922  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:03.095931  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:03.173768  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:03.179891  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:03.594843  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:03.594993  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:03.676821  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:03.679765  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:04.093480  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:04.094441  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:04.174262  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:04.180169  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:04.595013  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:04.595100  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:04.674177  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:04.680717  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:05.094402  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:05.095041  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:05.176531  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:05.186931  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:05.478710  389930 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-s2tw8" in "kube-system" namespace has status "Ready":"False"
	I1030 18:23:05.594673  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:05.594930  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:05.673122  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:05.679900  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:06.095226  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:06.095919  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:06.224367  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:06.225361  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:06.593285  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:06.593812  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:06.673002  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:06.679760  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:07.093233  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:07.094285  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:07.173113  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:07.179996  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:07.478831  389930 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-s2tw8" in "kube-system" namespace has status "Ready":"False"
	I1030 18:23:07.594434  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:07.594900  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:07.672286  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:07.680041  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:08.193176  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:08.193533  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:08.194574  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:08.194671  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:08.594229  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:08.594244  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:08.672581  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:08.679202  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:09.094414  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:09.095185  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:09.173933  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:09.179490  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:09.480037  389930 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-s2tw8" in "kube-system" namespace has status "Ready":"False"
	I1030 18:23:09.594951  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:09.595291  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:09.695737  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:09.696847  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:10.093847  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:10.094166  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:10.173051  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:10.180135  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:10.479141  389930 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-s2tw8" in "kube-system" namespace has status "Ready":"True"
	I1030 18:23:10.479172  389930 pod_ready.go:82] duration metric: took 11.506928864s for pod "nvidia-device-plugin-daemonset-s2tw8" in "kube-system" namespace to be "Ready" ...
	I1030 18:23:10.479191  389930 pod_ready.go:39] duration metric: took 30.045153099s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 18:23:10.479212  389930 api_server.go:52] waiting for apiserver process to appear ...
	I1030 18:23:10.479275  389930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 18:23:10.500897  389930 api_server.go:72] duration metric: took 35.066550493s to wait for apiserver process to appear ...
	I1030 18:23:10.500933  389930 api_server.go:88] waiting for apiserver healthz status ...
	I1030 18:23:10.500956  389930 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1030 18:23:10.505343  389930 api_server.go:279] https://192.168.39.211:8443/healthz returned 200:
	ok
	I1030 18:23:10.506391  389930 api_server.go:141] control plane version: v1.31.2
	I1030 18:23:10.506419  389930 api_server.go:131] duration metric: took 5.478536ms to wait for apiserver health ...
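The healthz probe above is a plain HTTPS GET against the apiserver; a 200 response with body "ok" is treated as healthy. For illustration, an equivalent manual check against the endpoint from this log (using -k to skip certificate verification, or point curl at the cluster CA instead):

    curl -k https://192.168.39.211:8443/healthz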
	I1030 18:23:10.506429  389930 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 18:23:10.514344  389930 system_pods.go:59] 18 kube-system pods found
	I1030 18:23:10.514372  389930 system_pods.go:61] "amd-gpu-device-plugin-sdqnr" [087eef61-5115-41c9-aa53-29d2c8c23625] Running
	I1030 18:23:10.514378  389930 system_pods.go:61] "coredns-7c65d6cfc9-r6bct" [a1dd60ad-70bb-40a7-8dfb-d6b9f0ab48ee] Running
	I1030 18:23:10.514384  389930 system_pods.go:61] "csi-hostpath-attacher-0" [603a5497-a36a-4123-ad83-8159ef7c6494] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1030 18:23:10.514390  389930 system_pods.go:61] "csi-hostpath-resizer-0" [042a6627-5f58-4a7c-8adc-393f4a23de62] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1030 18:23:10.514398  389930 system_pods.go:61] "csi-hostpathplugin-vswkz" [122041b3-674e-42ec-a5a8-ec4a2f43cbdf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1030 18:23:10.514403  389930 system_pods.go:61] "etcd-addons-819803" [a155caea-481f-4200-8f06-77f2a36ed538] Running
	I1030 18:23:10.514407  389930 system_pods.go:61] "kube-apiserver-addons-819803" [c29acd73-ad14-4526-a8fa-53918e19264d] Running
	I1030 18:23:10.514412  389930 system_pods.go:61] "kube-controller-manager-addons-819803" [9a0525de-668d-41e1-91ba-16e3318e81e3] Running
	I1030 18:23:10.514416  389930 system_pods.go:61] "kube-ingress-dns-minikube" [a73fe2e4-a20e-4734-85d4-3da77152e4a1] Running
	I1030 18:23:10.514420  389930 system_pods.go:61] "kube-proxy-h64nt" [6f813bf3-f5de-4af3-87eb-4a429a334e7f] Running
	I1030 18:23:10.514425  389930 system_pods.go:61] "kube-scheduler-addons-819803" [3e0b4b8d-2392-4cc4-8c7d-b8a4f22749ca] Running
	I1030 18:23:10.514430  389930 system_pods.go:61] "metrics-server-84c5f94fbc-trqq2" [07b3ba0b-ec2c-4a8f-83d7-318abeb6f80a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 18:23:10.514434  389930 system_pods.go:61] "nvidia-device-plugin-daemonset-s2tw8" [9aca0151-3bc1-4504-b8ba-0e3d70a68fba] Running
	I1030 18:23:10.514439  389930 system_pods.go:61] "registry-66c9cd494c-lwc9j" [ac1aec3e-8d69-4d98-875c-68c50389cf77] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1030 18:23:10.514446  389930 system_pods.go:61] "registry-proxy-lhldq" [9edc008f-8004-45b8-a42f-897dcda09957] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1030 18:23:10.514453  389930 system_pods.go:61] "snapshot-controller-56fcc65765-4f2mt" [4ef57b7b-170b-4404-8af9-36d355a9be09] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1030 18:23:10.514458  389930 system_pods.go:61] "snapshot-controller-56fcc65765-k4fwb" [c0ffdb47-736c-4a9f-a9b6-d99bf84b26cd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1030 18:23:10.514466  389930 system_pods.go:61] "storage-provisioner" [38cd898a-c7f7-48d4-ac94-2b6cd4fd4b5f] Running
	I1030 18:23:10.514471  389930 system_pods.go:74] duration metric: took 8.035436ms to wait for pod list to return data ...
	I1030 18:23:10.514479  389930 default_sa.go:34] waiting for default service account to be created ...
	I1030 18:23:10.516936  389930 default_sa.go:45] found service account: "default"
	I1030 18:23:10.516955  389930 default_sa.go:55] duration metric: took 2.468748ms for default service account to be created ...
	I1030 18:23:10.516966  389930 system_pods.go:116] waiting for k8s-apps to be running ...
	I1030 18:23:10.524291  389930 system_pods.go:86] 18 kube-system pods found
	I1030 18:23:10.524315  389930 system_pods.go:89] "amd-gpu-device-plugin-sdqnr" [087eef61-5115-41c9-aa53-29d2c8c23625] Running
	I1030 18:23:10.524321  389930 system_pods.go:89] "coredns-7c65d6cfc9-r6bct" [a1dd60ad-70bb-40a7-8dfb-d6b9f0ab48ee] Running
	I1030 18:23:10.524328  389930 system_pods.go:89] "csi-hostpath-attacher-0" [603a5497-a36a-4123-ad83-8159ef7c6494] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1030 18:23:10.524335  389930 system_pods.go:89] "csi-hostpath-resizer-0" [042a6627-5f58-4a7c-8adc-393f4a23de62] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1030 18:23:10.524342  389930 system_pods.go:89] "csi-hostpathplugin-vswkz" [122041b3-674e-42ec-a5a8-ec4a2f43cbdf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1030 18:23:10.524348  389930 system_pods.go:89] "etcd-addons-819803" [a155caea-481f-4200-8f06-77f2a36ed538] Running
	I1030 18:23:10.524355  389930 system_pods.go:89] "kube-apiserver-addons-819803" [c29acd73-ad14-4526-a8fa-53918e19264d] Running
	I1030 18:23:10.524358  389930 system_pods.go:89] "kube-controller-manager-addons-819803" [9a0525de-668d-41e1-91ba-16e3318e81e3] Running
	I1030 18:23:10.524365  389930 system_pods.go:89] "kube-ingress-dns-minikube" [a73fe2e4-a20e-4734-85d4-3da77152e4a1] Running
	I1030 18:23:10.524368  389930 system_pods.go:89] "kube-proxy-h64nt" [6f813bf3-f5de-4af3-87eb-4a429a334e7f] Running
	I1030 18:23:10.524374  389930 system_pods.go:89] "kube-scheduler-addons-819803" [3e0b4b8d-2392-4cc4-8c7d-b8a4f22749ca] Running
	I1030 18:23:10.524379  389930 system_pods.go:89] "metrics-server-84c5f94fbc-trqq2" [07b3ba0b-ec2c-4a8f-83d7-318abeb6f80a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 18:23:10.524386  389930 system_pods.go:89] "nvidia-device-plugin-daemonset-s2tw8" [9aca0151-3bc1-4504-b8ba-0e3d70a68fba] Running
	I1030 18:23:10.524391  389930 system_pods.go:89] "registry-66c9cd494c-lwc9j" [ac1aec3e-8d69-4d98-875c-68c50389cf77] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1030 18:23:10.524395  389930 system_pods.go:89] "registry-proxy-lhldq" [9edc008f-8004-45b8-a42f-897dcda09957] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1030 18:23:10.524404  389930 system_pods.go:89] "snapshot-controller-56fcc65765-4f2mt" [4ef57b7b-170b-4404-8af9-36d355a9be09] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1030 18:23:10.524412  389930 system_pods.go:89] "snapshot-controller-56fcc65765-k4fwb" [c0ffdb47-736c-4a9f-a9b6-d99bf84b26cd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1030 18:23:10.524416  389930 system_pods.go:89] "storage-provisioner" [38cd898a-c7f7-48d4-ac94-2b6cd4fd4b5f] Running
	I1030 18:23:10.524422  389930 system_pods.go:126] duration metric: took 7.450347ms to wait for k8s-apps to be running ...
	I1030 18:23:10.524430  389930 system_svc.go:44] waiting for kubelet service to be running ...
	I1030 18:23:10.524471  389930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 18:23:10.539221  389930 system_svc.go:56] duration metric: took 14.783961ms WaitForService to wait for kubelet
	I1030 18:23:10.539245  389930 kubeadm.go:582] duration metric: took 35.104907783s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
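
The systemctl probe at 18:23:10 above is essentially the whole of the kubelet service check: system_svc treats kubelet as running when that command exits 0. A minimal local sketch of the same check in Go (run directly on the host here rather than through the test VM's SSH runner, and assuming sudo works non-interactively) might look like this:

	// Simplified local sketch of the check behind system_svc.go:44 above:
	// mirror the exact command from the log and treat a zero exit code as
	// "kubelet service is running". This is not minikube's ssh_runner.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func kubeletActive() bool {
		// cmd.Run returns a non-nil error when systemctl exits non-zero.
		cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
		return cmd.Run() == nil
	}

	func main() {
		fmt.Println("kubelet running:", kubeletActive())
	}
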
	I1030 18:23:10.539264  389930 node_conditions.go:102] verifying NodePressure condition ...
	I1030 18:23:10.542297  389930 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 18:23:10.542317  389930 node_conditions.go:123] node cpu capacity is 2
	I1030 18:23:10.542330  389930 node_conditions.go:105] duration metric: took 3.061438ms to run NodePressure ...
	I1030 18:23:10.542341  389930 start.go:241] waiting for startup goroutines ...
	I1030 18:23:10.593962  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:10.594337  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:10.673558  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:10.680530  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:11.093537  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:11.094028  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:11.173642  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:11.179449  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:11.593810  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:11.594016  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:11.673368  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:11.680170  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:12.093882  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:12.094051  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:12.173092  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:12.180291  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:12.593875  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:12.594188  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:12.674239  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:12.680127  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:13.093184  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:13.093962  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:13.173953  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:13.179889  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:13.593935  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:13.594480  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:13.674127  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:13.680152  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:14.093083  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:14.093521  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:14.173074  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:14.179979  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:14.594010  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:14.594615  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:14.673022  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:14.679841  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:15.094557  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:15.094790  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:15.173111  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:15.180044  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:15.592703  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:15.593241  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:15.673011  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:15.679543  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:16.093078  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:16.094033  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:16.174034  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:16.180058  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:16.595014  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:16.595425  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:16.673998  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:16.680962  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:17.093712  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:17.094511  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:17.173552  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:17.180520  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:18.099441  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:18.099510  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:18.099888  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:18.099960  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:18.106021  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:18.110275  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:18.173316  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:18.183712  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:18.594754  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:18.595417  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:18.673524  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:18.680456  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:19.094718  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:19.095083  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:19.173692  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:19.179664  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:19.594307  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:19.594686  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:19.673086  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:19.679610  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:20.094697  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:20.094978  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:20.174218  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:20.179700  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:20.593387  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:20.593879  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:20.673194  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:20.679899  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:21.093960  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:21.094078  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:21.173014  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:21.179884  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:21.593870  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:21.594257  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:21.672694  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:21.679347  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:22.094706  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:22.094796  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:22.173211  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:22.179903  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:22.594472  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:22.594806  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:22.673896  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:22.679722  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:23.094632  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:23.094700  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:23.173935  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:23.181858  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:23.594146  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:23.594293  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:23.673429  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:23.685058  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:24.094680  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:24.094690  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:24.174012  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:24.179934  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:24.594408  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:24.595024  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:24.673238  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:24.680203  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:25.093348  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:25.094470  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:25.173322  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:25.181179  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:25.594115  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:25.594938  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:25.673583  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:25.679422  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:26.094560  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:26.094673  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:26.173810  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:26.180276  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:26.593811  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:26.594073  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:26.676332  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:26.680034  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:27.093005  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:27.093065  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:27.174815  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:27.179262  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:27.593547  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:27.593968  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:27.674025  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:27.679142  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:28.093957  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:28.094060  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:28.172651  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:28.179317  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:28.593503  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:28.594792  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:28.673412  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:28.680237  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:29.093876  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:29.094309  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:29.173178  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:29.179854  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:29.594521  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:29.594631  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:29.673591  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:29.680051  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:30.093537  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:30.094642  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:30.173318  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:30.180489  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:30.596012  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:30.598053  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:30.673804  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:30.679650  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:31.093925  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 18:23:31.094284  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:31.172722  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:31.179406  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:31.594413  389930 kapi.go:107] duration metric: took 47.005339132s to wait for kubernetes.io/minikube-addons=registry ...
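
The registry selector above is the first of the four kapi.go waits to finish (about 47s); the other selectors keep polling below. What kapi.go:96 is logging is a simple list-and-sleep loop over a label selector. A minimal client-go sketch of that pattern, assuming a kubeconfig path and a plain 500ms sleep in place of minikube's kapi helpers:

	// Minimal sketch of the wait loop logged by kapi.go:96/107 above: list pods
	// matching a label selector roughly twice a second and return once every
	// match is Running. Uses client-go directly; not minikube's implementation.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitForLabel(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
		for {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			allRunning := len(pods.Items) > 0
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					allRunning = false
				}
			}
			if allRunning {
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(500 * time.Millisecond): // matches the ~0.5s cadence in the log
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitForLabel(context.Background(), cs, "kube-system", "kubernetes.io/minikube-addons=registry"); err != nil {
			panic(err)
		}
		fmt.Println("registry pods Running")
	}
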
	I1030 18:23:31.594473  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:31.673224  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:31.680242  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:32.093337  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:32.194782  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:32.195545  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:32.594498  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:32.673210  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:32.680467  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:33.095585  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:33.175791  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:33.181132  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:33.593373  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:33.693502  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:33.694815  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:34.093428  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:34.173603  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:34.179147  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:34.593421  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:34.673068  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:34.679760  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:35.093607  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:35.173378  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:35.180673  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:35.594407  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:35.672786  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:35.679448  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:36.093916  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:36.173812  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:36.179204  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:36.593558  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:36.673865  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:36.679993  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:37.094100  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:37.173398  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:37.180731  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:37.593635  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:37.673661  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:37.679363  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:38.093449  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:38.172889  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:38.180199  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:38.593670  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:38.673629  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:38.679494  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:39.093793  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:39.173529  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:39.179204  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:39.594358  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:39.673159  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:39.679968  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:40.094909  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:40.173170  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:40.180006  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:40.594746  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:40.673068  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:40.680448  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:41.093633  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:41.173200  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:41.180095  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:41.594547  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:41.673348  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:41.679788  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:42.094533  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:42.173027  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:42.179272  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:42.593664  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:42.673537  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:42.679754  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:43.094375  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:43.173615  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:43.180138  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:43.593449  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:43.673179  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:43.680290  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:44.093485  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:44.173511  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:44.180181  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:44.593750  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:44.675005  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:44.679600  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:45.093955  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:45.173399  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:45.180056  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:45.594095  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:45.673746  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:45.680137  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:46.092974  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:46.173700  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:46.179007  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:46.594855  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:46.673615  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:46.679637  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:47.094229  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:47.172665  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:47.179213  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:47.593129  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:47.673754  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:47.679316  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:48.093801  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:48.173501  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:48.178955  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:48.594585  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:48.673118  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:48.679815  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:49.094235  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:49.172965  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:49.179493  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:49.593729  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:49.673385  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:49.680777  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:50.094291  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:50.177722  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:50.179989  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:50.593948  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:50.694543  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:50.694794  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:51.093603  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:51.173472  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:51.180912  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:51.593913  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:51.673595  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:51.679164  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:52.093259  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:52.173043  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:52.179784  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:52.594570  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:52.673014  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:52.679950  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:53.094306  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:53.173035  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:53.179606  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:53.594125  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:53.673506  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:53.680061  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:54.094386  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:54.173381  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:54.180231  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:54.593300  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:54.672567  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:54.679256  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:55.093827  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:55.173733  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:55.179275  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:55.595288  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:55.674206  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:55.679456  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:56.093919  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:56.173328  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:56.180299  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:56.593461  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:56.672852  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:56.679756  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:57.093681  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:57.173620  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:57.179424  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:57.593912  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:57.673333  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:57.680332  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:58.093818  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:58.173394  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:58.180060  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:58.594281  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:58.672892  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:58.680303  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:59.093698  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:59.173288  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:59.179786  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:23:59.594131  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:23:59.673768  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:23:59.679758  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:00.094467  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:00.173272  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:00.179894  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:00.594040  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:00.673583  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:00.679341  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:01.094018  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:01.173467  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:01.180650  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:01.593821  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:01.673365  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:01.680011  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:02.094158  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:02.174149  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:02.180153  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:02.595432  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:02.695573  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:02.696244  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:03.094451  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:03.174737  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:03.179451  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:03.593866  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:03.674054  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:03.679827  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:04.094115  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:04.173478  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:04.180198  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:04.594090  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:04.673593  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:04.679250  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:05.094353  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:05.172803  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:05.179943  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:05.594096  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:05.673750  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:05.679746  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:06.093972  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:06.173843  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:06.179529  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:06.615729  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:06.673604  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:06.680702  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:07.093812  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:07.173599  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:07.179080  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:07.593404  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:07.673624  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:07.679350  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:08.093326  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:08.173387  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:08.179579  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:08.593688  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:08.673427  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:08.680200  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:09.093636  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:09.173487  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:09.180265  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:09.595037  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:09.673727  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:09.679754  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:10.094006  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:10.173570  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:10.179688  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:10.597969  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:10.676156  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:10.679352  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:11.094048  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:11.173259  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:11.180268  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:11.594513  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:11.673567  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:11.679065  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:12.094203  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:12.172468  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:12.180598  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:12.593626  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:12.673184  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:12.680109  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:13.094536  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:13.173177  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:13.179982  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:13.600465  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:13.674835  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:13.679519  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:14.094296  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:14.173514  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:14.180031  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:14.594409  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:14.674172  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:14.680001  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:15.094774  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:15.173547  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:15.179313  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:15.594436  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:15.677184  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:15.679534  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:16.094057  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:16.174052  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:16.179495  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:16.593755  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:16.673865  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:16.687367  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:17.093528  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:17.173218  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:17.180624  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:17.593975  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:17.673784  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:17.679472  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:18.093220  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:18.173453  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:18.180027  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:18.593267  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:18.685582  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:18.688347  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:19.093356  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:19.173107  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:19.180973  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:19.594839  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:19.673958  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:19.680651  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:20.093927  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:20.173540  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:20.180235  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:20.594184  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:20.673106  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:20.679587  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:21.272035  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:21.272988  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:21.273121  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:21.594877  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:21.673161  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:21.679202  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:22.093995  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:22.192896  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:22.193837  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:22.593187  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:22.672429  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:22.680247  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:23.093955  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:23.173117  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:23.179885  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:23.594077  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:23.673424  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:23.680603  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:24.093990  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:24.195378  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:24.195749  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:24.592999  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:24.673558  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:24.680121  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:25.094313  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:25.173515  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:25.179235  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:25.594548  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:25.673169  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:25.679942  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:26.095069  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:26.173651  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:26.179433  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:26.593173  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:26.674557  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:26.680365  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:27.093953  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:27.194133  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:27.194966  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:27.594461  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:27.672907  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:27.680225  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:28.093262  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:28.172549  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:28.179213  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:28.593588  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:28.673421  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:28.680745  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:29.094332  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:29.195356  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:29.196302  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:29.595102  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:29.673704  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:29.680491  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:30.093968  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:30.173164  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:30.180253  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:30.593982  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:30.673433  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:30.679530  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:31.097174  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:31.194294  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:31.195636  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:31.593982  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:31.694498  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:31.695342  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:32.096092  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:32.180338  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:32.258079  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:32.594665  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:32.673118  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:32.680749  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:33.094653  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:33.173187  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:33.180235  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:33.593028  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:33.673849  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:33.679745  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:34.093717  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:34.193799  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:34.195148  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:34.599179  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:34.696446  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:34.697679  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:35.097007  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:35.196951  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:35.198118  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:35.594192  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:35.673067  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:35.680274  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:36.093565  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:36.173639  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:36.179881  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:36.881593  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:36.881990  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:36.882423  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:37.098958  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:37.195958  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:37.196949  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:37.595421  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:37.674497  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:37.680037  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:38.094458  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:38.173304  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:38.179739  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:38.594120  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:38.676073  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:38.679864  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:39.094825  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:39.194139  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:39.195523  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 18:24:39.595192  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:39.672897  389930 kapi.go:107] duration metric: took 1m54.504397359s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1030 18:24:39.679388  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:40.094319  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:40.179995  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:40.594718  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:40.680403  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:41.095668  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:41.180189  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:41.594599  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:41.679963  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:42.094589  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:42.180100  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:42.593185  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:42.680714  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:43.094711  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:43.180978  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:43.594763  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:43.680837  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:44.094677  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:44.181181  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:44.593310  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:44.681202  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:45.093951  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:45.180476  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:45.594081  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:45.680811  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:46.094359  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:46.180579  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:46.593875  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:46.680975  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:47.094454  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:47.179798  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:47.594397  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:47.680406  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:48.093429  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:48.180873  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:48.594804  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:48.680699  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:49.094275  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:49.194240  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:49.593968  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:49.680637  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:50.095297  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:50.180995  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:50.593739  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:50.680477  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:51.093929  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:51.180674  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:51.593761  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:51.680046  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:52.093438  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:52.180854  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:52.594874  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:52.680910  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:53.094637  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:53.179699  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:53.593709  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:53.680243  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:54.093878  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:54.193344  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:54.594089  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:54.680748  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:55.094444  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:55.180010  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:55.594584  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:55.680697  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:56.094356  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:56.180240  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:56.594302  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:56.680834  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:57.094655  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:57.180457  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:57.593983  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:57.681035  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:58.094540  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:58.180966  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:58.594113  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:58.680890  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:59.094284  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:59.193734  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:24:59.594460  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:24:59.680225  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:00.096127  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:25:00.180877  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:00.594146  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:25:00.680164  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:01.093394  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:25:01.181049  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:01.594128  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:25:01.680709  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:02.094832  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:25:02.180069  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:02.593917  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:25:02.685615  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:03.095000  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:25:03.180249  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:03.593404  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:25:03.680224  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:04.124310  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:25:04.223096  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:04.594530  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:25:04.680995  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:05.096377  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:25:05.195014  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:05.594615  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:25:05.682827  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:06.094286  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:25:06.180987  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:06.594203  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:25:06.681056  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:07.093925  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:25:07.181643  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:07.594263  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:25:07.680837  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:08.094185  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:25:08.180684  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:08.594282  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:25:08.681430  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:09.094048  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:25:09.180799  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:09.593969  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:25:09.680179  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:10.093325  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:25:10.193770  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:10.593981  389930 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 18:25:10.680548  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:11.094538  389930 kapi.go:107] duration metric: took 2m26.505393433s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1030 18:25:11.194099  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:11.680740  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:12.180310  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:12.680788  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:13.181506  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:13.680790  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:14.180667  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:14.681033  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:15.192632  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:15.680321  389930 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 18:25:16.181575  389930 kapi.go:107] duration metric: took 2m29.50496643s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1030 18:25:16.183393  389930 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-819803 cluster.
	I1030 18:25:16.184880  389930 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1030 18:25:16.186226  389930 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1030 18:25:16.187921  389930 out.go:177] * Enabled addons: amd-gpu-device-plugin, cloud-spanner, ingress-dns, nvidia-device-plugin, storage-provisioner-rancher, inspektor-gadget, storage-provisioner, metrics-server, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1030 18:25:16.189134  389930 addons.go:510] duration metric: took 2m40.754794981s for enable addons: enabled=[amd-gpu-device-plugin cloud-spanner ingress-dns nvidia-device-plugin storage-provisioner-rancher inspektor-gadget storage-provisioner metrics-server yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1030 18:25:16.189188  389930 start.go:246] waiting for cluster config update ...
	I1030 18:25:16.189208  389930 start.go:255] writing updated cluster config ...
	I1030 18:25:16.189476  389930 ssh_runner.go:195] Run: rm -f paused
	I1030 18:25:16.241409  389930 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1030 18:25:16.243157  389930 out.go:177] * Done! kubectl is now configured to use "addons-819803" cluster and "default" namespace by default
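As the gcp-auth messages above indicate, individual pods can opt out of having GCP credentials mounted by carrying a label with the `gcp-auth-skip-secret` key. A minimal sketch of such a pod spec is shown below; the pod name is hypothetical, the image is one that already appears in this report's container list, and the label value "true" is an assumption (the log message only names the key):

    # sketch only: opt a single pod out of gcp-auth credential mounting
    apiVersion: v1
    kind: Pod
    metadata:
      name: skip-gcp-auth-demo              # hypothetical name, for illustration
      labels:
        gcp-auth-skip-secret: "true"        # key from the log hint; value "true" is an assumption
    spec:
      containers:
      - name: busybox
        image: gcr.io/k8s-minikube/busybox  # image taken from the container list later in this report
        command: ["sleep", "3600"]

Per the hint above, pods that already exist do not pick up credential mounting retroactively; they would need to be recreated, or the addon rerun with the refresh flag, e.g. `minikube addons enable gcp-auth --refresh`.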
	
	
	==> CRI-O <==
	Oct 30 18:31:42 addons-819803 crio[666]: time="2024-10-30 18:31:42.792923642Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313102792854654,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=42b9d852-42b4-412b-a0c7-e48382829dc8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 18:31:42 addons-819803 crio[666]: time="2024-10-30 18:31:42.793546301Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=261918c4-0886-4def-a974-8fad06ce3a65 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:31:42 addons-819803 crio[666]: time="2024-10-30 18:31:42.793597199Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=261918c4-0886-4def-a974-8fad06ce3a65 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:31:42 addons-819803 crio[666]: time="2024-10-30 18:31:42.793863198Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4ecf84da401b69352f2fbdba9f527bd64c5c4f1bbc91a78c9e9da334acc2898b,PodSandboxId:20740f6d93f54f8438244936b7a2473d3009679b2036672f630e2cab7e143bc0,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1730312905313891296,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-srpgj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a0d95ce8-668d-4d45-a042-299981601dff,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8323c4d9fbaf0c66571632dba8d4881c87cfefd2ec57ca7062f064f81e7c5893,PodSandboxId:dd5e3d5f78cee776401f36c28bfe601ceed4bd2dddaa8d81f49644a6d19a27e4,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1730312762268600451,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 591e33e1-09d2-4f5f-a6aa-40bfd9a7ced3,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23b8c4da959667d294cba6436afb554b2174d4cc88063b285154eb5aa317466c,PodSandboxId:d6196190a1f0ed3cb0af52236e10c7ac2a9cd7fe17c789fc09c79df4406ed58a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730312724638127727,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b594df1b-adba-4e23-9
3cc-29d66c8cf9f1,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e337d053d97a92d039405648417e9d4381847bcaf89593fd3271116bd08fa96c,PodSandboxId:634b9431071e538393779d3162cb51cac0c34513572937eeaab2ab49f637d1c3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1730312612750520239,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-trqq2,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 07b3ba0b-ec2c-4a8f-83d7-318abeb6f80a,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a923e1439650545e8eba44270ae4fc134b8069b8213e6fec6099edb1af4914b4,PodSandboxId:6e3d73e7493a957e845bbbec6311138324ff04d85de9bd900eff6182edd40c79,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1730312577398411575,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-sdqnr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 087eef61-5115-41c9-aa53-29d2c8c23625,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f46b0ca80854c9d1a66f9fca2789c69b6e2acd673897114ae279a484bcf1a86,PodSandboxId:c0244d7740a711982ed21498d613463d98fdd1ad118cd09cf6e5666e490c2b1e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730312562269541651,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38cd898a-c7f7-48d4-ac94-2b6cd4fd4b5f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d67f4b1b6f0d22044a227762b0d881ed07ca311ea1af4818d97727e215999fa8,PodSandboxId:b5c24c6a457fb9a3566a63c75a2961635db9ce7e1511b05d82052e217c3565e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730312559683223270,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r6bct,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1dd60ad-70bb-40a7-8dfb-d6b9f0ab48ee,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aabc7e519d19dd85ae69ce724ad72cbeb649f9379e4c3ee94f07ff46313ec53,PodSandboxId:95ba2e67708c3bfd5dba3bf078266b1a35696ba0ac9f1695aaa20c561b078bcf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730312556523475889,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h64nt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f813bf3-f5de-4af3-87eb-4a429a334e7f,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e70c279e30dccf88f1158aa09582f2908e24606a3cd26e75ebda3f31afe708cb,PodSandboxId:3432d90eee1bf775fe75447be1dcfdcf466dd37469590155fc8cc5a0ac612c62,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26
915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730312545004348988,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-819803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e30a9531527f91cac7d80543a7c67b8b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:430e9b4f16ec17dd1e0c8f6f34aad5ddf7b5bb9c5ede50c4087533e9327fa8ac,PodSandboxId:638afc3dd036198e27a7af48810f35f6d215db95594ed6c687678d371c16be0c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e
294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730312544993211192,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-819803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e51bdee3c48682a1d27375eee86f91f4,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d74745cb948287a4c2cf27f2ba2eb0b19c2c042b7ce81a8795557cefc4bd271,PodSandboxId:4c3f08b7bad80293feeec80a225c62a7fea5a8d1d060f4ed0ce52f84d2d50627,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b4450
3,State:CONTAINER_RUNNING,CreatedAt:1730312544962298197,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-819803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59aa5f01b68c2947293545aebe8e4550,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:805059b66c5778a497faa6df32264b173a98ee6caeb96db32e0b293cab94ae3b,PodSandboxId:51945c273e8d3f5c6cc9d342f266df358e56642fe17537ba73b84ad754de188a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a491
73,State:CONTAINER_RUNNING,CreatedAt:1730312544924884918,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-819803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e00f20a18d20e63cdeb94703c7aefb4a,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=261918c4-0886-4def-a974-8fad06ce3a65 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:31:42 addons-819803 crio[666]: time="2024-10-30 18:31:42.832476547Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=095ebdec-52f7-4894-9a42-d9d8388bf661 name=/runtime.v1.RuntimeService/Version
	Oct 30 18:31:42 addons-819803 crio[666]: time="2024-10-30 18:31:42.832565369Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=095ebdec-52f7-4894-9a42-d9d8388bf661 name=/runtime.v1.RuntimeService/Version
	Oct 30 18:31:42 addons-819803 crio[666]: time="2024-10-30 18:31:42.833478822Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b7cc60de-aed7-4bdc-9bdf-76f05a90b438 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 18:31:42 addons-819803 crio[666]: time="2024-10-30 18:31:42.834698277Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313102834673770,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b7cc60de-aed7-4bdc-9bdf-76f05a90b438 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 18:31:42 addons-819803 crio[666]: time="2024-10-30 18:31:42.835693089Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3933bb52-3bc5-4d31-9f6a-4b50888e74be name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:31:42 addons-819803 crio[666]: time="2024-10-30 18:31:42.835819384Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3933bb52-3bc5-4d31-9f6a-4b50888e74be name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:31:42 addons-819803 crio[666]: time="2024-10-30 18:31:42.836408174Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4ecf84da401b69352f2fbdba9f527bd64c5c4f1bbc91a78c9e9da334acc2898b,PodSandboxId:20740f6d93f54f8438244936b7a2473d3009679b2036672f630e2cab7e143bc0,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1730312905313891296,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-srpgj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a0d95ce8-668d-4d45-a042-299981601dff,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8323c4d9fbaf0c66571632dba8d4881c87cfefd2ec57ca7062f064f81e7c5893,PodSandboxId:dd5e3d5f78cee776401f36c28bfe601ceed4bd2dddaa8d81f49644a6d19a27e4,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1730312762268600451,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 591e33e1-09d2-4f5f-a6aa-40bfd9a7ced3,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23b8c4da959667d294cba6436afb554b2174d4cc88063b285154eb5aa317466c,PodSandboxId:d6196190a1f0ed3cb0af52236e10c7ac2a9cd7fe17c789fc09c79df4406ed58a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730312724638127727,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b594df1b-adba-4e23-9
3cc-29d66c8cf9f1,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e337d053d97a92d039405648417e9d4381847bcaf89593fd3271116bd08fa96c,PodSandboxId:634b9431071e538393779d3162cb51cac0c34513572937eeaab2ab49f637d1c3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1730312612750520239,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-trqq2,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 07b3ba0b-ec2c-4a8f-83d7-318abeb6f80a,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a923e1439650545e8eba44270ae4fc134b8069b8213e6fec6099edb1af4914b4,PodSandboxId:6e3d73e7493a957e845bbbec6311138324ff04d85de9bd900eff6182edd40c79,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1730312577398411575,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-sdqnr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 087eef61-5115-41c9-aa53-29d2c8c23625,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f46b0ca80854c9d1a66f9fca2789c69b6e2acd673897114ae279a484bcf1a86,PodSandboxId:c0244d7740a711982ed21498d613463d98fdd1ad118cd09cf6e5666e490c2b1e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730312562269541651,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38cd898a-c7f7-48d4-ac94-2b6cd4fd4b5f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d67f4b1b6f0d22044a227762b0d881ed07ca311ea1af4818d97727e215999fa8,PodSandboxId:b5c24c6a457fb9a3566a63c75a2961635db9ce7e1511b05d82052e217c3565e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730312559683223270,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r6bct,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1dd60ad-70bb-40a7-8dfb-d6b9f0ab48ee,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aabc7e519d19dd85ae69ce724ad72cbeb649f9379e4c3ee94f07ff46313ec53,PodSandboxId:95ba2e67708c3bfd5dba3bf078266b1a35696ba0ac9f1695aaa20c561b078bcf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730312556523475889,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h64nt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f813bf3-f5de-4af3-87eb-4a429a334e7f,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e70c279e30dccf88f1158aa09582f2908e24606a3cd26e75ebda3f31afe708cb,PodSandboxId:3432d90eee1bf775fe75447be1dcfdcf466dd37469590155fc8cc5a0ac612c62,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26
915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730312545004348988,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-819803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e30a9531527f91cac7d80543a7c67b8b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:430e9b4f16ec17dd1e0c8f6f34aad5ddf7b5bb9c5ede50c4087533e9327fa8ac,PodSandboxId:638afc3dd036198e27a7af48810f35f6d215db95594ed6c687678d371c16be0c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e
294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730312544993211192,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-819803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e51bdee3c48682a1d27375eee86f91f4,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d74745cb948287a4c2cf27f2ba2eb0b19c2c042b7ce81a8795557cefc4bd271,PodSandboxId:4c3f08b7bad80293feeec80a225c62a7fea5a8d1d060f4ed0ce52f84d2d50627,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b4450
3,State:CONTAINER_RUNNING,CreatedAt:1730312544962298197,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-819803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59aa5f01b68c2947293545aebe8e4550,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:805059b66c5778a497faa6df32264b173a98ee6caeb96db32e0b293cab94ae3b,PodSandboxId:51945c273e8d3f5c6cc9d342f266df358e56642fe17537ba73b84ad754de188a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a491
73,State:CONTAINER_RUNNING,CreatedAt:1730312544924884918,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-819803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e00f20a18d20e63cdeb94703c7aefb4a,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3933bb52-3bc5-4d31-9f6a-4b50888e74be name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:31:42 addons-819803 crio[666]: time="2024-10-30 18:31:42.877032779Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c37ca864-a177-4d6e-819a-c15c539a6d1c name=/runtime.v1.RuntimeService/Version
	Oct 30 18:31:42 addons-819803 crio[666]: time="2024-10-30 18:31:42.877125939Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c37ca864-a177-4d6e-819a-c15c539a6d1c name=/runtime.v1.RuntimeService/Version
	Oct 30 18:31:42 addons-819803 crio[666]: time="2024-10-30 18:31:42.878454010Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6942aac4-dd1f-457d-986a-9e66c3f4bd7c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 18:31:42 addons-819803 crio[666]: time="2024-10-30 18:31:42.879633948Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313102879607388,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6942aac4-dd1f-457d-986a-9e66c3f4bd7c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 18:31:42 addons-819803 crio[666]: time="2024-10-30 18:31:42.880316442Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1433ad8b-df5c-4cdc-aaf5-d5aec7b4e9c3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:31:42 addons-819803 crio[666]: time="2024-10-30 18:31:42.880451473Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1433ad8b-df5c-4cdc-aaf5-d5aec7b4e9c3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:31:42 addons-819803 crio[666]: time="2024-10-30 18:31:42.880724048Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4ecf84da401b69352f2fbdba9f527bd64c5c4f1bbc91a78c9e9da334acc2898b,PodSandboxId:20740f6d93f54f8438244936b7a2473d3009679b2036672f630e2cab7e143bc0,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1730312905313891296,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-srpgj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a0d95ce8-668d-4d45-a042-299981601dff,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8323c4d9fbaf0c66571632dba8d4881c87cfefd2ec57ca7062f064f81e7c5893,PodSandboxId:dd5e3d5f78cee776401f36c28bfe601ceed4bd2dddaa8d81f49644a6d19a27e4,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1730312762268600451,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 591e33e1-09d2-4f5f-a6aa-40bfd9a7ced3,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23b8c4da959667d294cba6436afb554b2174d4cc88063b285154eb5aa317466c,PodSandboxId:d6196190a1f0ed3cb0af52236e10c7ac2a9cd7fe17c789fc09c79df4406ed58a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730312724638127727,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b594df1b-adba-4e23-9
3cc-29d66c8cf9f1,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e337d053d97a92d039405648417e9d4381847bcaf89593fd3271116bd08fa96c,PodSandboxId:634b9431071e538393779d3162cb51cac0c34513572937eeaab2ab49f637d1c3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1730312612750520239,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-trqq2,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 07b3ba0b-ec2c-4a8f-83d7-318abeb6f80a,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a923e1439650545e8eba44270ae4fc134b8069b8213e6fec6099edb1af4914b4,PodSandboxId:6e3d73e7493a957e845bbbec6311138324ff04d85de9bd900eff6182edd40c79,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1730312577398411575,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-sdqnr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 087eef61-5115-41c9-aa53-29d2c8c23625,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f46b0ca80854c9d1a66f9fca2789c69b6e2acd673897114ae279a484bcf1a86,PodSandboxId:c0244d7740a711982ed21498d613463d98fdd1ad118cd09cf6e5666e490c2b1e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730312562269541651,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38cd898a-c7f7-48d4-ac94-2b6cd4fd4b5f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d67f4b1b6f0d22044a227762b0d881ed07ca311ea1af4818d97727e215999fa8,PodSandboxId:b5c24c6a457fb9a3566a63c75a2961635db9ce7e1511b05d82052e217c3565e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730312559683223270,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r6bct,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1dd60ad-70bb-40a7-8dfb-d6b9f0ab48ee,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aabc7e519d19dd85ae69ce724ad72cbeb649f9379e4c3ee94f07ff46313ec53,PodSandboxId:95ba2e67708c3bfd5dba3bf078266b1a35696ba0ac9f1695aaa20c561b078bcf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730312556523475889,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h64nt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f813bf3-f5de-4af3-87eb-4a429a334e7f,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e70c279e30dccf88f1158aa09582f2908e24606a3cd26e75ebda3f31afe708cb,PodSandboxId:3432d90eee1bf775fe75447be1dcfdcf466dd37469590155fc8cc5a0ac612c62,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26
915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730312545004348988,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-819803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e30a9531527f91cac7d80543a7c67b8b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:430e9b4f16ec17dd1e0c8f6f34aad5ddf7b5bb9c5ede50c4087533e9327fa8ac,PodSandboxId:638afc3dd036198e27a7af48810f35f6d215db95594ed6c687678d371c16be0c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e
294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730312544993211192,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-819803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e51bdee3c48682a1d27375eee86f91f4,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d74745cb948287a4c2cf27f2ba2eb0b19c2c042b7ce81a8795557cefc4bd271,PodSandboxId:4c3f08b7bad80293feeec80a225c62a7fea5a8d1d060f4ed0ce52f84d2d50627,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b4450
3,State:CONTAINER_RUNNING,CreatedAt:1730312544962298197,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-819803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59aa5f01b68c2947293545aebe8e4550,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:805059b66c5778a497faa6df32264b173a98ee6caeb96db32e0b293cab94ae3b,PodSandboxId:51945c273e8d3f5c6cc9d342f266df358e56642fe17537ba73b84ad754de188a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a491
73,State:CONTAINER_RUNNING,CreatedAt:1730312544924884918,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-819803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e00f20a18d20e63cdeb94703c7aefb4a,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1433ad8b-df5c-4cdc-aaf5-d5aec7b4e9c3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:31:42 addons-819803 crio[666]: time="2024-10-30 18:31:42.914229350Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7339d228-f296-4041-a140-6dd59bbb6242 name=/runtime.v1.RuntimeService/Version
	Oct 30 18:31:42 addons-819803 crio[666]: time="2024-10-30 18:31:42.914343998Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7339d228-f296-4041-a140-6dd59bbb6242 name=/runtime.v1.RuntimeService/Version
	Oct 30 18:31:42 addons-819803 crio[666]: time="2024-10-30 18:31:42.916362308Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1f488431-9bf3-4a71-84eb-397dda0bfa5d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 18:31:42 addons-819803 crio[666]: time="2024-10-30 18:31:42.917654207Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313102917515454,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1f488431-9bf3-4a71-84eb-397dda0bfa5d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 18:31:42 addons-819803 crio[666]: time="2024-10-30 18:31:42.918522231Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d5e97688-5019-4b10-98de-d6fa56bba58b name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:31:42 addons-819803 crio[666]: time="2024-10-30 18:31:42.918598309Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d5e97688-5019-4b10-98de-d6fa56bba58b name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:31:42 addons-819803 crio[666]: time="2024-10-30 18:31:42.918878878Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4ecf84da401b69352f2fbdba9f527bd64c5c4f1bbc91a78c9e9da334acc2898b,PodSandboxId:20740f6d93f54f8438244936b7a2473d3009679b2036672f630e2cab7e143bc0,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1730312905313891296,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-srpgj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a0d95ce8-668d-4d45-a042-299981601dff,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8323c4d9fbaf0c66571632dba8d4881c87cfefd2ec57ca7062f064f81e7c5893,PodSandboxId:dd5e3d5f78cee776401f36c28bfe601ceed4bd2dddaa8d81f49644a6d19a27e4,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1730312762268600451,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 591e33e1-09d2-4f5f-a6aa-40bfd9a7ced3,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23b8c4da959667d294cba6436afb554b2174d4cc88063b285154eb5aa317466c,PodSandboxId:d6196190a1f0ed3cb0af52236e10c7ac2a9cd7fe17c789fc09c79df4406ed58a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730312724638127727,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b594df1b-adba-4e23-9
3cc-29d66c8cf9f1,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e337d053d97a92d039405648417e9d4381847bcaf89593fd3271116bd08fa96c,PodSandboxId:634b9431071e538393779d3162cb51cac0c34513572937eeaab2ab49f637d1c3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1730312612750520239,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-trqq2,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 07b3ba0b-ec2c-4a8f-83d7-318abeb6f80a,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a923e1439650545e8eba44270ae4fc134b8069b8213e6fec6099edb1af4914b4,PodSandboxId:6e3d73e7493a957e845bbbec6311138324ff04d85de9bd900eff6182edd40c79,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1730312577398411575,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-sdqnr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 087eef61-5115-41c9-aa53-29d2c8c23625,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f46b0ca80854c9d1a66f9fca2789c69b6e2acd673897114ae279a484bcf1a86,PodSandboxId:c0244d7740a711982ed21498d613463d98fdd1ad118cd09cf6e5666e490c2b1e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730312562269541651,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38cd898a-c7f7-48d4-ac94-2b6cd4fd4b5f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d67f4b1b6f0d22044a227762b0d881ed07ca311ea1af4818d97727e215999fa8,PodSandboxId:b5c24c6a457fb9a3566a63c75a2961635db9ce7e1511b05d82052e217c3565e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730312559683223270,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r6bct,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1dd60ad-70bb-40a7-8dfb-d6b9f0ab48ee,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aabc7e519d19dd85ae69ce724ad72cbeb649f9379e4c3ee94f07ff46313ec53,PodSandboxId:95ba2e67708c3bfd5dba3bf078266b1a35696ba0ac9f1695aaa20c561b078bcf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730312556523475889,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h64nt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f813bf3-f5de-4af3-87eb-4a429a334e7f,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e70c279e30dccf88f1158aa09582f2908e24606a3cd26e75ebda3f31afe708cb,PodSandboxId:3432d90eee1bf775fe75447be1dcfdcf466dd37469590155fc8cc5a0ac612c62,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26
915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730312545004348988,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-819803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e30a9531527f91cac7d80543a7c67b8b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:430e9b4f16ec17dd1e0c8f6f34aad5ddf7b5bb9c5ede50c4087533e9327fa8ac,PodSandboxId:638afc3dd036198e27a7af48810f35f6d215db95594ed6c687678d371c16be0c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e
294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730312544993211192,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-819803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e51bdee3c48682a1d27375eee86f91f4,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d74745cb948287a4c2cf27f2ba2eb0b19c2c042b7ce81a8795557cefc4bd271,PodSandboxId:4c3f08b7bad80293feeec80a225c62a7fea5a8d1d060f4ed0ce52f84d2d50627,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b4450
3,State:CONTAINER_RUNNING,CreatedAt:1730312544962298197,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-819803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59aa5f01b68c2947293545aebe8e4550,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:805059b66c5778a497faa6df32264b173a98ee6caeb96db32e0b293cab94ae3b,PodSandboxId:51945c273e8d3f5c6cc9d342f266df358e56642fe17537ba73b84ad754de188a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a491
73,State:CONTAINER_RUNNING,CreatedAt:1730312544924884918,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-819803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e00f20a18d20e63cdeb94703c7aefb4a,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d5e97688-5019-4b10-98de-d6fa56bba58b name=/runtime.v1.RuntimeService/ListContainers
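
The Version, ImageFsInfo, and ListContainers entries repeated above are CRI (runtime.v1) calls that CRI-O serves and logs at debug level through its otel-collector interceptors. As a rough sketch (not part of the test run; the profile name and socket path are taken from this report), the same endpoints can be queried by hand from inside the minikube guest with crictl:

    minikube ssh -p addons-819803                                               # shell into the addons-819803 guest
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version       # RuntimeService/Version
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo   # ImageService/ImageFsInfo
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a         # RuntimeService/ListContainers (all states)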
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4ecf84da401b6       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   3 minutes ago       Running             hello-world-app           0                   20740f6d93f54       hello-world-app-55bf9c44b4-srpgj
	8323c4d9fbaf0       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                         5 minutes ago       Running             nginx                     0                   dd5e3d5f78cee       nginx
	23b8c4da95966       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   d6196190a1f0e       busybox
	e337d053d97a9       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   8 minutes ago       Running             metrics-server            0                   634b9431071e5       metrics-server-84c5f94fbc-trqq2
	a923e14396505       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                8 minutes ago       Running             amd-gpu-device-plugin     0                   6e3d73e7493a9       amd-gpu-device-plugin-sdqnr
	1f46b0ca80854       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        9 minutes ago       Running             storage-provisioner       0                   c0244d7740a71       storage-provisioner
	d67f4b1b6f0d2       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        9 minutes ago       Running             coredns                   0                   b5c24c6a457fb       coredns-7c65d6cfc9-r6bct
	8aabc7e519d19       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                        9 minutes ago       Running             kube-proxy                0                   95ba2e67708c3       kube-proxy-h64nt
	e70c279e30dcc       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        9 minutes ago       Running             etcd                      0                   3432d90eee1bf       etcd-addons-819803
	430e9b4f16ec1       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                        9 minutes ago       Running             kube-scheduler            0                   638afc3dd0361       kube-scheduler-addons-819803
	3d74745cb9482       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                        9 minutes ago       Running             kube-controller-manager   0                   4c3f08b7bad80       kube-controller-manager-addons-819803
	805059b66c577       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                        9 minutes ago       Running             kube-apiserver            0                   51945c273e8d3       kube-apiserver-addons-819803
	
	
	==> coredns [d67f4b1b6f0d22044a227762b0d881ed07ca311ea1af4818d97727e215999fa8] <==
	[INFO] 10.244.0.22:37165 - 26433 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000062456s
	[INFO] 10.244.0.22:37165 - 43494 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000059067s
	[INFO] 10.244.0.22:37165 - 12749 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000054865s
	[INFO] 10.244.0.22:37165 - 11761 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000099337s
	[INFO] 10.244.0.22:33850 - 28416 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000099476s
	[INFO] 10.244.0.22:33850 - 23071 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000075704s
	[INFO] 10.244.0.22:33850 - 44457 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000103469s
	[INFO] 10.244.0.22:33850 - 52879 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000099317s
	[INFO] 10.244.0.22:33850 - 592 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000058851s
	[INFO] 10.244.0.22:33850 - 45155 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000063656s
	[INFO] 10.244.0.22:33850 - 62677 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000105414s
	[INFO] 10.244.0.22:36698 - 49804 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000107461s
	[INFO] 10.244.0.22:36698 - 4962 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000039282s
	[INFO] 10.244.0.22:36698 - 17855 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000035834s
	[INFO] 10.244.0.22:36698 - 23669 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000032192s
	[INFO] 10.244.0.22:36698 - 38717 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000028057s
	[INFO] 10.244.0.22:36698 - 3148 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000034148s
	[INFO] 10.244.0.22:36698 - 60888 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000030056s
	[INFO] 10.244.0.22:33403 - 16716 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000079384s
	[INFO] 10.244.0.22:33403 - 7630 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000187287s
	[INFO] 10.244.0.22:33403 - 22907 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000776599s
	[INFO] 10.244.0.22:33403 - 61014 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000072185s
	[INFO] 10.244.0.22:33403 - 48258 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000091008s
	[INFO] 10.244.0.22:33403 - 28184 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000487907s
	[INFO] 10.244.0.22:33403 - 28084 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000193173s
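
The NXDOMAIN/NOERROR pattern above is ordinary cluster-DNS search-path expansion rather than a failure: with the default ndots resolver setting, the client tries each of its search suffixes (here ingress-nginx.svc.cluster.local, svc.cluster.local, cluster.local, which suggests the queries come from a pod in the ingress-nginx namespace resolving the hello-world-app backend) before the fully qualified hello-world-app.default.svc.cluster.local answers NOERROR. A comparable lookup can be reproduced from the busybox pod created earlier in this run (illustrative command; the pod and context names are taken from this report):

    kubectl --context addons-819803 exec busybox -- nslookup hello-world-app.default.svc.cluster.local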
	
	
	==> describe nodes <==
	Name:               addons-819803
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-819803
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0
	                    minikube.k8s.io/name=addons-819803
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_30T18_22_31_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-819803
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Oct 2024 18:22:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-819803
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Oct 2024 18:31:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Oct 2024 18:28:37 +0000   Wed, 30 Oct 2024 18:22:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Oct 2024 18:28:37 +0000   Wed, 30 Oct 2024 18:22:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Oct 2024 18:28:37 +0000   Wed, 30 Oct 2024 18:22:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Oct 2024 18:28:37 +0000   Wed, 30 Oct 2024 18:22:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.211
	  Hostname:    addons-819803
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 3384241bc3144ca39ea65062097c3a72
	  System UUID:                3384241b-c314-4ca3-9ea6-5062097c3a72
	  Boot ID:                    e76ddacb-724b-468c-9414-b0b4a3bd3a72
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m25s
	  default                     hello-world-app-55bf9c44b4-srpgj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m45s
	  kube-system                 amd-gpu-device-plugin-sdqnr              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	  kube-system                 coredns-7c65d6cfc9-r6bct                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     9m8s
	  kube-system                 etcd-addons-819803                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         9m14s
	  kube-system                 kube-apiserver-addons-819803             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 kube-controller-manager-addons-819803    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 kube-proxy-h64nt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	  kube-system                 kube-scheduler-addons-819803             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 metrics-server-84c5f94fbc-trqq2          100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         9m3s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m5s                   kube-proxy       
	  Normal  Starting                 9m19s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m19s (x8 over 9m19s)  kubelet          Node addons-819803 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m19s (x8 over 9m19s)  kubelet          Node addons-819803 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m19s (x7 over 9m19s)  kubelet          Node addons-819803 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m13s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m13s                  kubelet          Node addons-819803 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m13s                  kubelet          Node addons-819803 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m13s                  kubelet          Node addons-819803 status is now: NodeHasSufficientPID
	  Normal  NodeReady                9m12s                  kubelet          Node addons-819803 status is now: NodeReady
	  Normal  RegisteredNode           9m9s                   node-controller  Node addons-819803 event: Registered Node addons-819803 in Controller
	
	
	==> dmesg <==
	[  +5.054874] kauditd_printk_skb: 110 callbacks suppressed
	[  +5.020576] kauditd_printk_skb: 151 callbacks suppressed
	[  +7.562173] kauditd_printk_skb: 68 callbacks suppressed
	[Oct30 18:23] kauditd_printk_skb: 2 callbacks suppressed
	[Oct30 18:24] kauditd_printk_skb: 2 callbacks suppressed
	[ +12.072000] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.013814] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.006199] kauditd_printk_skb: 23 callbacks suppressed
	[  +6.634532] kauditd_printk_skb: 43 callbacks suppressed
	[Oct30 18:25] kauditd_printk_skb: 15 callbacks suppressed
	[ +12.654005] kauditd_printk_skb: 9 callbacks suppressed
	[ +18.771720] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.535362] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.312718] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.474003] kauditd_printk_skb: 20 callbacks suppressed
	[Oct30 18:26] kauditd_printk_skb: 42 callbacks suppressed
	[  +5.908246] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.193428] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.279822] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.724164] kauditd_printk_skb: 6 callbacks suppressed
	[ +14.848615] kauditd_printk_skb: 25 callbacks suppressed
	[  +8.862857] kauditd_printk_skb: 7 callbacks suppressed
	[Oct30 18:27] kauditd_printk_skb: 49 callbacks suppressed
	[Oct30 18:28] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.069059] kauditd_printk_skb: 17 callbacks suppressed
	
	
	==> etcd [e70c279e30dccf88f1158aa09582f2908e24606a3cd26e75ebda3f31afe708cb] <==
	{"level":"info","ts":"2024-10-30T18:24:36.862254Z","caller":"traceutil/trace.go:171","msg":"trace[2114230265] linearizableReadLoop","detail":"{readStateIndex:1201; appliedIndex:1200; }","duration":"285.484408ms","start":"2024-10-30T18:24:36.576757Z","end":"2024-10-30T18:24:36.862241Z","steps":["trace[2114230265] 'read index received'  (duration: 285.287877ms)","trace[2114230265] 'applied index is now lower than readState.Index'  (duration: 194.427µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-30T18:24:36.862616Z","caller":"traceutil/trace.go:171","msg":"trace[685996133] transaction","detail":"{read_only:false; response_revision:1157; number_of_response:1; }","duration":"411.616884ms","start":"2024-10-30T18:24:36.450988Z","end":"2024-10-30T18:24:36.862605Z","steps":["trace[685996133] 'process raft request'  (duration: 411.097015ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-30T18:24:36.862744Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"206.881505ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-30T18:24:36.862786Z","caller":"traceutil/trace.go:171","msg":"trace[666217286] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1157; }","duration":"206.919387ms","start":"2024-10-30T18:24:36.655858Z","end":"2024-10-30T18:24:36.862778Z","steps":["trace[666217286] 'agreement among raft nodes before linearized reading'  (duration: 206.872113ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-30T18:24:36.862862Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-30T18:24:36.450974Z","time spent":"411.755285ms","remote":"127.0.0.1:52196","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":844,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/nvidia-device-plugin-daemonset-s2tw8.18034e1ab96fa504\" mod_revision:928 > success:<request_put:<key:\"/registry/events/kube-system/nvidia-device-plugin-daemonset-s2tw8.18034e1ab96fa504\" value_size:744 lease:2079980087949946493 >> failure:<request_range:<key:\"/registry/events/kube-system/nvidia-device-plugin-daemonset-s2tw8.18034e1ab96fa504\" > >"}
	{"level":"warn","ts":"2024-10-30T18:24:36.862924Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"199.192764ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-30T18:24:36.862960Z","caller":"traceutil/trace.go:171","msg":"trace[13747596] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1157; }","duration":"199.227938ms","start":"2024-10-30T18:24:36.663726Z","end":"2024-10-30T18:24:36.862954Z","steps":["trace[13747596] 'agreement among raft nodes before linearized reading'  (duration: 199.183913ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-30T18:24:36.862698Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"285.915829ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-30T18:24:36.863081Z","caller":"traceutil/trace.go:171","msg":"trace[916201317] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1157; }","duration":"286.321038ms","start":"2024-10-30T18:24:36.576753Z","end":"2024-10-30T18:24:36.863074Z","steps":["trace[916201317] 'agreement among raft nodes before linearized reading'  (duration: 285.784948ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-30T18:25:14.126413Z","caller":"traceutil/trace.go:171","msg":"trace[1573345284] transaction","detail":"{read_only:false; response_revision:1248; number_of_response:1; }","duration":"122.488475ms","start":"2024-10-30T18:25:14.003898Z","end":"2024-10-30T18:25:14.126387Z","steps":["trace[1573345284] 'process raft request'  (duration: 122.401753ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-30T18:25:49.128483Z","caller":"traceutil/trace.go:171","msg":"trace[1088917934] transaction","detail":"{read_only:false; response_revision:1416; number_of_response:1; }","duration":"254.328792ms","start":"2024-10-30T18:25:48.874129Z","end":"2024-10-30T18:25:49.128458Z","steps":["trace[1088917934] 'process raft request'  (duration: 254.240768ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-30T18:26:25.545344Z","caller":"traceutil/trace.go:171","msg":"trace[1172046373] transaction","detail":"{read_only:false; response_revision:1651; number_of_response:1; }","duration":"343.457009ms","start":"2024-10-30T18:26:25.201863Z","end":"2024-10-30T18:26:25.545320Z","steps":["trace[1172046373] 'process raft request'  (duration: 343.328222ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-30T18:26:25.545686Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-30T18:26:25.201848Z","time spent":"343.672014ms","remote":"127.0.0.1:52384","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":538,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1645 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:451 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"info","ts":"2024-10-30T18:26:25.546088Z","caller":"traceutil/trace.go:171","msg":"trace[1067557042] linearizableReadLoop","detail":"{readStateIndex:1725; appliedIndex:1725; }","duration":"198.691834ms","start":"2024-10-30T18:26:25.347378Z","end":"2024-10-30T18:26:25.546069Z","steps":["trace[1067557042] 'read index received'  (duration: 198.689653ms)","trace[1067557042] 'applied index is now lower than readState.Index'  (duration: 1.712µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-30T18:26:25.547301Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"199.909199ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-30T18:26:25.547409Z","caller":"traceutil/trace.go:171","msg":"trace[971830471] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1651; }","duration":"200.020443ms","start":"2024-10-30T18:26:25.347374Z","end":"2024-10-30T18:26:25.547394Z","steps":["trace[971830471] 'agreement among raft nodes before linearized reading'  (duration: 199.881341ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-30T18:26:25.553031Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.999427ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-30T18:26:25.553084Z","caller":"traceutil/trace.go:171","msg":"trace[634688648] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1652; }","duration":"145.063243ms","start":"2024-10-30T18:26:25.408012Z","end":"2024-10-30T18:26:25.553075Z","steps":["trace[634688648] 'agreement among raft nodes before linearized reading'  (duration: 144.971173ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-30T18:26:25.553408Z","caller":"traceutil/trace.go:171","msg":"trace[1537023752] transaction","detail":"{read_only:false; response_revision:1652; number_of_response:1; }","duration":"110.018954ms","start":"2024-10-30T18:26:25.443379Z","end":"2024-10-30T18:26:25.553398Z","steps":["trace[1537023752] 'process raft request'  (duration: 109.519049ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-30T18:26:56.202570Z","caller":"traceutil/trace.go:171","msg":"trace[1987309311] linearizableReadLoop","detail":"{readStateIndex:1874; appliedIndex:1873; }","duration":"218.528743ms","start":"2024-10-30T18:26:55.984020Z","end":"2024-10-30T18:26:56.202548Z","steps":["trace[1987309311] 'read index received'  (duration: 215.098744ms)","trace[1987309311] 'applied index is now lower than readState.Index'  (duration: 3.42894ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-30T18:26:56.202775Z","caller":"traceutil/trace.go:171","msg":"trace[1355177454] transaction","detail":"{read_only:false; response_revision:1792; number_of_response:1; }","duration":"257.231128ms","start":"2024-10-30T18:26:55.945535Z","end":"2024-10-30T18:26:56.202766Z","steps":["trace[1355177454] 'process raft request'  (duration: 253.67595ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-30T18:26:56.202955Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"218.926139ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-30T18:26:56.202993Z","caller":"traceutil/trace.go:171","msg":"trace[1030047382] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1792; }","duration":"218.994955ms","start":"2024-10-30T18:26:55.983992Z","end":"2024-10-30T18:26:56.202987Z","steps":["trace[1030047382] 'agreement among raft nodes before linearized reading'  (duration: 218.912831ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-30T18:26:56.203190Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.334501ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshots/default/new-snapshot-demo\" ","response":"range_response_count:1 size:1698"}
	{"level":"info","ts":"2024-10-30T18:26:56.203227Z","caller":"traceutil/trace.go:171","msg":"trace[231693263] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshots/default/new-snapshot-demo; range_end:; response_count:1; response_revision:1792; }","duration":"126.429466ms","start":"2024-10-30T18:26:56.076792Z","end":"2024-10-30T18:26:56.203221Z","steps":["trace[231693263] 'agreement among raft nodes before linearized reading'  (duration: 126.281409ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:31:43 up 9 min,  0 users,  load average: 0.56, 0.75, 0.59
	Linux addons-819803 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [805059b66c5778a497faa6df32264b173a98ee6caeb96db32e0b293cab94ae3b] <==
	I1030 18:24:41.467843       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1030 18:25:31.810771       1 conn.go:339] Error on socket receive: read tcp 192.168.39.211:8443->192.168.39.1:51372: use of closed network connection
	E1030 18:25:32.054856       1 conn.go:339] Error on socket receive: read tcp 192.168.39.211:8443->192.168.39.1:51412: use of closed network connection
	I1030 18:25:41.257654       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.103.228.34"}
	I1030 18:25:52.928553       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1030 18:25:53.966628       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1030 18:25:58.419383       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1030 18:25:58.584375       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.85.39"}
	I1030 18:26:33.409720       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1030 18:26:47.590784       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1030 18:26:56.810697       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1030 18:26:56.810774       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1030 18:26:56.844847       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1030 18:26:56.845012       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1030 18:26:56.882009       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1030 18:26:56.882069       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1030 18:26:56.890526       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1030 18:26:56.891695       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1030 18:26:57.001448       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1030 18:26:57.001520       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1030 18:26:57.890836       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1030 18:26:58.004401       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1030 18:26:58.014103       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1030 18:28:21.270798       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.236.65"}
	E1030 18:28:24.654477       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	
	
	==> kube-controller-manager [3d74745cb948287a4c2cf27f2ba2eb0b19c2c042b7ce81a8795557cefc4bd271] <==
	E1030 18:29:18.166555       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1030 18:29:22.396219       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1030 18:29:22.396362       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1030 18:29:35.907186       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1030 18:29:35.907314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1030 18:29:36.557727       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1030 18:29:36.557827       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1030 18:29:56.960653       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1030 18:29:56.960712       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1030 18:30:00.433789       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1030 18:30:00.433915       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1030 18:30:27.363255       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1030 18:30:27.363316       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1030 18:30:28.695013       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1030 18:30:28.695063       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1030 18:30:35.024510       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1030 18:30:35.024570       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1030 18:30:52.128842       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1030 18:30:52.129017       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1030 18:31:06.029332       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1030 18:31:06.029488       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1030 18:31:07.489558       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1030 18:31:07.489607       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1030 18:31:10.687847       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1030 18:31:10.687955       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [8aabc7e519d19dd85ae69ce724ad72cbeb649f9379e4c3ee94f07ff46313ec53] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1030 18:22:37.146444       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1030 18:22:37.166670       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.211"]
	E1030 18:22:37.166774       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1030 18:22:37.281324       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1030 18:22:37.281424       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1030 18:22:37.281459       1 server_linux.go:169] "Using iptables Proxier"
	I1030 18:22:37.283860       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1030 18:22:37.284181       1 server.go:483] "Version info" version="v1.31.2"
	I1030 18:22:37.284399       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1030 18:22:37.285532       1 config.go:199] "Starting service config controller"
	I1030 18:22:37.285569       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1030 18:22:37.285597       1 config.go:105] "Starting endpoint slice config controller"
	I1030 18:22:37.285601       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1030 18:22:37.286087       1 config.go:328] "Starting node config controller"
	I1030 18:22:37.286100       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1030 18:22:37.387264       1 shared_informer.go:320] Caches are synced for node config
	I1030 18:22:37.387341       1 shared_informer.go:320] Caches are synced for service config
	I1030 18:22:37.387371       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [430e9b4f16ec17dd1e0c8f6f34aad5ddf7b5bb9c5ede50c4087533e9327fa8ac] <==
	W1030 18:22:27.802248       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1030 18:22:27.802276       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1030 18:22:27.802316       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1030 18:22:27.802343       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1030 18:22:27.802450       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1030 18:22:27.803261       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1030 18:22:28.678432       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1030 18:22:28.678490       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1030 18:22:28.726055       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1030 18:22:28.726203       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1030 18:22:28.742669       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1030 18:22:28.742772       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1030 18:22:28.765397       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1030 18:22:28.765501       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1030 18:22:28.860238       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1030 18:22:28.860289       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1030 18:22:28.888181       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1030 18:22:28.888279       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1030 18:22:28.928590       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1030 18:22:28.928646       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1030 18:22:28.968766       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1030 18:22:28.968820       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1030 18:22:29.101911       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1030 18:22:29.101960       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1030 18:22:31.090200       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 30 18:30:30 addons-819803 kubelet[1205]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 30 18:30:30 addons-819803 kubelet[1205]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 30 18:30:30 addons-819803 kubelet[1205]: E1030 18:30:30.649751    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313030649472127,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:30:30 addons-819803 kubelet[1205]: E1030 18:30:30.649774    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313030649472127,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:30:40 addons-819803 kubelet[1205]: I1030 18:30:40.446244    1205 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-sdqnr" secret="" err="secret \"gcp-auth\" not found"
	Oct 30 18:30:40 addons-819803 kubelet[1205]: E1030 18:30:40.652688    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313040652380498,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:30:40 addons-819803 kubelet[1205]: E1030 18:30:40.652884    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313040652380498,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:30:50 addons-819803 kubelet[1205]: E1030 18:30:50.655986    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313050655489255,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:30:50 addons-819803 kubelet[1205]: E1030 18:30:50.656404    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313050655489255,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:31:00 addons-819803 kubelet[1205]: E1030 18:31:00.659314    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313060658904667,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:31:00 addons-819803 kubelet[1205]: E1030 18:31:00.659378    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313060658904667,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:31:10 addons-819803 kubelet[1205]: E1030 18:31:10.661407    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313070661069481,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:31:10 addons-819803 kubelet[1205]: E1030 18:31:10.661742    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313070661069481,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:31:20 addons-819803 kubelet[1205]: E1030 18:31:20.667847    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313080664723118,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:31:20 addons-819803 kubelet[1205]: E1030 18:31:20.667889    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313080664723118,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:31:30 addons-819803 kubelet[1205]: E1030 18:31:30.460725    1205 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 30 18:31:30 addons-819803 kubelet[1205]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 30 18:31:30 addons-819803 kubelet[1205]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 30 18:31:30 addons-819803 kubelet[1205]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 30 18:31:30 addons-819803 kubelet[1205]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 30 18:31:30 addons-819803 kubelet[1205]: E1030 18:31:30.671926    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313090671543985,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:31:30 addons-819803 kubelet[1205]: E1030 18:31:30.671974    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313090671543985,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:31:40 addons-819803 kubelet[1205]: E1030 18:31:40.675192    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313100674828458,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:31:40 addons-819803 kubelet[1205]: E1030 18:31:40.675515    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313100674828458,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:31:41 addons-819803 kubelet[1205]: I1030 18:31:41.446190    1205 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	
	
	==> storage-provisioner [1f46b0ca80854c9d1a66f9fca2789c69b6e2acd673897114ae279a484bcf1a86] <==
	I1030 18:22:43.328375       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1030 18:22:43.448434       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1030 18:22:43.448514       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1030 18:22:43.523347       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1030 18:22:43.525048       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-819803_034d30ff-96f9-417a-8e53-a0e7c92aa4b7!
	I1030 18:22:43.538012       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"36e5771e-0220-43a1-9ab6-cb578de568ee", APIVersion:"v1", ResourceVersion:"692", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-819803_034d30ff-96f9-417a-8e53-a0e7c92aa4b7 became leader
	I1030 18:22:43.625480       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-819803_034d30ff-96f9-417a-8e53-a0e7c92aa4b7!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-819803 -n addons-819803
helpers_test.go:261: (dbg) Run:  kubectl --context addons-819803 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-819803 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (364.17s)

                                                
                                    
TestAddons/StoppedEnableDisable (154.38s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-819803
addons_test.go:170: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-819803: exit status 82 (2m0.46694334s)

                                                
                                                
-- stdout --
	* Stopping node "addons-819803"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:172: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-819803" : exit status 82
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-819803
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-819803: exit status 11 (21.625685282s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.211:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-819803" : exit status 11
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-819803
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-819803: exit status 11 (6.143684091s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.211:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-819803" : exit status 11
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-819803
addons_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-819803: exit status 11 (6.143280875s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.211:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:185: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-819803" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.38s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 node stop m02 -v=7 --alsologtostderr
E1030 18:43:58.220107  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/functional-683899/client.crt: no such file or directory" logger="UnhandledError"
E1030 18:44:39.181553  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/functional-683899/client.crt: no such file or directory" logger="UnhandledError"
E1030 18:45:18.709350  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-174833 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.477771004s)

                                                
                                                
-- stdout --
	* Stopping node "ha-174833-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I1030 18:43:56.737264  404122 out.go:345] Setting OutFile to fd 1 ...
	I1030 18:43:56.737641  404122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 18:43:56.737655  404122 out.go:358] Setting ErrFile to fd 2...
	I1030 18:43:56.737662  404122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 18:43:56.738119  404122 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19883-381834/.minikube/bin
	I1030 18:43:56.738754  404122 mustload.go:65] Loading cluster: ha-174833
	I1030 18:43:56.739450  404122 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:43:56.739473  404122 stop.go:39] StopHost: ha-174833-m02
	I1030 18:43:56.739938  404122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:43:56.740004  404122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:43:56.756907  404122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40037
	I1030 18:43:56.757551  404122 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:43:56.758182  404122 main.go:141] libmachine: Using API Version  1
	I1030 18:43:56.758223  404122 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:43:56.758751  404122 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:43:56.761459  404122 out.go:177] * Stopping node "ha-174833-m02"  ...
	I1030 18:43:56.763175  404122 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1030 18:43:56.763208  404122 main.go:141] libmachine: (ha-174833-m02) Calling .DriverName
	I1030 18:43:56.763468  404122 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1030 18:43:56.763495  404122 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHHostname
	I1030 18:43:56.766458  404122 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:43:56.766959  404122 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:43:56.766992  404122 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:43:56.767167  404122 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHPort
	I1030 18:43:56.767324  404122 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:43:56.767475  404122 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHUsername
	I1030 18:43:56.767656  404122 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02/id_rsa Username:docker}
	I1030 18:43:56.854664  404122 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1030 18:43:56.908441  404122 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1030 18:43:56.964983  404122 main.go:141] libmachine: Stopping "ha-174833-m02"...
	I1030 18:43:56.965021  404122 main.go:141] libmachine: (ha-174833-m02) Calling .GetState
	I1030 18:43:56.966884  404122 main.go:141] libmachine: (ha-174833-m02) Calling .Stop
	I1030 18:43:56.971044  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 0/120
	I1030 18:43:57.972509  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 1/120
	I1030 18:43:58.973742  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 2/120
	I1030 18:43:59.975012  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 3/120
	I1030 18:44:00.977235  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 4/120
	I1030 18:44:01.978988  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 5/120
	I1030 18:44:02.980247  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 6/120
	I1030 18:44:03.981581  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 7/120
	I1030 18:44:04.982788  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 8/120
	I1030 18:44:05.984876  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 9/120
	I1030 18:44:06.987246  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 10/120
	I1030 18:44:07.988549  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 11/120
	I1030 18:44:08.989921  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 12/120
	I1030 18:44:09.991319  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 13/120
	I1030 18:44:10.992677  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 14/120
	I1030 18:44:11.994332  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 15/120
	I1030 18:44:12.995877  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 16/120
	I1030 18:44:13.998239  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 17/120
	I1030 18:44:15.000719  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 18/120
	I1030 18:44:16.002193  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 19/120
	I1030 18:44:17.004204  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 20/120
	I1030 18:44:18.005957  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 21/120
	I1030 18:44:19.007382  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 22/120
	I1030 18:44:20.008930  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 23/120
	I1030 18:44:21.010224  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 24/120
	I1030 18:44:22.012667  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 25/120
	I1030 18:44:23.013928  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 26/120
	I1030 18:44:24.015263  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 27/120
	I1030 18:44:25.016734  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 28/120
	I1030 18:44:26.018015  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 29/120
	I1030 18:44:27.020151  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 30/120
	I1030 18:44:28.021585  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 31/120
	I1030 18:44:29.023086  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 32/120
	I1030 18:44:30.024991  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 33/120
	I1030 18:44:31.026434  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 34/120
	I1030 18:44:32.028697  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 35/120
	I1030 18:44:33.029869  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 36/120
	I1030 18:44:34.031437  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 37/120
	I1030 18:44:35.032776  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 38/120
	I1030 18:44:36.034137  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 39/120
	I1030 18:44:37.036171  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 40/120
	I1030 18:44:38.037513  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 41/120
	I1030 18:44:39.039391  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 42/120
	I1030 18:44:40.041371  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 43/120
	I1030 18:44:41.042858  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 44/120
	I1030 18:44:42.044319  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 45/120
	I1030 18:44:43.046170  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 46/120
	I1030 18:44:44.047796  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 47/120
	I1030 18:44:45.049112  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 48/120
	I1030 18:44:46.050575  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 49/120
	I1030 18:44:47.052719  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 50/120
	I1030 18:44:48.054337  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 51/120
	I1030 18:44:49.055769  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 52/120
	I1030 18:44:50.057765  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 53/120
	I1030 18:44:51.059762  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 54/120
	I1030 18:44:52.061532  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 55/120
	I1030 18:44:53.063706  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 56/120
	I1030 18:44:54.065082  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 57/120
	I1030 18:44:55.066667  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 58/120
	I1030 18:44:56.068284  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 59/120
	I1030 18:44:57.070186  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 60/120
	I1030 18:44:58.071590  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 61/120
	I1030 18:44:59.072865  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 62/120
	I1030 18:45:00.074368  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 63/120
	I1030 18:45:01.075517  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 64/120
	I1030 18:45:02.077469  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 65/120
	I1030 18:45:03.079097  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 66/120
	I1030 18:45:04.081060  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 67/120
	I1030 18:45:05.082635  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 68/120
	I1030 18:45:06.084991  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 69/120
	I1030 18:45:07.086600  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 70/120
	I1030 18:45:08.087778  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 71/120
	I1030 18:45:09.089221  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 72/120
	I1030 18:45:10.090646  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 73/120
	I1030 18:45:11.093209  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 74/120
	I1030 18:45:12.094985  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 75/120
	I1030 18:45:13.096759  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 76/120
	I1030 18:45:14.097936  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 77/120
	I1030 18:45:15.099571  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 78/120
	I1030 18:45:16.100833  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 79/120
	I1030 18:45:17.102532  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 80/120
	I1030 18:45:18.103936  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 81/120
	I1030 18:45:19.105290  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 82/120
	I1030 18:45:20.107235  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 83/120
	I1030 18:45:21.109167  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 84/120
	I1030 18:45:22.111086  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 85/120
	I1030 18:45:23.113104  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 86/120
	I1030 18:45:24.114656  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 87/120
	I1030 18:45:25.115997  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 88/120
	I1030 18:45:26.117505  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 89/120
	I1030 18:45:27.119354  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 90/120
	I1030 18:45:28.121705  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 91/120
	I1030 18:45:29.123158  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 92/120
	I1030 18:45:30.124912  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 93/120
	I1030 18:45:31.126363  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 94/120
	I1030 18:45:32.128671  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 95/120
	I1030 18:45:33.129928  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 96/120
	I1030 18:45:34.131328  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 97/120
	I1030 18:45:35.132942  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 98/120
	I1030 18:45:36.134659  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 99/120
	I1030 18:45:37.136940  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 100/120
	I1030 18:45:38.138345  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 101/120
	I1030 18:45:39.139594  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 102/120
	I1030 18:45:40.141114  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 103/120
	I1030 18:45:41.142514  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 104/120
	I1030 18:45:42.144360  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 105/120
	I1030 18:45:43.145821  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 106/120
	I1030 18:45:44.147051  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 107/120
	I1030 18:45:45.149052  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 108/120
	I1030 18:45:46.150449  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 109/120
	I1030 18:45:47.152621  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 110/120
	I1030 18:45:48.153927  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 111/120
	I1030 18:45:49.155363  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 112/120
	I1030 18:45:50.157117  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 113/120
	I1030 18:45:51.158497  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 114/120
	I1030 18:45:52.159900  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 115/120
	I1030 18:45:53.161274  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 116/120
	I1030 18:45:54.162833  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 117/120
	I1030 18:45:55.164961  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 118/120
	I1030 18:45:56.166215  404122 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 119/120
	I1030 18:45:57.167508  404122 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1030 18:45:57.167665  404122 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-174833 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 status -v=7 --alsologtostderr
E1030 18:46:01.104919  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/functional-683899/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:371: (dbg) Done: out/minikube-linux-amd64 -p ha-174833 status -v=7 --alsologtostderr: (18.895544088s)
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-174833 status -v=7 --alsologtostderr": 
ha_test.go:380: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-174833 status -v=7 --alsologtostderr": 
ha_test.go:383: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-174833 status -v=7 --alsologtostderr": 
ha_test.go:386: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-174833 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-174833 -n ha-174833
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-174833 logs -n 25: (1.407401235s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-174833 cp ha-174833-m03:/home/docker/cp-test.txt                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1983504479/001/cp-test_ha-174833-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-174833 cp ha-174833-m03:/home/docker/cp-test.txt                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833:/home/docker/cp-test_ha-174833-m03_ha-174833.txt                       |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n ha-174833 sudo cat                                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | /home/docker/cp-test_ha-174833-m03_ha-174833.txt                                 |           |         |         |                     |                     |
	| cp      | ha-174833 cp ha-174833-m03:/home/docker/cp-test.txt                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m02:/home/docker/cp-test_ha-174833-m03_ha-174833-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n ha-174833-m02 sudo cat                                          | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | /home/docker/cp-test_ha-174833-m03_ha-174833-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-174833 cp ha-174833-m03:/home/docker/cp-test.txt                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m04:/home/docker/cp-test_ha-174833-m03_ha-174833-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n ha-174833-m04 sudo cat                                          | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | /home/docker/cp-test_ha-174833-m03_ha-174833-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-174833 cp testdata/cp-test.txt                                                | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-174833 cp ha-174833-m04:/home/docker/cp-test.txt                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1983504479/001/cp-test_ha-174833-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-174833 cp ha-174833-m04:/home/docker/cp-test.txt                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833:/home/docker/cp-test_ha-174833-m04_ha-174833.txt                       |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n ha-174833 sudo cat                                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | /home/docker/cp-test_ha-174833-m04_ha-174833.txt                                 |           |         |         |                     |                     |
	| cp      | ha-174833 cp ha-174833-m04:/home/docker/cp-test.txt                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m02:/home/docker/cp-test_ha-174833-m04_ha-174833-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n ha-174833-m02 sudo cat                                          | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | /home/docker/cp-test_ha-174833-m04_ha-174833-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-174833 cp ha-174833-m04:/home/docker/cp-test.txt                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m03:/home/docker/cp-test_ha-174833-m04_ha-174833-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n ha-174833-m03 sudo cat                                          | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | /home/docker/cp-test_ha-174833-m04_ha-174833-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-174833 node stop m02 -v=7                                                     | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/30 18:39:13
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1030 18:39:13.284465  400041 out.go:345] Setting OutFile to fd 1 ...
	I1030 18:39:13.284583  400041 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 18:39:13.284591  400041 out.go:358] Setting ErrFile to fd 2...
	I1030 18:39:13.284596  400041 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 18:39:13.284767  400041 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19883-381834/.minikube/bin
	I1030 18:39:13.285341  400041 out.go:352] Setting JSON to false
	I1030 18:39:13.286279  400041 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":8496,"bootTime":1730305057,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1030 18:39:13.286383  400041 start.go:139] virtualization: kvm guest
	I1030 18:39:13.288640  400041 out.go:177] * [ha-174833] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1030 18:39:13.290653  400041 notify.go:220] Checking for updates...
	I1030 18:39:13.290717  400041 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 18:39:13.292349  400041 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 18:39:13.293858  400041 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 18:39:13.295309  400041 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 18:39:13.296710  400041 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1030 18:39:13.298107  400041 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 18:39:13.299548  400041 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 18:39:13.333903  400041 out.go:177] * Using the kvm2 driver based on user configuration
	I1030 18:39:13.335174  400041 start.go:297] selected driver: kvm2
	I1030 18:39:13.335194  400041 start.go:901] validating driver "kvm2" against <nil>
	I1030 18:39:13.335206  400041 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 18:39:13.335896  400041 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 18:39:13.336007  400041 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19883-381834/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1030 18:39:13.350868  400041 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1030 18:39:13.350946  400041 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1030 18:39:13.351232  400041 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 18:39:13.351271  400041 cni.go:84] Creating CNI manager for ""
	I1030 18:39:13.351324  400041 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1030 18:39:13.351332  400041 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1030 18:39:13.351398  400041 start.go:340] cluster config:
	{Name:ha-174833 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-174833 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 18:39:13.351547  400041 iso.go:125] acquiring lock: {Name:mk4a5115b605a7a5e9069193daa664d721189792 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 18:39:13.353340  400041 out.go:177] * Starting "ha-174833" primary control-plane node in "ha-174833" cluster
	I1030 18:39:13.354531  400041 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 18:39:13.354568  400041 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1030 18:39:13.354580  400041 cache.go:56] Caching tarball of preloaded images
	I1030 18:39:13.354663  400041 preload.go:172] Found /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1030 18:39:13.354676  400041 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1030 18:39:13.355016  400041 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/config.json ...
	I1030 18:39:13.355043  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/config.json: {Name:mkc5b46cd8e85bcdd2d75c56d8807d384c7babe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:39:13.355179  400041 start.go:360] acquireMachinesLock for ha-174833: {Name:mk35e25f53fa8cfadb39ca0ecdccfc2b3fbe845b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 18:39:13.355220  400041 start.go:364] duration metric: took 25.55µs to acquireMachinesLock for "ha-174833"
	I1030 18:39:13.355242  400041 start.go:93] Provisioning new machine with config: &{Name:ha-174833 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-174833 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 18:39:13.355302  400041 start.go:125] createHost starting for "" (driver="kvm2")
	I1030 18:39:13.356866  400041 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1030 18:39:13.357003  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:39:13.357058  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:39:13.371132  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37311
	I1030 18:39:13.371590  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:39:13.372159  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:39:13.372180  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:39:13.372504  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:39:13.372689  400041 main.go:141] libmachine: (ha-174833) Calling .GetMachineName
	I1030 18:39:13.372808  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:39:13.372956  400041 start.go:159] libmachine.API.Create for "ha-174833" (driver="kvm2")
	I1030 18:39:13.372989  400041 client.go:168] LocalClient.Create starting
	I1030 18:39:13.373021  400041 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem
	I1030 18:39:13.373056  400041 main.go:141] libmachine: Decoding PEM data...
	I1030 18:39:13.373078  400041 main.go:141] libmachine: Parsing certificate...
	I1030 18:39:13.373144  400041 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem
	I1030 18:39:13.373168  400041 main.go:141] libmachine: Decoding PEM data...
	I1030 18:39:13.373183  400041 main.go:141] libmachine: Parsing certificate...
	I1030 18:39:13.373207  400041 main.go:141] libmachine: Running pre-create checks...
	I1030 18:39:13.373219  400041 main.go:141] libmachine: (ha-174833) Calling .PreCreateCheck
	I1030 18:39:13.373569  400041 main.go:141] libmachine: (ha-174833) Calling .GetConfigRaw
	I1030 18:39:13.373996  400041 main.go:141] libmachine: Creating machine...
	I1030 18:39:13.374012  400041 main.go:141] libmachine: (ha-174833) Calling .Create
	I1030 18:39:13.374145  400041 main.go:141] libmachine: (ha-174833) Creating KVM machine...
	I1030 18:39:13.375320  400041 main.go:141] libmachine: (ha-174833) DBG | found existing default KVM network
	I1030 18:39:13.375998  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:13.375838  400064 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002111f0}
	I1030 18:39:13.376021  400041 main.go:141] libmachine: (ha-174833) DBG | created network xml: 
	I1030 18:39:13.376034  400041 main.go:141] libmachine: (ha-174833) DBG | <network>
	I1030 18:39:13.376048  400041 main.go:141] libmachine: (ha-174833) DBG |   <name>mk-ha-174833</name>
	I1030 18:39:13.376057  400041 main.go:141] libmachine: (ha-174833) DBG |   <dns enable='no'/>
	I1030 18:39:13.376066  400041 main.go:141] libmachine: (ha-174833) DBG |   
	I1030 18:39:13.376076  400041 main.go:141] libmachine: (ha-174833) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1030 18:39:13.376085  400041 main.go:141] libmachine: (ha-174833) DBG |     <dhcp>
	I1030 18:39:13.376097  400041 main.go:141] libmachine: (ha-174833) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1030 18:39:13.376112  400041 main.go:141] libmachine: (ha-174833) DBG |     </dhcp>
	I1030 18:39:13.376121  400041 main.go:141] libmachine: (ha-174833) DBG |   </ip>
	I1030 18:39:13.376134  400041 main.go:141] libmachine: (ha-174833) DBG |   
	I1030 18:39:13.376145  400041 main.go:141] libmachine: (ha-174833) DBG | </network>
	I1030 18:39:13.376153  400041 main.go:141] libmachine: (ha-174833) DBG | 
	I1030 18:39:13.380994  400041 main.go:141] libmachine: (ha-174833) DBG | trying to create private KVM network mk-ha-174833 192.168.39.0/24...
	I1030 18:39:13.444397  400041 main.go:141] libmachine: (ha-174833) DBG | private KVM network mk-ha-174833 192.168.39.0/24 created
	I1030 18:39:13.444439  400041 main.go:141] libmachine: (ha-174833) Setting up store path in /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833 ...
	I1030 18:39:13.444454  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:13.444367  400064 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 18:39:13.444474  400041 main.go:141] libmachine: (ha-174833) Building disk image from file:///home/jenkins/minikube-integration/19883-381834/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1030 18:39:13.444565  400041 main.go:141] libmachine: (ha-174833) Downloading /home/jenkins/minikube-integration/19883-381834/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19883-381834/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1030 18:39:13.725521  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:13.725350  400064 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa...
	I1030 18:39:13.832228  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:13.832066  400064 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/ha-174833.rawdisk...
	I1030 18:39:13.832262  400041 main.go:141] libmachine: (ha-174833) DBG | Writing magic tar header
	I1030 18:39:13.832279  400041 main.go:141] libmachine: (ha-174833) DBG | Writing SSH key tar header
	I1030 18:39:13.832291  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:13.832203  400064 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833 ...
	I1030 18:39:13.832302  400041 main.go:141] libmachine: (ha-174833) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833
	I1030 18:39:13.832373  400041 main.go:141] libmachine: (ha-174833) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833 (perms=drwx------)
	I1030 18:39:13.832401  400041 main.go:141] libmachine: (ha-174833) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube/machines (perms=drwxr-xr-x)
	I1030 18:39:13.832414  400041 main.go:141] libmachine: (ha-174833) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube/machines
	I1030 18:39:13.832431  400041 main.go:141] libmachine: (ha-174833) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 18:39:13.832442  400041 main.go:141] libmachine: (ha-174833) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834
	I1030 18:39:13.832452  400041 main.go:141] libmachine: (ha-174833) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1030 18:39:13.832462  400041 main.go:141] libmachine: (ha-174833) DBG | Checking permissions on dir: /home/jenkins
	I1030 18:39:13.832473  400041 main.go:141] libmachine: (ha-174833) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube (perms=drwxr-xr-x)
	I1030 18:39:13.832490  400041 main.go:141] libmachine: (ha-174833) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834 (perms=drwxrwxr-x)
	I1030 18:39:13.832506  400041 main.go:141] libmachine: (ha-174833) DBG | Checking permissions on dir: /home
	I1030 18:39:13.832517  400041 main.go:141] libmachine: (ha-174833) DBG | Skipping /home - not owner
	I1030 18:39:13.832528  400041 main.go:141] libmachine: (ha-174833) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1030 18:39:13.832538  400041 main.go:141] libmachine: (ha-174833) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1030 18:39:13.832550  400041 main.go:141] libmachine: (ha-174833) Creating domain...
	I1030 18:39:13.833717  400041 main.go:141] libmachine: (ha-174833) define libvirt domain using xml: 
	I1030 18:39:13.833738  400041 main.go:141] libmachine: (ha-174833) <domain type='kvm'>
	I1030 18:39:13.833744  400041 main.go:141] libmachine: (ha-174833)   <name>ha-174833</name>
	I1030 18:39:13.833752  400041 main.go:141] libmachine: (ha-174833)   <memory unit='MiB'>2200</memory>
	I1030 18:39:13.833758  400041 main.go:141] libmachine: (ha-174833)   <vcpu>2</vcpu>
	I1030 18:39:13.833762  400041 main.go:141] libmachine: (ha-174833)   <features>
	I1030 18:39:13.833766  400041 main.go:141] libmachine: (ha-174833)     <acpi/>
	I1030 18:39:13.833770  400041 main.go:141] libmachine: (ha-174833)     <apic/>
	I1030 18:39:13.833774  400041 main.go:141] libmachine: (ha-174833)     <pae/>
	I1030 18:39:13.833794  400041 main.go:141] libmachine: (ha-174833)     
	I1030 18:39:13.833807  400041 main.go:141] libmachine: (ha-174833)   </features>
	I1030 18:39:13.833814  400041 main.go:141] libmachine: (ha-174833)   <cpu mode='host-passthrough'>
	I1030 18:39:13.833838  400041 main.go:141] libmachine: (ha-174833)   
	I1030 18:39:13.833857  400041 main.go:141] libmachine: (ha-174833)   </cpu>
	I1030 18:39:13.833863  400041 main.go:141] libmachine: (ha-174833)   <os>
	I1030 18:39:13.833868  400041 main.go:141] libmachine: (ha-174833)     <type>hvm</type>
	I1030 18:39:13.833884  400041 main.go:141] libmachine: (ha-174833)     <boot dev='cdrom'/>
	I1030 18:39:13.833888  400041 main.go:141] libmachine: (ha-174833)     <boot dev='hd'/>
	I1030 18:39:13.833904  400041 main.go:141] libmachine: (ha-174833)     <bootmenu enable='no'/>
	I1030 18:39:13.833912  400041 main.go:141] libmachine: (ha-174833)   </os>
	I1030 18:39:13.833917  400041 main.go:141] libmachine: (ha-174833)   <devices>
	I1030 18:39:13.833922  400041 main.go:141] libmachine: (ha-174833)     <disk type='file' device='cdrom'>
	I1030 18:39:13.834007  400041 main.go:141] libmachine: (ha-174833)       <source file='/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/boot2docker.iso'/>
	I1030 18:39:13.834043  400041 main.go:141] libmachine: (ha-174833)       <target dev='hdc' bus='scsi'/>
	I1030 18:39:13.834066  400041 main.go:141] libmachine: (ha-174833)       <readonly/>
	I1030 18:39:13.834080  400041 main.go:141] libmachine: (ha-174833)     </disk>
	I1030 18:39:13.834092  400041 main.go:141] libmachine: (ha-174833)     <disk type='file' device='disk'>
	I1030 18:39:13.834107  400041 main.go:141] libmachine: (ha-174833)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1030 18:39:13.834134  400041 main.go:141] libmachine: (ha-174833)       <source file='/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/ha-174833.rawdisk'/>
	I1030 18:39:13.834146  400041 main.go:141] libmachine: (ha-174833)       <target dev='hda' bus='virtio'/>
	I1030 18:39:13.834163  400041 main.go:141] libmachine: (ha-174833)     </disk>
	I1030 18:39:13.834179  400041 main.go:141] libmachine: (ha-174833)     <interface type='network'>
	I1030 18:39:13.834191  400041 main.go:141] libmachine: (ha-174833)       <source network='mk-ha-174833'/>
	I1030 18:39:13.834199  400041 main.go:141] libmachine: (ha-174833)       <model type='virtio'/>
	I1030 18:39:13.834204  400041 main.go:141] libmachine: (ha-174833)     </interface>
	I1030 18:39:13.834213  400041 main.go:141] libmachine: (ha-174833)     <interface type='network'>
	I1030 18:39:13.834219  400041 main.go:141] libmachine: (ha-174833)       <source network='default'/>
	I1030 18:39:13.834228  400041 main.go:141] libmachine: (ha-174833)       <model type='virtio'/>
	I1030 18:39:13.834233  400041 main.go:141] libmachine: (ha-174833)     </interface>
	I1030 18:39:13.834244  400041 main.go:141] libmachine: (ha-174833)     <serial type='pty'>
	I1030 18:39:13.834261  400041 main.go:141] libmachine: (ha-174833)       <target port='0'/>
	I1030 18:39:13.834275  400041 main.go:141] libmachine: (ha-174833)     </serial>
	I1030 18:39:13.834287  400041 main.go:141] libmachine: (ha-174833)     <console type='pty'>
	I1030 18:39:13.834295  400041 main.go:141] libmachine: (ha-174833)       <target type='serial' port='0'/>
	I1030 18:39:13.834310  400041 main.go:141] libmachine: (ha-174833)     </console>
	I1030 18:39:13.834320  400041 main.go:141] libmachine: (ha-174833)     <rng model='virtio'>
	I1030 18:39:13.834333  400041 main.go:141] libmachine: (ha-174833)       <backend model='random'>/dev/random</backend>
	I1030 18:39:13.834342  400041 main.go:141] libmachine: (ha-174833)     </rng>
	I1030 18:39:13.834351  400041 main.go:141] libmachine: (ha-174833)     
	I1030 18:39:13.834359  400041 main.go:141] libmachine: (ha-174833)     
	I1030 18:39:13.834368  400041 main.go:141] libmachine: (ha-174833)   </devices>
	I1030 18:39:13.834377  400041 main.go:141] libmachine: (ha-174833) </domain>
	I1030 18:39:13.834388  400041 main.go:141] libmachine: (ha-174833) 
	I1030 18:39:13.838852  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:67:40:5d in network default
	I1030 18:39:13.839421  400041 main.go:141] libmachine: (ha-174833) Ensuring networks are active...
	I1030 18:39:13.839441  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:13.840041  400041 main.go:141] libmachine: (ha-174833) Ensuring network default is active
	I1030 18:39:13.840342  400041 main.go:141] libmachine: (ha-174833) Ensuring network mk-ha-174833 is active
	I1030 18:39:13.840783  400041 main.go:141] libmachine: (ha-174833) Getting domain xml...
	I1030 18:39:13.841490  400041 main.go:141] libmachine: (ha-174833) Creating domain...
	I1030 18:39:15.028258  400041 main.go:141] libmachine: (ha-174833) Waiting to get IP...
	I1030 18:39:15.029201  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:15.029564  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:15.029614  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:15.029561  400064 retry.go:31] will retry after 241.896456ms: waiting for machine to come up
	I1030 18:39:15.272995  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:15.273461  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:15.273488  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:15.273413  400064 retry.go:31] will retry after 260.838664ms: waiting for machine to come up
	I1030 18:39:15.535845  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:15.536295  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:15.536316  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:15.536255  400064 retry.go:31] will retry after 479.733534ms: waiting for machine to come up
	I1030 18:39:16.017897  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:16.018269  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:16.018294  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:16.018228  400064 retry.go:31] will retry after 392.371571ms: waiting for machine to come up
	I1030 18:39:16.412626  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:16.413050  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:16.413080  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:16.412991  400064 retry.go:31] will retry after 692.689396ms: waiting for machine to come up
	I1030 18:39:17.106954  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:17.107478  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:17.107955  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:17.107422  400064 retry.go:31] will retry after 832.987361ms: waiting for machine to come up
	I1030 18:39:17.942300  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:17.942709  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:17.942756  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:17.942670  400064 retry.go:31] will retry after 1.191938703s: waiting for machine to come up
	I1030 18:39:19.135752  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:19.136105  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:19.136132  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:19.136082  400064 retry.go:31] will retry after 978.475739ms: waiting for machine to come up
	I1030 18:39:20.116239  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:20.116734  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:20.116762  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:20.116673  400064 retry.go:31] will retry after 1.671512667s: waiting for machine to come up
	I1030 18:39:21.790628  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:21.791129  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:21.791157  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:21.791069  400064 retry.go:31] will retry after 2.145808112s: waiting for machine to come up
	I1030 18:39:23.938308  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:23.938724  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:23.938750  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:23.938677  400064 retry.go:31] will retry after 2.206607406s: waiting for machine to come up
	I1030 18:39:26.148104  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:26.148464  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:26.148498  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:26.148437  400064 retry.go:31] will retry after 3.57155807s: waiting for machine to come up
	I1030 18:39:29.721895  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:29.722283  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:29.722306  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:29.722235  400064 retry.go:31] will retry after 4.087469223s: waiting for machine to come up
	I1030 18:39:33.811039  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:33.811489  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has current primary IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:33.811515  400041 main.go:141] libmachine: (ha-174833) Found IP for machine: 192.168.39.141
	I1030 18:39:33.811524  400041 main.go:141] libmachine: (ha-174833) Reserving static IP address...
	I1030 18:39:33.811821  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find host DHCP lease matching {name: "ha-174833", mac: "52:54:00:fd:5e:ca", ip: "192.168.39.141"} in network mk-ha-174833
	I1030 18:39:33.884143  400041 main.go:141] libmachine: (ha-174833) Reserved static IP address: 192.168.39.141
	I1030 18:39:33.884173  400041 main.go:141] libmachine: (ha-174833) DBG | Getting to WaitForSSH function...
	I1030 18:39:33.884180  400041 main.go:141] libmachine: (ha-174833) Waiting for SSH to be available...
	I1030 18:39:33.886594  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:33.886971  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:33.886995  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:33.887140  400041 main.go:141] libmachine: (ha-174833) DBG | Using SSH client type: external
	I1030 18:39:33.887229  400041 main.go:141] libmachine: (ha-174833) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa (-rw-------)
	I1030 18:39:33.887264  400041 main.go:141] libmachine: (ha-174833) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.141 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 18:39:33.887276  400041 main.go:141] libmachine: (ha-174833) DBG | About to run SSH command:
	I1030 18:39:33.887284  400041 main.go:141] libmachine: (ha-174833) DBG | exit 0
	I1030 18:39:34.010284  400041 main.go:141] libmachine: (ha-174833) DBG | SSH cmd err, output: <nil>: 
	I1030 18:39:34.010612  400041 main.go:141] libmachine: (ha-174833) KVM machine creation complete!
	I1030 18:39:34.010940  400041 main.go:141] libmachine: (ha-174833) Calling .GetConfigRaw
	I1030 18:39:34.011543  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:39:34.011721  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:39:34.011891  400041 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1030 18:39:34.011905  400041 main.go:141] libmachine: (ha-174833) Calling .GetState
	I1030 18:39:34.013168  400041 main.go:141] libmachine: Detecting operating system of created instance...
	I1030 18:39:34.013181  400041 main.go:141] libmachine: Waiting for SSH to be available...
	I1030 18:39:34.013186  400041 main.go:141] libmachine: Getting to WaitForSSH function...
	I1030 18:39:34.013192  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:34.015485  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.015842  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:34.015874  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.015997  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:34.016168  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:34.016323  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:34.016452  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:34.016738  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:39:34.016961  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1030 18:39:34.016974  400041 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1030 18:39:34.117708  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 18:39:34.117732  400041 main.go:141] libmachine: Detecting the provisioner...
	I1030 18:39:34.117739  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:34.120384  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.120816  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:34.120860  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.120990  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:34.121177  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:34.121322  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:34.121422  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:34.121534  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:39:34.121721  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1030 18:39:34.121734  400041 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1030 18:39:34.222936  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1030 18:39:34.223027  400041 main.go:141] libmachine: found compatible host: buildroot
	I1030 18:39:34.223040  400041 main.go:141] libmachine: Provisioning with buildroot...
	I1030 18:39:34.223052  400041 main.go:141] libmachine: (ha-174833) Calling .GetMachineName
	I1030 18:39:34.223321  400041 buildroot.go:166] provisioning hostname "ha-174833"
	I1030 18:39:34.223356  400041 main.go:141] libmachine: (ha-174833) Calling .GetMachineName
	I1030 18:39:34.223546  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:34.225998  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.226300  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:34.226323  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.226503  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:34.226662  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:34.226803  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:34.226914  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:34.227040  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:39:34.227266  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1030 18:39:34.227279  400041 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-174833 && echo "ha-174833" | sudo tee /etc/hostname
	I1030 18:39:34.340995  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-174833
	
	I1030 18:39:34.341029  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:34.343841  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.344138  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:34.344167  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.344368  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:34.344558  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:34.344679  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:34.344790  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:34.344900  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:39:34.345070  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1030 18:39:34.345090  400041 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-174833' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-174833/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-174833' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 18:39:34.455073  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 18:39:34.455103  400041 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 18:39:34.455126  400041 buildroot.go:174] setting up certificates
	I1030 18:39:34.455146  400041 provision.go:84] configureAuth start
	I1030 18:39:34.455156  400041 main.go:141] libmachine: (ha-174833) Calling .GetMachineName
	I1030 18:39:34.455453  400041 main.go:141] libmachine: (ha-174833) Calling .GetIP
	I1030 18:39:34.458160  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.458507  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:34.458546  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.458737  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:34.461111  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.461454  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:34.461482  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.461548  400041 provision.go:143] copyHostCerts
	I1030 18:39:34.461581  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 18:39:34.461633  400041 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem, removing ...
	I1030 18:39:34.461648  400041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 18:39:34.461713  400041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 18:39:34.461793  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 18:39:34.461811  400041 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem, removing ...
	I1030 18:39:34.461816  400041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 18:39:34.461840  400041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 18:39:34.461880  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 18:39:34.461896  400041 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem, removing ...
	I1030 18:39:34.461902  400041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 18:39:34.461922  400041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 18:39:34.461968  400041 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.ha-174833 san=[127.0.0.1 192.168.39.141 ha-174833 localhost minikube]
	I1030 18:39:34.715502  400041 provision.go:177] copyRemoteCerts
	I1030 18:39:34.715567  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 18:39:34.715593  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:34.718337  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.718724  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:34.718750  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.718905  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:34.719124  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:34.719316  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:34.719438  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:39:34.802134  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1030 18:39:34.802247  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1030 18:39:34.830405  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1030 18:39:34.830495  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 18:39:34.853312  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1030 18:39:34.853400  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1030 18:39:34.876622  400041 provision.go:87] duration metric: took 421.460858ms to configureAuth
	I1030 18:39:34.876654  400041 buildroot.go:189] setting minikube options for container-runtime
	I1030 18:39:34.876860  400041 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:39:34.876973  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:34.879465  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.879875  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:34.879918  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.880033  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:34.880249  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:34.880401  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:34.880547  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:34.880711  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:39:34.880901  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1030 18:39:34.880922  400041 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 18:39:35.107739  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 18:39:35.107767  400041 main.go:141] libmachine: Checking connection to Docker...
	I1030 18:39:35.107789  400041 main.go:141] libmachine: (ha-174833) Calling .GetURL
	I1030 18:39:35.109044  400041 main.go:141] libmachine: (ha-174833) DBG | Using libvirt version 6000000
	I1030 18:39:35.111179  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.111531  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:35.111555  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.111678  400041 main.go:141] libmachine: Docker is up and running!
	I1030 18:39:35.111690  400041 main.go:141] libmachine: Reticulating splines...
	I1030 18:39:35.111698  400041 client.go:171] duration metric: took 21.738698891s to LocalClient.Create
	I1030 18:39:35.111719  400041 start.go:167] duration metric: took 21.738765345s to libmachine.API.Create "ha-174833"
	I1030 18:39:35.111730  400041 start.go:293] postStartSetup for "ha-174833" (driver="kvm2")
	I1030 18:39:35.111740  400041 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 18:39:35.111756  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:39:35.111994  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 18:39:35.112024  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:35.114247  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.114535  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:35.114564  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.114645  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:35.114802  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:35.114905  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:35.115037  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:39:35.197105  400041 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 18:39:35.201419  400041 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 18:39:35.201446  400041 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 18:39:35.201521  400041 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 18:39:35.201638  400041 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 18:39:35.201653  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> /etc/ssl/certs/3891442.pem
	I1030 18:39:35.201776  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 18:39:35.211530  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 18:39:35.234121  400041 start.go:296] duration metric: took 122.377861ms for postStartSetup
	I1030 18:39:35.234182  400041 main.go:141] libmachine: (ha-174833) Calling .GetConfigRaw
	I1030 18:39:35.234814  400041 main.go:141] libmachine: (ha-174833) Calling .GetIP
	I1030 18:39:35.237333  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.237649  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:35.237675  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.237930  400041 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/config.json ...
	I1030 18:39:35.238105  400041 start.go:128] duration metric: took 21.882791468s to createHost
	I1030 18:39:35.238129  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:35.240449  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.240793  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:35.240819  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.240925  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:35.241105  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:35.241241  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:35.241360  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:35.241504  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:39:35.241675  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1030 18:39:35.241684  400041 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 18:39:35.343143  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730313575.316321849
	
	I1030 18:39:35.343172  400041 fix.go:216] guest clock: 1730313575.316321849
	I1030 18:39:35.343179  400041 fix.go:229] Guest: 2024-10-30 18:39:35.316321849 +0000 UTC Remote: 2024-10-30 18:39:35.238116722 +0000 UTC m=+21.992904276 (delta=78.205127ms)
	I1030 18:39:35.343224  400041 fix.go:200] guest clock delta is within tolerance: 78.205127ms
	I1030 18:39:35.343236  400041 start.go:83] releasing machines lock for "ha-174833", held for 21.988006549s
	I1030 18:39:35.343264  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:39:35.343537  400041 main.go:141] libmachine: (ha-174833) Calling .GetIP
	I1030 18:39:35.345918  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.346202  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:35.346227  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.346384  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:39:35.346845  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:39:35.347029  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:39:35.347110  400041 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 18:39:35.347154  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:35.347263  400041 ssh_runner.go:195] Run: cat /version.json
	I1030 18:39:35.347290  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:35.349953  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.350154  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.350349  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:35.350372  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.350476  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:35.350518  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.350532  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:35.350712  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:35.350796  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:35.350983  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:35.351005  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:35.351121  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:39:35.351158  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:35.351287  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:39:35.446752  400041 ssh_runner.go:195] Run: systemctl --version
	I1030 18:39:35.452799  400041 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 18:39:35.607404  400041 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 18:39:35.613689  400041 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 18:39:35.613765  400041 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 18:39:35.629322  400041 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 18:39:35.629356  400041 start.go:495] detecting cgroup driver to use...
	I1030 18:39:35.629426  400041 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 18:39:35.645369  400041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 18:39:35.659484  400041 docker.go:217] disabling cri-docker service (if available) ...
	I1030 18:39:35.659560  400041 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 18:39:35.673617  400041 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 18:39:35.686829  400041 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 18:39:35.798982  400041 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 18:39:35.961093  400041 docker.go:233] disabling docker service ...
	I1030 18:39:35.961203  400041 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 18:39:35.975451  400041 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 18:39:35.987814  400041 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 18:39:36.096019  400041 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 18:39:36.200364  400041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 18:39:36.213767  400041 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 18:39:36.231649  400041 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1030 18:39:36.231720  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:39:36.241504  400041 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 18:39:36.241612  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:39:36.251200  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:39:36.260995  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:39:36.270677  400041 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 18:39:36.280585  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:39:36.290337  400041 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:39:36.306289  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:39:36.316095  400041 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 18:39:36.325059  400041 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 18:39:36.325116  400041 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 18:39:36.338276  400041 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 18:39:36.347428  400041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 18:39:36.458431  400041 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1030 18:39:36.549399  400041 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 18:39:36.549481  400041 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 18:39:36.554177  400041 start.go:563] Will wait 60s for crictl version
	I1030 18:39:36.554235  400041 ssh_runner.go:195] Run: which crictl
	I1030 18:39:36.557819  400041 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 18:39:36.597751  400041 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1030 18:39:36.597863  400041 ssh_runner.go:195] Run: crio --version
	I1030 18:39:36.625326  400041 ssh_runner.go:195] Run: crio --version
	I1030 18:39:36.656926  400041 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1030 18:39:36.658453  400041 main.go:141] libmachine: (ha-174833) Calling .GetIP
	I1030 18:39:36.661076  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:36.661520  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:36.661551  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:36.661753  400041 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1030 18:39:36.665623  400041 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 18:39:36.678283  400041 kubeadm.go:883] updating cluster {Name:ha-174833 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-174833 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1030 18:39:36.678415  400041 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 18:39:36.678476  400041 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 18:39:36.710390  400041 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1030 18:39:36.710476  400041 ssh_runner.go:195] Run: which lz4
	I1030 18:39:36.714335  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1030 18:39:36.714421  400041 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1030 18:39:36.718401  400041 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1030 18:39:36.718426  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1030 18:39:37.991420  400041 crio.go:462] duration metric: took 1.277020496s to copy over tarball
	I1030 18:39:37.991500  400041 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1030 18:39:40.058678  400041 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.067148582s)
	I1030 18:39:40.058707  400041 crio.go:469] duration metric: took 2.067258506s to extract the tarball
	I1030 18:39:40.058717  400041 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1030 18:39:40.095680  400041 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 18:39:40.139024  400041 crio.go:514] all images are preloaded for cri-o runtime.
	I1030 18:39:40.139051  400041 cache_images.go:84] Images are preloaded, skipping loading
	I1030 18:39:40.139060  400041 kubeadm.go:934] updating node { 192.168.39.141 8443 v1.31.2 crio true true} ...
	I1030 18:39:40.139194  400041 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-174833 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.141
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-174833 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1030 18:39:40.139268  400041 ssh_runner.go:195] Run: crio config
	I1030 18:39:40.182736  400041 cni.go:84] Creating CNI manager for ""
	I1030 18:39:40.182762  400041 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1030 18:39:40.182776  400041 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1030 18:39:40.182809  400041 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.141 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-174833 NodeName:ha-174833 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.141"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.141 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1030 18:39:40.182965  400041 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.141
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-174833"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.141"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.141"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1030 18:39:40.182991  400041 kube-vip.go:115] generating kube-vip config ...
	I1030 18:39:40.183041  400041 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1030 18:39:40.198922  400041 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1030 18:39:40.199067  400041 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1030 18:39:40.199141  400041 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1030 18:39:40.208739  400041 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 18:39:40.208814  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1030 18:39:40.217747  400041 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1030 18:39:40.233431  400041 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 18:39:40.249487  400041 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1030 18:39:40.265703  400041 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1030 18:39:40.282041  400041 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1030 18:39:40.285892  400041 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 18:39:40.297652  400041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 18:39:40.407338  400041 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 18:39:40.424747  400041 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833 for IP: 192.168.39.141
	I1030 18:39:40.424777  400041 certs.go:194] generating shared ca certs ...
	I1030 18:39:40.424817  400041 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:39:40.425024  400041 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 18:39:40.425082  400041 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 18:39:40.425095  400041 certs.go:256] generating profile certs ...
	I1030 18:39:40.425172  400041 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.key
	I1030 18:39:40.425193  400041 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.crt with IP's: []
	I1030 18:39:40.472361  400041 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.crt ...
	I1030 18:39:40.472390  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.crt: {Name:mkc5230ad33247edd4a8c72c6c48a87fa9cedd3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:39:40.472564  400041 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.key ...
	I1030 18:39:40.472575  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.key: {Name:mk2476b29598bb2a9232a00c23240eb0f41fcc47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:39:40.472659  400041 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.8a55aae0
	I1030 18:39:40.472675  400041 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.8a55aae0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.141 192.168.39.254]
	I1030 18:39:40.623668  400041 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.8a55aae0 ...
	I1030 18:39:40.623703  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.8a55aae0: {Name:mk527af1a36a41edb105de0ac73afcba6a07951e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:39:40.623865  400041 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.8a55aae0 ...
	I1030 18:39:40.623878  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.8a55aae0: {Name:mk9d3db1edca5a6647774a57300dfc12ee759cac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:39:40.623943  400041 certs.go:381] copying /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.8a55aae0 -> /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt
	I1030 18:39:40.624014  400041 certs.go:385] copying /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.8a55aae0 -> /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key
	I1030 18:39:40.624064  400041 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key
	I1030 18:39:40.624080  400041 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.crt with IP's: []
	I1030 18:39:40.681800  400041 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.crt ...
	I1030 18:39:40.681833  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.crt: {Name:mke6c9a4a487817027f382c9db962d8a5023b692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:39:40.681991  400041 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key ...
	I1030 18:39:40.682001  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key: {Name:mkcef517ac3b25f9738ab0dc212031ff215f0337 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
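
The profile certificates generated above (client, apiserver, aggregator/proxy-client) are ordinary CA-signed X.509 certs; the apiserver one carries the IP SANs listed at 18:39:40.472675. A minimal Go sketch of that pattern, using only crypto/x509 — this is an illustration, not minikube's actual crypto.go, and the CA here is generated in-memory rather than loaded from .minikube/ca.crt:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Stand-in CA; minikube would load ca.crt/ca.key from .minikube instead.
    	// Error handling elided for brevity.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Leaf cert signed by the CA, with the apiserver IP SANs seen in the log.
    	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	leafTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth, x509.ExtKeyUsageClientAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
    			net.ParseIP("192.168.39.141"), net.ParseIP("192.168.39.254"),
    		},
    	}
    	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
    }
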
	I1030 18:39:40.682069  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1030 18:39:40.682086  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1030 18:39:40.682097  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1030 18:39:40.682118  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1030 18:39:40.682131  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1030 18:39:40.682142  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1030 18:39:40.682154  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1030 18:39:40.682166  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1030 18:39:40.682213  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem (1338 bytes)
	W1030 18:39:40.682246  400041 certs.go:480] ignoring /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144_empty.pem, impossibly tiny 0 bytes
	I1030 18:39:40.682256  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 18:39:40.682279  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 18:39:40.682301  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 18:39:40.682325  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 18:39:40.682365  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 18:39:40.682398  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> /usr/share/ca-certificates/3891442.pem
	I1030 18:39:40.682412  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:39:40.682432  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem -> /usr/share/ca-certificates/389144.pem
	I1030 18:39:40.683028  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 18:39:40.708651  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 18:39:40.731313  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 18:39:40.753734  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 18:39:40.776131  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1030 18:39:40.799436  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1030 18:39:40.822746  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 18:39:40.845786  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1030 18:39:40.869789  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /usr/share/ca-certificates/3891442.pem (1708 bytes)
	I1030 18:39:40.893594  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 18:39:40.916381  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem --> /usr/share/ca-certificates/389144.pem (1338 bytes)
	I1030 18:39:40.939683  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1030 18:39:40.956310  400041 ssh_runner.go:195] Run: openssl version
	I1030 18:39:40.962024  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3891442.pem && ln -fs /usr/share/ca-certificates/3891442.pem /etc/ssl/certs/3891442.pem"
	I1030 18:39:40.972261  400041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3891442.pem
	I1030 18:39:40.976598  400041 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 18:39:40.976650  400041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3891442.pem
	I1030 18:39:40.982403  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3891442.pem /etc/ssl/certs/3ec20f2e.0"
	I1030 18:39:40.992755  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 18:39:41.003221  400041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:39:41.007653  400041 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:39:41.007709  400041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:39:41.013218  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 18:39:41.023594  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/389144.pem && ln -fs /usr/share/ca-certificates/389144.pem /etc/ssl/certs/389144.pem"
	I1030 18:39:41.033911  400041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/389144.pem
	I1030 18:39:41.038607  400041 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 18:39:41.038673  400041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/389144.pem
	I1030 18:39:41.044095  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/389144.pem /etc/ssl/certs/51391683.0"
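
The hash/symlink steps above are how OpenSSL-based clients locate a trusted CA: each PEM copied to /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named <subject-hash>.0, where the hash comes from `openssl x509 -hash -noout`. A small sketch of the same step as a standalone program (assumes openssl is on PATH; equivalent to the ln -fs commands in the log):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash computes the OpenSSL subject hash of a PEM certificate
    // and creates the <hash>.0 symlink that the cert lookup path expects.
    func linkBySubjectHash(pemPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem above
    	link := filepath.Join(certsDir, hash+".0")
    	_ = os.Remove(link) // mimic `ln -fs`: replace any existing link
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
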
	I1030 18:39:41.054143  400041 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 18:39:41.058096  400041 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1030 18:39:41.058161  400041 kubeadm.go:392] StartCluster: {Name:ha-174833 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-174833 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 18:39:41.058251  400041 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1030 18:39:41.058301  400041 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 18:39:41.095584  400041 cri.go:89] found id: ""
	I1030 18:39:41.095649  400041 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1030 18:39:41.105071  400041 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 18:39:41.114164  400041 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 18:39:41.122895  400041 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 18:39:41.122908  400041 kubeadm.go:157] found existing configuration files:
	
	I1030 18:39:41.122941  400041 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 18:39:41.131529  400041 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 18:39:41.131566  400041 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 18:39:41.140275  400041 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 18:39:41.148757  400041 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 18:39:41.148813  400041 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 18:39:41.160794  400041 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 18:39:41.184302  400041 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 18:39:41.184383  400041 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 18:39:41.207263  400041 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 18:39:41.228026  400041 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 18:39:41.228102  400041 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1030 18:39:41.237111  400041 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1030 18:39:41.445375  400041 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1030 18:39:52.585541  400041 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1030 18:39:52.585616  400041 kubeadm.go:310] [preflight] Running pre-flight checks
	I1030 18:39:52.585710  400041 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1030 18:39:52.585832  400041 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1030 18:39:52.585956  400041 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1030 18:39:52.586025  400041 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1030 18:39:52.587620  400041 out.go:235]   - Generating certificates and keys ...
	I1030 18:39:52.587688  400041 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1030 18:39:52.587761  400041 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1030 18:39:52.587836  400041 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1030 18:39:52.587896  400041 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1030 18:39:52.587987  400041 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1030 18:39:52.588061  400041 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1030 18:39:52.588139  400041 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1030 18:39:52.588270  400041 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-174833 localhost] and IPs [192.168.39.141 127.0.0.1 ::1]
	I1030 18:39:52.588347  400041 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1030 18:39:52.588511  400041 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-174833 localhost] and IPs [192.168.39.141 127.0.0.1 ::1]
	I1030 18:39:52.588616  400041 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1030 18:39:52.588707  400041 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1030 18:39:52.588773  400041 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1030 18:39:52.588839  400041 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1030 18:39:52.588887  400041 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1030 18:39:52.588932  400041 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1030 18:39:52.589004  400041 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1030 18:39:52.589094  400041 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1030 18:39:52.589146  400041 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1030 18:39:52.589229  400041 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1030 18:39:52.589332  400041 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1030 18:39:52.590758  400041 out.go:235]   - Booting up control plane ...
	I1030 18:39:52.590844  400041 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1030 18:39:52.590916  400041 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1030 18:39:52.590968  400041 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1030 18:39:52.591065  400041 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1030 18:39:52.591191  400041 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1030 18:39:52.591253  400041 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1030 18:39:52.591410  400041 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1030 18:39:52.591536  400041 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1030 18:39:52.591616  400041 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.003124871s
	I1030 18:39:52.591709  400041 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1030 18:39:52.591794  400041 kubeadm.go:310] [api-check] The API server is healthy after 5.662047877s
	I1030 18:39:52.591944  400041 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1030 18:39:52.592125  400041 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1030 18:39:52.592192  400041 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1030 18:39:52.592401  400041 kubeadm.go:310] [mark-control-plane] Marking the node ha-174833 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1030 18:39:52.592456  400041 kubeadm.go:310] [bootstrap-token] Using token: g2rz2p.8nzvncljb4xmvqws
	I1030 18:39:52.593760  400041 out.go:235]   - Configuring RBAC rules ...
	I1030 18:39:52.593856  400041 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1030 18:39:52.593940  400041 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1030 18:39:52.594118  400041 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1030 18:39:52.594304  400041 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1030 18:39:52.594473  400041 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1030 18:39:52.594624  400041 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1030 18:39:52.594785  400041 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1030 18:39:52.594849  400041 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1030 18:39:52.594921  400041 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1030 18:39:52.594940  400041 kubeadm.go:310] 
	I1030 18:39:52.594996  400041 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1030 18:39:52.595002  400041 kubeadm.go:310] 
	I1030 18:39:52.595066  400041 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1030 18:39:52.595072  400041 kubeadm.go:310] 
	I1030 18:39:52.595106  400041 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1030 18:39:52.595167  400041 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1030 18:39:52.595211  400041 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1030 18:39:52.595217  400041 kubeadm.go:310] 
	I1030 18:39:52.595262  400041 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1030 18:39:52.595268  400041 kubeadm.go:310] 
	I1030 18:39:52.595323  400041 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1030 18:39:52.595331  400041 kubeadm.go:310] 
	I1030 18:39:52.595374  400041 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1030 18:39:52.595436  400041 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1030 18:39:52.595501  400041 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1030 18:39:52.595508  400041 kubeadm.go:310] 
	I1030 18:39:52.595599  400041 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1030 18:39:52.595699  400041 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1030 18:39:52.595708  400041 kubeadm.go:310] 
	I1030 18:39:52.595831  400041 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token g2rz2p.8nzvncljb4xmvqws \
	I1030 18:39:52.595945  400041 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b2143b9018fc79be6f35b8981f64b262448bd09f7227405c8dafa29481e81491 \
	I1030 18:39:52.595970  400041 kubeadm.go:310] 	--control-plane 
	I1030 18:39:52.595975  400041 kubeadm.go:310] 
	I1030 18:39:52.596043  400041 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1030 18:39:52.596049  400041 kubeadm.go:310] 
	I1030 18:39:52.596119  400041 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token g2rz2p.8nzvncljb4xmvqws \
	I1030 18:39:52.596231  400041 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b2143b9018fc79be6f35b8981f64b262448bd09f7227405c8dafa29481e81491 
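
For reference, the --discovery-token-ca-cert-hash value printed in the join commands above is the SHA-256 of the cluster CA certificate's Subject Public Key Info (the same value `kubeadm token create --print-join-command` emits). A short sketch of recomputing it from the ca.crt copied earlier in this log:

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/hex"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	// Path as used in the scp step above.
    	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo, not the whole cert.
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }
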
	I1030 18:39:52.596243  400041 cni.go:84] Creating CNI manager for ""
	I1030 18:39:52.596250  400041 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1030 18:39:52.597696  400041 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1030 18:39:52.598955  400041 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1030 18:39:52.605469  400041 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1030 18:39:52.605483  400041 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1030 18:39:52.624363  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1030 18:39:53.005173  400041 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1030 18:39:53.005262  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:39:53.005262  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-174833 minikube.k8s.io/updated_at=2024_10_30T18_39_53_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0 minikube.k8s.io/name=ha-174833 minikube.k8s.io/primary=true
	I1030 18:39:53.173403  400041 ops.go:34] apiserver oom_adj: -16
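
The oom_adj probe above confirms the API server is shielded from the kernel OOM killer (-16). A minimal standalone sketch of the same check (hypothetical; minikube does this over ssh_runner rather than locally):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("pgrep", "kube-apiserver").Output()
    	if err != nil {
    		panic(err)
    	}
    	pid := strings.Fields(string(out))[0] // take the first PID if several match
    	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("apiserver oom_adj: %s", adj)
    }
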
	I1030 18:39:53.173409  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:39:53.674475  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:39:54.173792  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:39:54.673541  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:39:55.174225  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:39:55.674171  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:39:55.765485  400041 kubeadm.go:1113] duration metric: took 2.760286908s to wait for elevateKubeSystemPrivileges
	I1030 18:39:55.765536  400041 kubeadm.go:394] duration metric: took 14.707379512s to StartCluster
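
The repeated `kubectl get sa default` calls above are minikube polling until the default ServiceAccount exists, so the minikube-rbac cluster-admin binding created at 18:39:53 can take effect before StartCluster is declared done. A rough sketch of that polling loop, using the kubectl binary and kubeconfig paths from the log (the 500ms interval and 20-attempt cap are illustrative values, not minikube's actual settings):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	kubectl := "/var/lib/minikube/binaries/v1.31.2/kubectl"
    	args := []string{"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig"}
    	for i := 0; i < 20; i++ {
    		// Succeeds only once the default service account has been created.
    		if err := exec.Command(kubectl, args...).Run(); err == nil {
    			fmt.Println("default service account is ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for the default service account")
    }
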
	I1030 18:39:55.765560  400041 settings.go:142] acquiring lock: {Name:mk3778f95b8c1775512fd8244c173f81fde2f8a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:39:55.765652  400041 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 18:39:55.766341  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/kubeconfig: {Name:mkad9ac9beb9742fc1a420e6d4412db8b65962e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:39:55.766618  400041 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1030 18:39:55.766613  400041 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 18:39:55.766643  400041 start.go:241] waiting for startup goroutines ...
	I1030 18:39:55.766652  400041 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1030 18:39:55.766742  400041 addons.go:69] Setting storage-provisioner=true in profile "ha-174833"
	I1030 18:39:55.766762  400041 addons.go:234] Setting addon storage-provisioner=true in "ha-174833"
	I1030 18:39:55.766765  400041 addons.go:69] Setting default-storageclass=true in profile "ha-174833"
	I1030 18:39:55.766787  400041 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-174833"
	I1030 18:39:55.766793  400041 host.go:66] Checking if "ha-174833" exists ...
	I1030 18:39:55.766837  400041 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:39:55.767201  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:39:55.767204  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:39:55.767229  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:39:55.767233  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:39:55.782451  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33151
	I1030 18:39:55.783028  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:39:55.783605  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:39:55.783632  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:39:55.783733  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40119
	I1030 18:39:55.784013  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:39:55.784063  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:39:55.784233  400041 main.go:141] libmachine: (ha-174833) Calling .GetState
	I1030 18:39:55.784551  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:39:55.784576  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:39:55.784948  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:39:55.785512  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:39:55.785543  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:39:55.786284  400041 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 18:39:55.786639  400041 kapi.go:59] client config for ha-174833: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.crt", KeyFile:"/home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.key", CAFile:"/home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439fe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
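
The rest.Config dump above maps directly onto client-go. A minimal sketch of constructing an equivalent client from the same host and TLS files (illustrative only; it assumes k8s.io/client-go is available as a module dependency, and minikube builds this through kapi.go rather than by hand):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	cfg := &rest.Config{
    		Host: "https://192.168.39.254:8443", // the HA VIP from the log
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: "/home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.crt",
    			KeyFile:  "/home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.key",
    			CAFile:   "/home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt",
    		},
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Same resource the GET/PUT round_trippers calls below touch.
    	scs, err := clientset.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("storage classes:", len(scs.Items))
    }
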
	I1030 18:39:55.787187  400041 cert_rotation.go:140] Starting client certificate rotation controller
	I1030 18:39:55.787507  400041 addons.go:234] Setting addon default-storageclass=true in "ha-174833"
	I1030 18:39:55.787549  400041 host.go:66] Checking if "ha-174833" exists ...
	I1030 18:39:55.787801  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:39:55.787828  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:39:55.801215  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37805
	I1030 18:39:55.801753  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:39:55.802347  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:39:55.802374  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:39:55.802582  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34105
	I1030 18:39:55.802754  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:39:55.802945  400041 main.go:141] libmachine: (ha-174833) Calling .GetState
	I1030 18:39:55.802995  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:39:55.803462  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:39:55.803485  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:39:55.803870  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:39:55.804468  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:39:55.804521  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:39:55.804806  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:39:55.807396  400041 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 18:39:55.808701  400041 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 18:39:55.808721  400041 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1030 18:39:55.808736  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:55.812067  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:55.812493  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:55.812517  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:55.812683  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:55.812860  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:55.813040  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:55.813181  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:39:55.820594  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34503
	I1030 18:39:55.821053  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:39:55.821596  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:39:55.821614  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:39:55.821907  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:39:55.822100  400041 main.go:141] libmachine: (ha-174833) Calling .GetState
	I1030 18:39:55.823784  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:39:55.824021  400041 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1030 18:39:55.824035  400041 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1030 18:39:55.824050  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:55.826783  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:55.827199  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:55.827215  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:55.827366  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:55.827540  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:55.827698  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:55.827825  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:39:55.887739  400041 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
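
The sed pipeline above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-only gateway; effectively it inserts the following hosts block (plus a `log` directive) into the Corefile ahead of the forward plugin:

    hosts {
       192.168.39.1 host.minikube.internal
       fallthrough
    }
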
	I1030 18:39:55.976821  400041 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1030 18:39:55.987770  400041 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 18:39:56.358196  400041 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1030 18:39:56.358229  400041 main.go:141] libmachine: Making call to close driver server
	I1030 18:39:56.358248  400041 main.go:141] libmachine: (ha-174833) Calling .Close
	I1030 18:39:56.358534  400041 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:39:56.358554  400041 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:39:56.358563  400041 main.go:141] libmachine: Making call to close driver server
	I1030 18:39:56.358570  400041 main.go:141] libmachine: (ha-174833) Calling .Close
	I1030 18:39:56.358835  400041 main.go:141] libmachine: (ha-174833) DBG | Closing plugin on server side
	I1030 18:39:56.358837  400041 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:39:56.358856  400041 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:39:56.358917  400041 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1030 18:39:56.358934  400041 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1030 18:39:56.359097  400041 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1030 18:39:56.359111  400041 round_trippers.go:469] Request Headers:
	I1030 18:39:56.359120  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:39:56.359128  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:39:56.431588  400041 round_trippers.go:574] Response Status: 200 OK in 72 milliseconds
	I1030 18:39:56.432175  400041 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1030 18:39:56.432191  400041 round_trippers.go:469] Request Headers:
	I1030 18:39:56.432198  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:39:56.432202  400041 round_trippers.go:473]     Content-Type: application/json
	I1030 18:39:56.432205  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:39:56.436115  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:39:56.436287  400041 main.go:141] libmachine: Making call to close driver server
	I1030 18:39:56.436303  400041 main.go:141] libmachine: (ha-174833) Calling .Close
	I1030 18:39:56.436618  400041 main.go:141] libmachine: (ha-174833) DBG | Closing plugin on server side
	I1030 18:39:56.436664  400041 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:39:56.436672  400041 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:39:56.590846  400041 main.go:141] libmachine: Making call to close driver server
	I1030 18:39:56.590868  400041 main.go:141] libmachine: (ha-174833) Calling .Close
	I1030 18:39:56.591203  400041 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:39:56.591227  400041 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:39:56.591236  400041 main.go:141] libmachine: Making call to close driver server
	I1030 18:39:56.591244  400041 main.go:141] libmachine: (ha-174833) Calling .Close
	I1030 18:39:56.591478  400041 main.go:141] libmachine: (ha-174833) DBG | Closing plugin on server side
	I1030 18:39:56.591507  400041 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:39:56.591514  400041 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:39:56.593000  400041 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1030 18:39:56.594031  400041 addons.go:510] duration metric: took 827.372801ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1030 18:39:56.594084  400041 start.go:246] waiting for cluster config update ...
	I1030 18:39:56.594100  400041 start.go:255] writing updated cluster config ...
	I1030 18:39:56.595822  400041 out.go:201] 
	I1030 18:39:56.597023  400041 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:39:56.597115  400041 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/config.json ...
	I1030 18:39:56.598537  400041 out.go:177] * Starting "ha-174833-m02" control-plane node in "ha-174833" cluster
	I1030 18:39:56.599471  400041 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 18:39:56.599502  400041 cache.go:56] Caching tarball of preloaded images
	I1030 18:39:56.599603  400041 preload.go:172] Found /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1030 18:39:56.599621  400041 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1030 18:39:56.599722  400041 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/config.json ...
	I1030 18:39:56.599927  400041 start.go:360] acquireMachinesLock for ha-174833-m02: {Name:mk35e25f53fa8cfadb39ca0ecdccfc2b3fbe845b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 18:39:56.599988  400041 start.go:364] duration metric: took 32.769µs to acquireMachinesLock for "ha-174833-m02"
	I1030 18:39:56.600025  400041 start.go:93] Provisioning new machine with config: &{Name:ha-174833 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-174833 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 18:39:56.600106  400041 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1030 18:39:56.601604  400041 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1030 18:39:56.601698  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:39:56.601732  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:39:56.616291  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39929
	I1030 18:39:56.616777  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:39:56.617304  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:39:56.617323  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:39:56.617636  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:39:56.617791  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetMachineName
	I1030 18:39:56.617923  400041 main.go:141] libmachine: (ha-174833-m02) Calling .DriverName
	I1030 18:39:56.618073  400041 start.go:159] libmachine.API.Create for "ha-174833" (driver="kvm2")
	I1030 18:39:56.618098  400041 client.go:168] LocalClient.Create starting
	I1030 18:39:56.618131  400041 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem
	I1030 18:39:56.618179  400041 main.go:141] libmachine: Decoding PEM data...
	I1030 18:39:56.618201  400041 main.go:141] libmachine: Parsing certificate...
	I1030 18:39:56.618275  400041 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem
	I1030 18:39:56.618304  400041 main.go:141] libmachine: Decoding PEM data...
	I1030 18:39:56.618320  400041 main.go:141] libmachine: Parsing certificate...
	I1030 18:39:56.618344  400041 main.go:141] libmachine: Running pre-create checks...
	I1030 18:39:56.618355  400041 main.go:141] libmachine: (ha-174833-m02) Calling .PreCreateCheck
	I1030 18:39:56.618511  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetConfigRaw
	I1030 18:39:56.618831  400041 main.go:141] libmachine: Creating machine...
	I1030 18:39:56.618844  400041 main.go:141] libmachine: (ha-174833-m02) Calling .Create
	I1030 18:39:56.618962  400041 main.go:141] libmachine: (ha-174833-m02) Creating KVM machine...
	I1030 18:39:56.620046  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found existing default KVM network
	I1030 18:39:56.620129  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found existing private KVM network mk-ha-174833
	I1030 18:39:56.620269  400041 main.go:141] libmachine: (ha-174833-m02) Setting up store path in /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02 ...
	I1030 18:39:56.620295  400041 main.go:141] libmachine: (ha-174833-m02) Building disk image from file:///home/jenkins/minikube-integration/19883-381834/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1030 18:39:56.620361  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:39:56.620250  400406 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 18:39:56.620446  400041 main.go:141] libmachine: (ha-174833-m02) Downloading /home/jenkins/minikube-integration/19883-381834/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19883-381834/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1030 18:39:56.895932  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:39:56.895765  400406 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02/id_rsa...
	I1030 18:39:57.037260  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:39:57.037116  400406 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02/ha-174833-m02.rawdisk...
	I1030 18:39:57.037293  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Writing magic tar header
	I1030 18:39:57.037303  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Writing SSH key tar header
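
The "Creating ssh key ... id_rsa" step at 18:39:56.895 generates the key pair that the later `new ssh client` lines authenticate with. A rough sketch of the equivalent generation (assumes golang.org/x/crypto/ssh is available; the output filenames are the conventional ones, not necessarily libmachine's exact write path):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"encoding/pem"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	// Private key, PEM-encoded (id_rsa).
    	priv := pem.EncodeToMemory(&pem.Block{
    		Type:  "RSA PRIVATE KEY",
    		Bytes: x509.MarshalPKCS1PrivateKey(key),
    	})
    	if err := os.WriteFile("id_rsa", priv, 0600); err != nil {
    		panic(err)
    	}
    	// Public key in authorized_keys format (id_rsa.pub).
    	pub, err := ssh.NewPublicKey(&key.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644); err != nil {
    		panic(err)
    	}
    }
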
	I1030 18:39:57.037311  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:39:57.037233  400406 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02 ...
	I1030 18:39:57.037321  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02
	I1030 18:39:57.037404  400041 main.go:141] libmachine: (ha-174833-m02) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02 (perms=drwx------)
	I1030 18:39:57.037429  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube/machines
	I1030 18:39:57.037440  400041 main.go:141] libmachine: (ha-174833-m02) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube/machines (perms=drwxr-xr-x)
	I1030 18:39:57.037453  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 18:39:57.037469  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834
	I1030 18:39:57.037479  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1030 18:39:57.037487  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Checking permissions on dir: /home/jenkins
	I1030 18:39:57.037494  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Checking permissions on dir: /home
	I1030 18:39:57.037515  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Skipping /home - not owner
	I1030 18:39:57.037531  400041 main.go:141] libmachine: (ha-174833-m02) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube (perms=drwxr-xr-x)
	I1030 18:39:57.037546  400041 main.go:141] libmachine: (ha-174833-m02) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834 (perms=drwxrwxr-x)
	I1030 18:39:57.037559  400041 main.go:141] libmachine: (ha-174833-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1030 18:39:57.037569  400041 main.go:141] libmachine: (ha-174833-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1030 18:39:57.037577  400041 main.go:141] libmachine: (ha-174833-m02) Creating domain...
	I1030 18:39:57.038511  400041 main.go:141] libmachine: (ha-174833-m02) define libvirt domain using xml: 
	I1030 18:39:57.038531  400041 main.go:141] libmachine: (ha-174833-m02) <domain type='kvm'>
	I1030 18:39:57.038538  400041 main.go:141] libmachine: (ha-174833-m02)   <name>ha-174833-m02</name>
	I1030 18:39:57.038542  400041 main.go:141] libmachine: (ha-174833-m02)   <memory unit='MiB'>2200</memory>
	I1030 18:39:57.038549  400041 main.go:141] libmachine: (ha-174833-m02)   <vcpu>2</vcpu>
	I1030 18:39:57.038556  400041 main.go:141] libmachine: (ha-174833-m02)   <features>
	I1030 18:39:57.038563  400041 main.go:141] libmachine: (ha-174833-m02)     <acpi/>
	I1030 18:39:57.038569  400041 main.go:141] libmachine: (ha-174833-m02)     <apic/>
	I1030 18:39:57.038577  400041 main.go:141] libmachine: (ha-174833-m02)     <pae/>
	I1030 18:39:57.038587  400041 main.go:141] libmachine: (ha-174833-m02)     
	I1030 18:39:57.038594  400041 main.go:141] libmachine: (ha-174833-m02)   </features>
	I1030 18:39:57.038601  400041 main.go:141] libmachine: (ha-174833-m02)   <cpu mode='host-passthrough'>
	I1030 18:39:57.038605  400041 main.go:141] libmachine: (ha-174833-m02)   
	I1030 18:39:57.038610  400041 main.go:141] libmachine: (ha-174833-m02)   </cpu>
	I1030 18:39:57.038636  400041 main.go:141] libmachine: (ha-174833-m02)   <os>
	I1030 18:39:57.038660  400041 main.go:141] libmachine: (ha-174833-m02)     <type>hvm</type>
	I1030 18:39:57.038683  400041 main.go:141] libmachine: (ha-174833-m02)     <boot dev='cdrom'/>
	I1030 18:39:57.038700  400041 main.go:141] libmachine: (ha-174833-m02)     <boot dev='hd'/>
	I1030 18:39:57.038708  400041 main.go:141] libmachine: (ha-174833-m02)     <bootmenu enable='no'/>
	I1030 18:39:57.038712  400041 main.go:141] libmachine: (ha-174833-m02)   </os>
	I1030 18:39:57.038717  400041 main.go:141] libmachine: (ha-174833-m02)   <devices>
	I1030 18:39:57.038725  400041 main.go:141] libmachine: (ha-174833-m02)     <disk type='file' device='cdrom'>
	I1030 18:39:57.038734  400041 main.go:141] libmachine: (ha-174833-m02)       <source file='/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02/boot2docker.iso'/>
	I1030 18:39:57.038744  400041 main.go:141] libmachine: (ha-174833-m02)       <target dev='hdc' bus='scsi'/>
	I1030 18:39:57.038752  400041 main.go:141] libmachine: (ha-174833-m02)       <readonly/>
	I1030 18:39:57.038764  400041 main.go:141] libmachine: (ha-174833-m02)     </disk>
	I1030 18:39:57.038780  400041 main.go:141] libmachine: (ha-174833-m02)     <disk type='file' device='disk'>
	I1030 18:39:57.038790  400041 main.go:141] libmachine: (ha-174833-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1030 18:39:57.038805  400041 main.go:141] libmachine: (ha-174833-m02)       <source file='/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02/ha-174833-m02.rawdisk'/>
	I1030 18:39:57.038815  400041 main.go:141] libmachine: (ha-174833-m02)       <target dev='hda' bus='virtio'/>
	I1030 18:39:57.038825  400041 main.go:141] libmachine: (ha-174833-m02)     </disk>
	I1030 18:39:57.038832  400041 main.go:141] libmachine: (ha-174833-m02)     <interface type='network'>
	I1030 18:39:57.038844  400041 main.go:141] libmachine: (ha-174833-m02)       <source network='mk-ha-174833'/>
	I1030 18:39:57.038858  400041 main.go:141] libmachine: (ha-174833-m02)       <model type='virtio'/>
	I1030 18:39:57.038874  400041 main.go:141] libmachine: (ha-174833-m02)     </interface>
	I1030 18:39:57.038892  400041 main.go:141] libmachine: (ha-174833-m02)     <interface type='network'>
	I1030 18:39:57.038901  400041 main.go:141] libmachine: (ha-174833-m02)       <source network='default'/>
	I1030 18:39:57.038911  400041 main.go:141] libmachine: (ha-174833-m02)       <model type='virtio'/>
	I1030 18:39:57.038922  400041 main.go:141] libmachine: (ha-174833-m02)     </interface>
	I1030 18:39:57.038931  400041 main.go:141] libmachine: (ha-174833-m02)     <serial type='pty'>
	I1030 18:39:57.038937  400041 main.go:141] libmachine: (ha-174833-m02)       <target port='0'/>
	I1030 18:39:57.038943  400041 main.go:141] libmachine: (ha-174833-m02)     </serial>
	I1030 18:39:57.038948  400041 main.go:141] libmachine: (ha-174833-m02)     <console type='pty'>
	I1030 18:39:57.038955  400041 main.go:141] libmachine: (ha-174833-m02)       <target type='serial' port='0'/>
	I1030 18:39:57.038981  400041 main.go:141] libmachine: (ha-174833-m02)     </console>
	I1030 18:39:57.039004  400041 main.go:141] libmachine: (ha-174833-m02)     <rng model='virtio'>
	I1030 18:39:57.039017  400041 main.go:141] libmachine: (ha-174833-m02)       <backend model='random'>/dev/random</backend>
	I1030 18:39:57.039026  400041 main.go:141] libmachine: (ha-174833-m02)     </rng>
	I1030 18:39:57.039033  400041 main.go:141] libmachine: (ha-174833-m02)     
	I1030 18:39:57.039042  400041 main.go:141] libmachine: (ha-174833-m02)     
	I1030 18:39:57.039050  400041 main.go:141] libmachine: (ha-174833-m02)   </devices>
	I1030 18:39:57.039059  400041 main.go:141] libmachine: (ha-174833-m02) </domain>
	I1030 18:39:57.039073  400041 main.go:141] libmachine: (ha-174833-m02) 
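The domain XML dumped above is handed to libvirt with a define-then-start flow ("define libvirt domain using xml" followed by "Creating domain..."). The following is only a minimal sketch of that flow using the libvirt Go bindings, not the kvm2 driver's actual code; the connection URI and the domain.xml file name are assumptions for illustration.

package main

import (
	"log"
	"os"

	"libvirt.org/go/libvirt" // libvirt Go bindings (assumed import path; needs libvirt dev headers)
)

func main() {
	// Connect to the local system hypervisor, as the kvm2 driver does.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// domain.xml is a stand-in for the XML printed in the log above.
	xml, err := os.ReadFile("domain.xml")
	if err != nil {
		log.Fatalf("read xml: %v", err)
	}

	// Define the persistent domain, then start it.
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatalf("define: %v", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatalf("start: %v", err)
	}
	log.Println("domain defined and started")
}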
	I1030 18:39:57.045751  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:a3:4c:dc in network default
	I1030 18:39:57.046326  400041 main.go:141] libmachine: (ha-174833-m02) Ensuring networks are active...
	I1030 18:39:57.046349  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:39:57.047038  400041 main.go:141] libmachine: (ha-174833-m02) Ensuring network default is active
	I1030 18:39:57.047398  400041 main.go:141] libmachine: (ha-174833-m02) Ensuring network mk-ha-174833 is active
	I1030 18:39:57.047750  400041 main.go:141] libmachine: (ha-174833-m02) Getting domain xml...
	I1030 18:39:57.048296  400041 main.go:141] libmachine: (ha-174833-m02) Creating domain...
	I1030 18:39:58.272260  400041 main.go:141] libmachine: (ha-174833-m02) Waiting to get IP...
	I1030 18:39:58.273021  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:39:58.273425  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:39:58.273496  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:39:58.273425  400406 retry.go:31] will retry after 283.659874ms: waiting for machine to come up
	I1030 18:39:58.559077  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:39:58.559572  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:39:58.559595  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:39:58.559530  400406 retry.go:31] will retry after 285.421922ms: waiting for machine to come up
	I1030 18:39:58.847321  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:39:58.847766  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:39:58.847795  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:39:58.847719  400406 retry.go:31] will retry after 459.416019ms: waiting for machine to come up
	I1030 18:39:59.308465  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:39:59.308944  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:39:59.309003  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:39:59.308931  400406 retry.go:31] will retry after 572.494843ms: waiting for machine to come up
	I1030 18:39:59.882664  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:39:59.883063  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:39:59.883097  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:39:59.883044  400406 retry.go:31] will retry after 513.18543ms: waiting for machine to come up
	I1030 18:40:00.397389  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:00.397747  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:40:00.397783  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:40:00.397729  400406 retry.go:31] will retry after 755.433082ms: waiting for machine to come up
	I1030 18:40:01.155395  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:01.155948  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:40:01.155979  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:40:01.155903  400406 retry.go:31] will retry after 1.038364995s: waiting for machine to come up
	I1030 18:40:02.195482  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:02.195950  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:40:02.195980  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:40:02.195911  400406 retry.go:31] will retry after 1.004508468s: waiting for machine to come up
	I1030 18:40:03.201825  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:03.202261  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:40:03.202291  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:40:03.202205  400406 retry.go:31] will retry after 1.786384374s: waiting for machine to come up
	I1030 18:40:04.989943  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:04.990350  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:40:04.990371  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:40:04.990297  400406 retry.go:31] will retry after 1.895963981s: waiting for machine to come up
	I1030 18:40:06.888049  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:06.888464  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:40:06.888488  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:40:06.888417  400406 retry.go:31] will retry after 1.948037202s: waiting for machine to come up
	I1030 18:40:08.839488  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:08.839847  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:40:08.839869  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:40:08.839824  400406 retry.go:31] will retry after 3.202281785s: waiting for machine to come up
	I1030 18:40:12.043324  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:12.043675  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:40:12.043695  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:40:12.043618  400406 retry.go:31] will retry after 3.877667252s: waiting for machine to come up
	I1030 18:40:15.924012  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:15.924431  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:40:15.924456  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:40:15.924364  400406 retry.go:31] will retry after 3.471906375s: waiting for machine to come up
	I1030 18:40:19.399252  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.399675  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has current primary IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.399693  400041 main.go:141] libmachine: (ha-174833-m02) Found IP for machine: 192.168.39.67
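The repeated "will retry after ...: waiting for machine to come up" lines above come from polling the DHCP leases for the guest's MAC with a growing delay until an address appears. A minimal, generic sketch of that retry-with-backoff pattern follows; the poll function, attempt count, and delays here are assumptions, not minikube's retry.go.

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff calls fn until it succeeds or attempts run out,
// sleeping a little longer after each failure, similar to the
// "will retry after ..." messages in the log above.
func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the wait between polls
	}
	return errors.New("machine never reported an IP address")
}

func main() {
	tries := 0
	lookupIP := func() error { // stand-in for querying DHCP leases by MAC
		tries++
		if tries < 4 {
			return errors.New("unable to find current IP address")
		}
		return nil
	}
	if err := retryWithBackoff(15, 300*time.Millisecond, lookupIP); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("found IP for machine")
}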
	I1030 18:40:19.399744  400041 main.go:141] libmachine: (ha-174833-m02) Reserving static IP address...
	I1030 18:40:19.400103  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find host DHCP lease matching {name: "ha-174833-m02", mac: "52:54:00:87:fa:1a", ip: "192.168.39.67"} in network mk-ha-174833
	I1030 18:40:19.473268  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Getting to WaitForSSH function...
	I1030 18:40:19.473299  400041 main.go:141] libmachine: (ha-174833-m02) Reserved static IP address: 192.168.39.67
	I1030 18:40:19.473352  400041 main.go:141] libmachine: (ha-174833-m02) Waiting for SSH to be available...
	I1030 18:40:19.476054  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.476545  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:minikube Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:19.476573  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.476733  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Using SSH client type: external
	I1030 18:40:19.476781  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02/id_rsa (-rw-------)
	I1030 18:40:19.476820  400041 main.go:141] libmachine: (ha-174833-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.67 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 18:40:19.476836  400041 main.go:141] libmachine: (ha-174833-m02) DBG | About to run SSH command:
	I1030 18:40:19.476843  400041 main.go:141] libmachine: (ha-174833-m02) DBG | exit 0
	I1030 18:40:19.602200  400041 main.go:141] libmachine: (ha-174833-m02) DBG | SSH cmd err, output: <nil>: 
	I1030 18:40:19.602526  400041 main.go:141] libmachine: (ha-174833-m02) KVM machine creation complete!
	I1030 18:40:19.602867  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetConfigRaw
	I1030 18:40:19.603528  400041 main.go:141] libmachine: (ha-174833-m02) Calling .DriverName
	I1030 18:40:19.603721  400041 main.go:141] libmachine: (ha-174833-m02) Calling .DriverName
	I1030 18:40:19.603921  400041 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1030 18:40:19.603937  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetState
	I1030 18:40:19.605043  400041 main.go:141] libmachine: Detecting operating system of created instance...
	I1030 18:40:19.605054  400041 main.go:141] libmachine: Waiting for SSH to be available...
	I1030 18:40:19.605059  400041 main.go:141] libmachine: Getting to WaitForSSH function...
	I1030 18:40:19.605064  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHHostname
	I1030 18:40:19.607164  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.607533  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:19.607561  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.607643  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHPort
	I1030 18:40:19.607921  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:19.608107  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:19.608292  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHUsername
	I1030 18:40:19.608458  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:40:19.608704  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1030 18:40:19.608730  400041 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1030 18:40:19.709697  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
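The native SSH probe above simply runs "exit 0" against the guest with the machine's private key to confirm it is reachable before provisioning continues. A minimal sketch of the same check with golang.org/x/crypto/ssh is shown below; the address, user, and key path are illustrative stand-ins, not values taken from this run beyond what the log already shows.

package main

import (
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// probeSSH dials the guest and runs "exit 0", mirroring the WaitForSSH check above.
func probeSSH(addr, user, keyPath string) error {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // the logged ssh command disables StrictHostKeyChecking too
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	return session.Run("exit 0") // succeeds only if the remote command exits 0
}

func main() {
	if err := probeSSH("192.168.39.67:22", "docker", "id_rsa"); err != nil {
		log.Fatalf("ssh not ready: %v", err)
	}
	log.Println("ssh available")
}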
	I1030 18:40:19.709726  400041 main.go:141] libmachine: Detecting the provisioner...
	I1030 18:40:19.709734  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHHostname
	I1030 18:40:19.712480  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.712863  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:19.712908  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.713089  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHPort
	I1030 18:40:19.713318  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:19.713487  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:19.713620  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHUsername
	I1030 18:40:19.713800  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:40:19.714020  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1030 18:40:19.714034  400041 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1030 18:40:19.823287  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1030 18:40:19.823400  400041 main.go:141] libmachine: found compatible host: buildroot
	I1030 18:40:19.823413  400041 main.go:141] libmachine: Provisioning with buildroot...
	I1030 18:40:19.823424  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetMachineName
	I1030 18:40:19.823703  400041 buildroot.go:166] provisioning hostname "ha-174833-m02"
	I1030 18:40:19.823731  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetMachineName
	I1030 18:40:19.823950  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHHostname
	I1030 18:40:19.826635  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.827060  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:19.827086  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.827137  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHPort
	I1030 18:40:19.827303  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:19.827450  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:19.827602  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHUsername
	I1030 18:40:19.827740  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:40:19.827922  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1030 18:40:19.827936  400041 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-174833-m02 && echo "ha-174833-m02" | sudo tee /etc/hostname
	I1030 18:40:19.945348  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-174833-m02
	
	I1030 18:40:19.945376  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHHostname
	I1030 18:40:19.948392  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.948756  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:19.948806  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.948936  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHPort
	I1030 18:40:19.949124  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:19.949286  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:19.949399  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHUsername
	I1030 18:40:19.949565  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:40:19.949742  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1030 18:40:19.949759  400041 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-174833-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-174833-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-174833-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 18:40:20.059828  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 18:40:20.059870  400041 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 18:40:20.059905  400041 buildroot.go:174] setting up certificates
	I1030 18:40:20.059915  400041 provision.go:84] configureAuth start
	I1030 18:40:20.059930  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetMachineName
	I1030 18:40:20.060203  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetIP
	I1030 18:40:20.062825  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.063237  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:20.063262  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.063417  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHHostname
	I1030 18:40:20.065380  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.065695  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:20.065725  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.065881  400041 provision.go:143] copyHostCerts
	I1030 18:40:20.065925  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 18:40:20.066007  400041 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem, removing ...
	I1030 18:40:20.066020  400041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 18:40:20.066101  400041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 18:40:20.066211  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 18:40:20.066236  400041 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem, removing ...
	I1030 18:40:20.066244  400041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 18:40:20.066288  400041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 18:40:20.066357  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 18:40:20.066380  400041 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem, removing ...
	I1030 18:40:20.066386  400041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 18:40:20.066420  400041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 18:40:20.066508  400041 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.ha-174833-m02 san=[127.0.0.1 192.168.39.67 ha-174833-m02 localhost minikube]
	I1030 18:40:20.314819  400041 provision.go:177] copyRemoteCerts
	I1030 18:40:20.314902  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 18:40:20.314940  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHHostname
	I1030 18:40:20.317541  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.317873  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:20.317916  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.318094  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHPort
	I1030 18:40:20.318304  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:20.318547  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHUsername
	I1030 18:40:20.318726  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02/id_rsa Username:docker}
	I1030 18:40:20.405714  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1030 18:40:20.405820  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 18:40:20.431726  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1030 18:40:20.431798  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1030 18:40:20.455138  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1030 18:40:20.455222  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1030 18:40:20.477773  400041 provision.go:87] duration metric: took 417.842724ms to configureAuth
	I1030 18:40:20.477806  400041 buildroot.go:189] setting minikube options for container-runtime
	I1030 18:40:20.478018  400041 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:40:20.478120  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHHostname
	I1030 18:40:20.480885  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.481224  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:20.481250  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.481424  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHPort
	I1030 18:40:20.481637  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:20.481775  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:20.481966  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHUsername
	I1030 18:40:20.482148  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:40:20.482322  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1030 18:40:20.482338  400041 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 18:40:20.706339  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 18:40:20.706375  400041 main.go:141] libmachine: Checking connection to Docker...
	I1030 18:40:20.706387  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetURL
	I1030 18:40:20.707589  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Using libvirt version 6000000
	I1030 18:40:20.709597  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.709934  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:20.709964  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.710106  400041 main.go:141] libmachine: Docker is up and running!
	I1030 18:40:20.710135  400041 main.go:141] libmachine: Reticulating splines...
	I1030 18:40:20.710147  400041 client.go:171] duration metric: took 24.092036555s to LocalClient.Create
	I1030 18:40:20.710176  400041 start.go:167] duration metric: took 24.092106335s to libmachine.API.Create "ha-174833"
	I1030 18:40:20.710186  400041 start.go:293] postStartSetup for "ha-174833-m02" (driver="kvm2")
	I1030 18:40:20.710195  400041 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 18:40:20.710231  400041 main.go:141] libmachine: (ha-174833-m02) Calling .DriverName
	I1030 18:40:20.710468  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 18:40:20.710503  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHHostname
	I1030 18:40:20.712432  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.712689  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:20.712717  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.712824  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHPort
	I1030 18:40:20.713017  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:20.713185  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHUsername
	I1030 18:40:20.713308  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02/id_rsa Username:docker}
	I1030 18:40:20.793164  400041 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 18:40:20.797557  400041 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 18:40:20.797583  400041 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 18:40:20.797648  400041 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 18:40:20.797720  400041 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 18:40:20.797730  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> /etc/ssl/certs/3891442.pem
	I1030 18:40:20.797807  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 18:40:20.807375  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 18:40:20.830866  400041 start.go:296] duration metric: took 120.664021ms for postStartSetup
	I1030 18:40:20.830929  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetConfigRaw
	I1030 18:40:20.831701  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetIP
	I1030 18:40:20.834714  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.835086  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:20.835116  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.835438  400041 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/config.json ...
	I1030 18:40:20.835668  400041 start.go:128] duration metric: took 24.235548343s to createHost
	I1030 18:40:20.835700  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHHostname
	I1030 18:40:20.837613  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.837888  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:20.837916  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.838041  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHPort
	I1030 18:40:20.838176  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:20.838317  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:20.838450  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHUsername
	I1030 18:40:20.838592  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:40:20.838755  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1030 18:40:20.838765  400041 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 18:40:20.939393  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730313620.914818123
	
	I1030 18:40:20.939419  400041 fix.go:216] guest clock: 1730313620.914818123
	I1030 18:40:20.939430  400041 fix.go:229] Guest: 2024-10-30 18:40:20.914818123 +0000 UTC Remote: 2024-10-30 18:40:20.835684734 +0000 UTC m=+67.590472244 (delta=79.133389ms)
	I1030 18:40:20.939453  400041 fix.go:200] guest clock delta is within tolerance: 79.133389ms
	I1030 18:40:20.939460  400041 start.go:83] releasing machines lock for "ha-174833-m02", held for 24.339459492s
	I1030 18:40:20.939487  400041 main.go:141] libmachine: (ha-174833-m02) Calling .DriverName
	I1030 18:40:20.939802  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetIP
	I1030 18:40:20.942445  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.942801  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:20.942827  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.945268  400041 out.go:177] * Found network options:
	I1030 18:40:20.946721  400041 out.go:177]   - NO_PROXY=192.168.39.141
	W1030 18:40:20.947877  400041 proxy.go:119] fail to check proxy env: Error ip not in block
	I1030 18:40:20.947925  400041 main.go:141] libmachine: (ha-174833-m02) Calling .DriverName
	I1030 18:40:20.948482  400041 main.go:141] libmachine: (ha-174833-m02) Calling .DriverName
	I1030 18:40:20.948657  400041 main.go:141] libmachine: (ha-174833-m02) Calling .DriverName
	I1030 18:40:20.948763  400041 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 18:40:20.948808  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHHostname
	W1030 18:40:20.948877  400041 proxy.go:119] fail to check proxy env: Error ip not in block
	I1030 18:40:20.948974  400041 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 18:40:20.948998  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHHostname
	I1030 18:40:20.951510  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.951591  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.951860  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:20.951890  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.951915  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:20.951926  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.952047  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHPort
	I1030 18:40:20.952193  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHPort
	I1030 18:40:20.952262  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:20.952409  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:20.952476  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHUsername
	I1030 18:40:20.952535  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHUsername
	I1030 18:40:20.952595  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02/id_rsa Username:docker}
	I1030 18:40:20.952723  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02/id_rsa Username:docker}
	I1030 18:40:21.182304  400041 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 18:40:21.188738  400041 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 18:40:21.188808  400041 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 18:40:21.205984  400041 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 18:40:21.206007  400041 start.go:495] detecting cgroup driver to use...
	I1030 18:40:21.206074  400041 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 18:40:21.221839  400041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 18:40:21.235753  400041 docker.go:217] disabling cri-docker service (if available) ...
	I1030 18:40:21.235807  400041 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 18:40:21.249998  400041 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 18:40:21.263401  400041 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 18:40:21.372667  400041 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 18:40:21.535477  400041 docker.go:233] disabling docker service ...
	I1030 18:40:21.535567  400041 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 18:40:21.549384  400041 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 18:40:21.561708  400041 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 18:40:21.680746  400041 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 18:40:21.800498  400041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 18:40:21.815096  400041 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 18:40:21.833550  400041 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1030 18:40:21.833622  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:40:21.843823  400041 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 18:40:21.843902  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:40:21.854106  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:40:21.864400  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:40:21.874387  400041 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 18:40:21.884560  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:40:21.895371  400041 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:40:21.913811  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:40:21.924236  400041 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 18:40:21.933153  400041 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 18:40:21.933202  400041 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 18:40:21.946248  400041 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 18:40:21.955404  400041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 18:40:22.069005  400041 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1030 18:40:22.157442  400041 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 18:40:22.157509  400041 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 18:40:22.162047  400041 start.go:563] Will wait 60s for crictl version
	I1030 18:40:22.162100  400041 ssh_runner.go:195] Run: which crictl
	I1030 18:40:22.165636  400041 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 18:40:22.205156  400041 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1030 18:40:22.205267  400041 ssh_runner.go:195] Run: crio --version
	I1030 18:40:22.231913  400041 ssh_runner.go:195] Run: crio --version
	I1030 18:40:22.261339  400041 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1030 18:40:22.262679  400041 out.go:177]   - env NO_PROXY=192.168.39.141
	I1030 18:40:22.263832  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetIP
	I1030 18:40:22.266556  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:22.266888  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:22.266915  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:22.267123  400041 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1030 18:40:22.271259  400041 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 18:40:22.283359  400041 mustload.go:65] Loading cluster: ha-174833
	I1030 18:40:22.283542  400041 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:40:22.283792  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:40:22.283835  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:40:22.298878  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43999
	I1030 18:40:22.299305  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:40:22.299796  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:40:22.299822  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:40:22.300116  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:40:22.300325  400041 main.go:141] libmachine: (ha-174833) Calling .GetState
	I1030 18:40:22.301824  400041 host.go:66] Checking if "ha-174833" exists ...
	I1030 18:40:22.302129  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:40:22.302167  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:40:22.316968  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45427
	I1030 18:40:22.317445  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:40:22.317883  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:40:22.317906  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:40:22.318227  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:40:22.318396  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:40:22.318552  400041 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833 for IP: 192.168.39.67
	I1030 18:40:22.318566  400041 certs.go:194] generating shared ca certs ...
	I1030 18:40:22.318581  400041 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:40:22.318722  400041 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 18:40:22.318763  400041 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 18:40:22.318772  400041 certs.go:256] generating profile certs ...
	I1030 18:40:22.318861  400041 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.key
	I1030 18:40:22.318884  400041 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.21314801
	I1030 18:40:22.318898  400041 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.21314801 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.141 192.168.39.67 192.168.39.254]
	I1030 18:40:22.389619  400041 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.21314801 ...
	I1030 18:40:22.389649  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.21314801: {Name:mk69c03eb6b5f0b4d0acc4a4891d260deacb4aa2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:40:22.389835  400041 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.21314801 ...
	I1030 18:40:22.389853  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.21314801: {Name:mkc4587720139321b37dc723905edfa912a066e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:40:22.389954  400041 certs.go:381] copying /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.21314801 -> /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt
	I1030 18:40:22.390078  400041 certs.go:385] copying /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.21314801 -> /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key
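The apiserver profile cert generated above is a CA-signed server certificate whose SANs cover the service IP, loopback, both control-plane node IPs, and the shared virtual IP. The sketch below shows how such a cert can be signed with Go's crypto/x509 under stated assumptions: a throwaway CA is generated inline (minikube would instead load its existing ca.crt/ca.key), and the SAN list is copied from the log line above for illustration.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Toy CA key pair; a real flow would load the existing CA cert and key instead.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Server certificate with the SANs listed in the log line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube", Organization: []string{"jenkins.ha-174833-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-174833-m02", "localhost", "minikube"},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.141"), net.ParseIP("192.168.39.67"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	// Print the signed server certificate in PEM form.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}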
	I1030 18:40:22.390209  400041 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key
	I1030 18:40:22.390226  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1030 18:40:22.390240  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1030 18:40:22.390253  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1030 18:40:22.390265  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1030 18:40:22.390276  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1030 18:40:22.390291  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1030 18:40:22.390303  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1030 18:40:22.390314  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1030 18:40:22.390363  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem (1338 bytes)
	W1030 18:40:22.390392  400041 certs.go:480] ignoring /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144_empty.pem, impossibly tiny 0 bytes
	I1030 18:40:22.390401  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 18:40:22.390423  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 18:40:22.390447  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 18:40:22.390467  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 18:40:22.390526  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 18:40:22.390551  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:40:22.390567  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem -> /usr/share/ca-certificates/389144.pem
	I1030 18:40:22.390579  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> /usr/share/ca-certificates/3891442.pem
	I1030 18:40:22.390609  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:40:22.393533  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:40:22.393916  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:40:22.393937  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:40:22.394139  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:40:22.394328  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:40:22.394468  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:40:22.394599  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:40:22.466820  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1030 18:40:22.472172  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1030 18:40:22.483413  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1030 18:40:22.487802  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1030 18:40:22.498142  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1030 18:40:22.502005  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1030 18:40:22.511789  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1030 18:40:22.516194  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1030 18:40:22.526092  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1030 18:40:22.530300  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1030 18:40:22.539761  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1030 18:40:22.543659  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1030 18:40:22.554032  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 18:40:22.579429  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 18:40:22.603366  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 18:40:22.627011  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 18:40:22.649824  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1030 18:40:22.675859  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1030 18:40:22.702878  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 18:40:22.729191  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1030 18:40:22.755783  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 18:40:22.781937  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem --> /usr/share/ca-certificates/389144.pem (1338 bytes)
	I1030 18:40:22.806557  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /usr/share/ca-certificates/3891442.pem (1708 bytes)
	I1030 18:40:22.829559  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1030 18:40:22.845492  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1030 18:40:22.861140  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1030 18:40:22.877798  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1030 18:40:22.894364  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1030 18:40:22.910766  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1030 18:40:22.926975  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1030 18:40:22.944058  400041 ssh_runner.go:195] Run: openssl version
	I1030 18:40:22.949888  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/389144.pem && ln -fs /usr/share/ca-certificates/389144.pem /etc/ssl/certs/389144.pem"
	I1030 18:40:22.960383  400041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/389144.pem
	I1030 18:40:22.964756  400041 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 18:40:22.964810  400041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/389144.pem
	I1030 18:40:22.970419  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/389144.pem /etc/ssl/certs/51391683.0"
	I1030 18:40:22.980880  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3891442.pem && ln -fs /usr/share/ca-certificates/3891442.pem /etc/ssl/certs/3891442.pem"
	I1030 18:40:22.991033  400041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3891442.pem
	I1030 18:40:22.995374  400041 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 18:40:22.995440  400041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3891442.pem
	I1030 18:40:23.000879  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3891442.pem /etc/ssl/certs/3ec20f2e.0"
	I1030 18:40:23.011335  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 18:40:23.021800  400041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:40:23.026327  400041 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:40:23.026385  400041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:40:23.032188  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
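
The lines above install each CA bundle under /usr/share/ca-certificates and then link it into /etc/ssl/certs under its OpenSSL subject-hash name (for example b5213941.0 for minikubeCA.pem) so the system trust store can resolve it. A minimal Go sketch of that hash-and-link step, using the same `openssl x509 -hash -noout -in` invocation the log runs; the paths come from the log, while the simplified error handling and the ln -fs emulation are assumptions of the sketch:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // linkCertByHash mirrors the pattern in the log: compute the OpenSSL
    // subject hash of a PEM certificate and symlink it as <hash>.0 under
    // /etc/ssl/certs so TLS verification can find it.
    func linkCertByHash(pemPath string) error {
    	// Same command the log runs for each installed certificate.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", pemPath, err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := "/etc/ssl/certs/" + hash + ".0"
    	// Equivalent of ln -fs: replace any existing link before creating it.
    	_ = os.Remove(link)
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
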
	I1030 18:40:23.042278  400041 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 18:40:23.046274  400041 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1030 18:40:23.046324  400041 kubeadm.go:934] updating node {m02 192.168.39.67 8443 v1.31.2 crio true true} ...
	I1030 18:40:23.046424  400041 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-174833-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.67
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-174833 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1030 18:40:23.046460  400041 kube-vip.go:115] generating kube-vip config ...
	I1030 18:40:23.046517  400041 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1030 18:40:23.063163  400041 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1030 18:40:23.063236  400041 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
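
The manifest above is a static pod: kube-vip runs on each control-plane node, competes for the plndr-cp-lock lease, answers ARP for the virtual IP 192.168.39.254, and load-balances the API server port 8443. As a rough sketch of how such a manifest can be rendered from per-cluster values, the snippet below fills a trimmed template with the VIP, port, interface and image taken from the config above; the template text itself is an illustration assumed for this sketch, not minikube's actual kube-vip template:

    package main

    import (
    	"os"
    	"text/template"
    )

    // vipParams carries the values that vary per cluster in the manifest above.
    type vipParams struct {
    	VIP       string // e.g. 192.168.39.254
    	Port      string // e.g. "8443"
    	Interface string // e.g. eth0
    	Image     string // e.g. ghcr.io/kube-vip/kube-vip:v0.8.4
    }

    const vipTemplate = `apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      containers:
      - args: ["manager"]
        env:
        - {name: vip_arp, value: "true"}
        - {name: port, value: "{{.Port}}"}
        - {name: vip_interface, value: {{.Interface}}}
        - {name: cp_enable, value: "true"}
        - {name: lb_enable, value: "true"}
        - {name: address, value: {{.VIP}}}
        image: {{.Image}}
        name: kube-vip
      hostNetwork: true
    `

    func main() {
    	t := template.Must(template.New("kube-vip").Parse(vipTemplate))
    	// Values mirror the generated config in the log.
    	p := vipParams{VIP: "192.168.39.254", Port: "8443", Interface: "eth0", Image: "ghcr.io/kube-vip/kube-vip:v0.8.4"}
    	if err := t.Execute(os.Stdout, p); err != nil {
    		panic(err)
    	}
    }
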
	I1030 18:40:23.063297  400041 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1030 18:40:23.072465  400041 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1030 18:40:23.072510  400041 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1030 18:40:23.081550  400041 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1030 18:40:23.081576  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1030 18:40:23.081589  400041 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1030 18:40:23.081602  400041 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1030 18:40:23.081619  400041 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1030 18:40:23.085961  400041 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1030 18:40:23.085992  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1030 18:40:24.328288  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1030 18:40:24.328373  400041 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1030 18:40:24.333326  400041 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1030 18:40:24.333359  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1030 18:40:24.830276  400041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 18:40:24.845774  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1030 18:40:24.845893  400041 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1030 18:40:24.850314  400041 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1030 18:40:24.850355  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
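
The lines above fetch kubectl, kubeadm and kubelet from dl.k8s.io with a checksum=file:...sha256 query and then scp each binary onto the node. A minimal standard-library sketch of that download-and-verify step; the release URL follows the pattern shown in the log, while the destination path under /tmp and the simplified error handling are assumptions of the sketch:

    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"strings"
    )

    // fetch downloads url into dst and returns the SHA-256 of the payload,
    // hashing the stream while it is written to disk.
    func fetch(url, dst string) (string, error) {
    	resp, err := http.Get(url)
    	if err != nil {
    		return "", err
    	}
    	defer resp.Body.Close()
    	f, err := os.Create(dst)
    	if err != nil {
    		return "", err
    	}
    	defer f.Close()
    	h := sha256.New()
    	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
    		return "", err
    	}
    	return hex.EncodeToString(h.Sum(nil)), nil
    }

    func main() {
    	base := "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet"
    	sum, err := fetch(base, "/tmp/kubelet")
    	if err != nil {
    		panic(err)
    	}
    	// The published .sha256 file carries the expected digest.
    	resp, err := http.Get(base + ".sha256")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	want, _ := io.ReadAll(resp.Body)
    	if !strings.HasPrefix(strings.TrimSpace(string(want)), sum) {
    		panic(fmt.Sprintf("checksum mismatch for kubelet: got %s", sum))
    	}
    	fmt.Println("kubelet verified:", sum)
    }
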
	I1030 18:40:25.162230  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1030 18:40:25.172064  400041 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1030 18:40:25.188645  400041 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 18:40:25.204815  400041 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1030 18:40:25.221977  400041 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1030 18:40:25.225934  400041 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 18:40:25.237891  400041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 18:40:25.349561  400041 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 18:40:25.366698  400041 host.go:66] Checking if "ha-174833" exists ...
	I1030 18:40:25.367180  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:40:25.367246  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:40:25.384828  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42387
	I1030 18:40:25.385432  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:40:25.386031  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:40:25.386061  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:40:25.386434  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:40:25.386621  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:40:25.386806  400041 start.go:317] joinCluster: &{Name:ha-174833 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cluster
Name:ha-174833 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 18:40:25.386959  400041 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1030 18:40:25.386986  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:40:25.389976  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:40:25.390481  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:40:25.390522  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:40:25.390674  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:40:25.390889  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:40:25.391033  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:40:25.391170  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:40:25.547459  400041 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 18:40:25.547519  400041 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token t38xof.e3m90xf7qkzka9te --discovery-token-ca-cert-hash sha256:b2143b9018fc79be6f35b8981f64b262448bd09f7227405c8dafa29481e81491 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-174833-m02 --control-plane --apiserver-advertise-address=192.168.39.67 --apiserver-bind-port=8443"
	I1030 18:40:46.568187  400041 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token t38xof.e3m90xf7qkzka9te --discovery-token-ca-cert-hash sha256:b2143b9018fc79be6f35b8981f64b262448bd09f7227405c8dafa29481e81491 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-174833-m02 --control-plane --apiserver-advertise-address=192.168.39.67 --apiserver-bind-port=8443": (21.020635274s)
	I1030 18:40:46.568229  400041 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1030 18:40:47.028345  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-174833-m02 minikube.k8s.io/updated_at=2024_10_30T18_40_47_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0 minikube.k8s.io/name=ha-174833 minikube.k8s.io/primary=false
	I1030 18:40:47.150726  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-174833-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1030 18:40:47.264922  400041 start.go:319] duration metric: took 21.878113098s to joinCluster
	I1030 18:40:47.265016  400041 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 18:40:47.265346  400041 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:40:47.267451  400041 out.go:177] * Verifying Kubernetes components...
	I1030 18:40:47.268676  400041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 18:40:47.482830  400041 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 18:40:47.498911  400041 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 18:40:47.499271  400041 kapi.go:59] client config for ha-174833: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.crt", KeyFile:"/home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.key", CAFile:"/home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439fe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1030 18:40:47.499361  400041 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.141:8443
	I1030 18:40:47.499634  400041 node_ready.go:35] waiting up to 6m0s for node "ha-174833-m02" to be "Ready" ...
	I1030 18:40:47.499754  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:47.499765  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:47.499776  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:47.499780  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:47.513589  400041 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1030 18:40:48.000627  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:48.000717  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:48.000732  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:48.000739  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:48.005027  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:40:48.500527  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:48.500553  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:48.500562  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:48.500566  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:48.507486  400041 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1030 18:40:48.999957  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:48.999981  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:48.999992  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:48.999998  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:49.004072  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:40:49.500009  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:49.500034  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:49.500044  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:49.500049  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:49.503688  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:49.504299  400041 node_ready.go:53] node "ha-174833-m02" has status "Ready":"False"
	I1030 18:40:50.000762  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:50.000787  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:50.000798  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:50.000804  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:50.004710  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:50.500222  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:50.500249  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:50.500261  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:50.500268  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:50.503800  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:50.999915  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:50.999941  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:50.999949  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:50.999953  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:51.003089  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:51.500241  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:51.500270  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:51.500282  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:51.500288  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:51.503181  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:40:52.000665  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:52.000687  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:52.000696  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:52.000701  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:52.004020  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:52.004537  400041 node_ready.go:53] node "ha-174833-m02" has status "Ready":"False"
	I1030 18:40:52.500784  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:52.500807  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:52.500815  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:52.500820  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:52.503534  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:40:53.000339  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:53.000361  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:53.000372  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:53.000377  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:53.003704  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:53.500343  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:53.500365  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:53.500373  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:53.500378  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:53.503510  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:54.000354  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:54.000381  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:54.000395  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:54.000403  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:54.004115  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:54.004763  400041 node_ready.go:53] node "ha-174833-m02" has status "Ready":"False"
	I1030 18:40:54.500127  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:54.500152  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:54.500161  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:54.500166  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:54.503778  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:55.000747  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:55.000778  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:55.000791  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:55.000797  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:55.004570  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:55.500357  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:55.500405  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:55.500415  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:55.500420  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:55.504113  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:56.000848  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:56.000872  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:56.000890  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:56.000895  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:56.005204  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:40:56.006300  400041 node_ready.go:53] node "ha-174833-m02" has status "Ready":"False"
	I1030 18:40:56.500116  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:56.500139  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:56.500149  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:56.500156  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:56.503736  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:57.000020  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:57.000047  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:57.000059  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:57.000064  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:57.003517  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:57.500475  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:57.500507  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:57.500519  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:57.500528  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:57.504454  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:57.999844  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:57.999871  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:57.999880  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:57.999884  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:58.003233  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:58.500239  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:58.500265  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:58.500275  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:58.500280  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:58.503241  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:40:58.504056  400041 node_ready.go:53] node "ha-174833-m02" has status "Ready":"False"
	I1030 18:40:59.000302  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:59.000325  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:59.000335  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:59.000338  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:59.003378  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:59.500257  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:59.500293  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:59.500305  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:59.500311  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:59.503678  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:59.999943  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:59.999974  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:59.999984  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:59.999988  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:00.003694  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:00.499870  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:00.499894  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:00.499903  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:00.499906  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:00.503912  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:00.504852  400041 node_ready.go:53] node "ha-174833-m02" has status "Ready":"False"
	I1030 18:41:01.000256  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:01.000287  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:01.000303  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:01.000310  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:01.004687  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:41:01.500249  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:01.500275  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:01.500286  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:01.500292  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:01.503725  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:02.000125  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:02.000149  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:02.000159  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:02.000163  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:02.003110  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:41:02.500738  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:02.500764  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:02.500774  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:02.500779  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:02.504318  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:02.504919  400041 node_ready.go:53] node "ha-174833-m02" has status "Ready":"False"
	I1030 18:41:03.000323  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:03.000348  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:03.000361  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:03.000369  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:03.003869  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:03.500542  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:03.500568  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:03.500579  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:03.500585  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:03.503602  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:41:04.000594  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:04.000622  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:04.000633  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:04.000639  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:04.003714  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:04.500712  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:04.500736  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:04.500746  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:04.500752  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:04.503791  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:04.999910  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:04.999934  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:04.999943  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:04.999948  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:05.003533  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:05.004088  400041 node_ready.go:53] node "ha-174833-m02" has status "Ready":"False"
	I1030 18:41:05.500597  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:05.500622  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:05.500630  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:05.500639  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:05.503501  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:41:06.000616  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:06.000647  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:06.000659  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:06.000667  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:06.004719  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:41:06.500833  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:06.500855  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:06.500864  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:06.500868  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:06.504070  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:07.000429  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:07.000469  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:07.000481  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:07.000487  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:07.003689  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:07.004389  400041 node_ready.go:53] node "ha-174833-m02" has status "Ready":"False"
	I1030 18:41:07.500634  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:07.500659  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:07.500670  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:07.500676  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:07.503714  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:08.000797  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:08.000823  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.000835  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.000839  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.004162  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:08.500552  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:08.500576  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.500584  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.500588  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.503781  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:08.504368  400041 node_ready.go:49] node "ha-174833-m02" has status "Ready":"True"
	I1030 18:41:08.504387  400041 node_ready.go:38] duration metric: took 21.004733688s for node "ha-174833-m02" to be "Ready" ...
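
The block above is the client polling GET /api/v1/nodes/ha-174833-m02 roughly every 500ms until the node reports a Ready condition. A compact sketch of the same check against the raw API; the node name and the 500ms cadence come from the log, while reaching the API through `kubectl proxy` on 127.0.0.1:8001 (instead of the mutual-TLS client the log configures) is an assumption made to keep the example self-contained:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"net/http"
    	"time"
    )

    // nodeStatus mirrors just the fields needed to evaluate the Ready condition.
    type nodeStatus struct {
    	Status struct {
    		Conditions []struct {
    			Type   string `json:"type"`
    			Status string `json:"status"`
    		} `json:"conditions"`
    	} `json:"status"`
    }

    // nodeReady asks the API server whether the node's Ready condition is "True".
    func nodeReady(name string) (bool, error) {
    	resp, err := http.Get("http://127.0.0.1:8001/api/v1/nodes/" + name)
    	if err != nil {
    		return false, err
    	}
    	defer resp.Body.Close()
    	var n nodeStatus
    	if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
    		return false, err
    	}
    	for _, c := range n.Status.Conditions {
    		if c.Type == "Ready" {
    			return c.Status == "True", nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	// Same cadence as the log: re-check roughly every 500ms.
    	for {
    		ok, err := nodeReady("ha-174833-m02")
    		if err == nil && ok {
    			fmt.Println("node is Ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }
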
	I1030 18:41:08.504399  400041 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 18:41:08.504510  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods
	I1030 18:41:08.504522  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.504533  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.504540  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.508519  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:08.514243  400041 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qrkkc" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:08.514348  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-qrkkc
	I1030 18:41:08.514359  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.514370  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.514375  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.517179  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:41:08.518000  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:41:08.518014  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.518021  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.518026  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.520277  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:41:08.520732  400041 pod_ready.go:93] pod "coredns-7c65d6cfc9-qrkkc" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:08.520749  400041 pod_ready.go:82] duration metric: took 6.484522ms for pod "coredns-7c65d6cfc9-qrkkc" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:08.520758  400041 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tnj67" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:08.520818  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-tnj67
	I1030 18:41:08.520826  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.520832  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.520837  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.523187  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:41:08.523748  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:41:08.523763  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.523770  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.523773  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.525598  400041 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 18:41:08.526045  400041 pod_ready.go:93] pod "coredns-7c65d6cfc9-tnj67" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:08.526061  400041 pod_ready.go:82] duration metric: took 5.296844ms for pod "coredns-7c65d6cfc9-tnj67" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:08.526073  400041 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:08.526128  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174833
	I1030 18:41:08.526137  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.526147  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.526155  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.528137  400041 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 18:41:08.528632  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:41:08.528646  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.528653  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.528656  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.530536  400041 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 18:41:08.530970  400041 pod_ready.go:93] pod "etcd-ha-174833" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:08.530985  400041 pod_ready.go:82] duration metric: took 4.904104ms for pod "etcd-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:08.530995  400041 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:08.531044  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174833-m02
	I1030 18:41:08.531054  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.531063  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.531071  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.532895  400041 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 18:41:08.533572  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:08.533585  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.533592  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.533598  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.535476  400041 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 18:41:08.535920  400041 pod_ready.go:93] pod "etcd-ha-174833-m02" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:08.535936  400041 pod_ready.go:82] duration metric: took 4.934707ms for pod "etcd-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:08.535947  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:08.701353  400041 request.go:632] Waited for 165.322436ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174833
	I1030 18:41:08.701427  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174833
	I1030 18:41:08.701434  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.701445  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.701455  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.704722  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:08.900709  400041 request.go:632] Waited for 195.283762ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:41:08.900771  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:41:08.900777  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.900787  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.900793  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.903675  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:41:08.904204  400041 pod_ready.go:93] pod "kube-apiserver-ha-174833" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:08.904224  400041 pod_ready.go:82] duration metric: took 368.270404ms for pod "kube-apiserver-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:08.904235  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:09.101325  400041 request.go:632] Waited for 196.99596ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174833-m02
	I1030 18:41:09.101392  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174833-m02
	I1030 18:41:09.101397  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:09.101406  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:09.101414  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:09.104943  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:09.301209  400041 request.go:632] Waited for 195.378832ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:09.301280  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:09.301286  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:09.301294  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:09.301299  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:09.304703  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:09.305150  400041 pod_ready.go:93] pod "kube-apiserver-ha-174833-m02" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:09.305171  400041 pod_ready.go:82] duration metric: took 400.929601ms for pod "kube-apiserver-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:09.305183  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:09.501368  400041 request.go:632] Waited for 196.079315ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174833
	I1030 18:41:09.501455  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174833
	I1030 18:41:09.501468  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:09.501478  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:09.501486  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:09.505228  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:09.701240  400041 request.go:632] Waited for 195.369784ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:41:09.701317  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:41:09.701322  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:09.701331  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:09.701334  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:09.703994  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:41:09.704752  400041 pod_ready.go:93] pod "kube-controller-manager-ha-174833" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:09.704770  400041 pod_ready.go:82] duration metric: took 399.581191ms for pod "kube-controller-manager-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:09.704781  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:09.900901  400041 request.go:632] Waited for 196.026591ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174833-m02
	I1030 18:41:09.900964  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174833-m02
	I1030 18:41:09.900969  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:09.900978  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:09.900983  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:09.904074  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:10.101112  400041 request.go:632] Waited for 196.368613ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:10.101194  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:10.101205  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:10.101214  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:10.101226  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:10.104324  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:10.104744  400041 pod_ready.go:93] pod "kube-controller-manager-ha-174833-m02" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:10.104763  400041 pod_ready.go:82] duration metric: took 399.976925ms for pod "kube-controller-manager-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:10.104774  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2qt2n" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:10.300860  400041 request.go:632] Waited for 196.007769ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2qt2n
	I1030 18:41:10.300943  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2qt2n
	I1030 18:41:10.300949  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:10.300957  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:10.300968  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:10.304042  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:10.501291  400041 request.go:632] Waited for 196.406771ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:41:10.501358  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:41:10.501363  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:10.501372  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:10.501378  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:10.504471  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:10.504946  400041 pod_ready.go:93] pod "kube-proxy-2qt2n" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:10.504966  400041 pod_ready.go:82] duration metric: took 400.186291ms for pod "kube-proxy-2qt2n" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:10.504985  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hg2st" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:10.701128  400041 request.go:632] Waited for 196.042962ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hg2st
	I1030 18:41:10.701198  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hg2st
	I1030 18:41:10.701203  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:10.701211  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:10.701218  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:10.704595  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:10.900756  400041 request.go:632] Waited for 195.290492ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:10.900855  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:10.900861  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:10.900869  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:10.900878  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:10.904332  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:10.904829  400041 pod_ready.go:93] pod "kube-proxy-hg2st" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:10.904849  400041 pod_ready.go:82] duration metric: took 399.858433ms for pod "kube-proxy-hg2st" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:10.904860  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:11.101047  400041 request.go:632] Waited for 196.091867ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174833
	I1030 18:41:11.101112  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174833
	I1030 18:41:11.101117  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:11.101125  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:11.101130  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:11.104800  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:11.300654  400041 request.go:632] Waited for 195.298322ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:41:11.300720  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:41:11.300731  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:11.300740  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:11.300743  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:11.304342  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:11.304796  400041 pod_ready.go:93] pod "kube-scheduler-ha-174833" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:11.304815  400041 pod_ready.go:82] duration metric: took 399.947891ms for pod "kube-scheduler-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:11.304826  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:11.500975  400041 request.go:632] Waited for 196.04993ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174833-m02
	I1030 18:41:11.501040  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174833-m02
	I1030 18:41:11.501045  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:11.501052  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:11.501057  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:11.504438  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:11.701379  400041 request.go:632] Waited for 196.340488ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:11.701443  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:11.701449  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:11.701457  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:11.701462  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:11.704386  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:41:11.704831  400041 pod_ready.go:93] pod "kube-scheduler-ha-174833-m02" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:11.704850  400041 pod_ready.go:82] duration metric: took 400.015715ms for pod "kube-scheduler-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:11.704863  400041 pod_ready.go:39] duration metric: took 3.200450336s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
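The pod_ready loop above polls each system-critical pod and its node through the apiserver while honoring client-side throttling. A rough manual equivalent, assuming the kubectl context is named after the ha-174833 profile (an assumption; the log does not show the context name), would be:

    kubectl --context ha-174833 -n kube-system wait --for=condition=Ready \
      pod/etcd-ha-174833 pod/etcd-ha-174833-m02 \
      pod/kube-apiserver-ha-174833 pod/kube-apiserver-ha-174833-m02 \
      --timeout=6m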
	I1030 18:41:11.704882  400041 api_server.go:52] waiting for apiserver process to appear ...
	I1030 18:41:11.704944  400041 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 18:41:11.723542  400041 api_server.go:72] duration metric: took 24.458488953s to wait for apiserver process to appear ...
	I1030 18:41:11.723564  400041 api_server.go:88] waiting for apiserver healthz status ...
	I1030 18:41:11.723583  400041 api_server.go:253] Checking apiserver healthz at https://192.168.39.141:8443/healthz ...
	I1030 18:41:11.729129  400041 api_server.go:279] https://192.168.39.141:8443/healthz returned 200:
	ok
	I1030 18:41:11.729191  400041 round_trippers.go:463] GET https://192.168.39.141:8443/version
	I1030 18:41:11.729199  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:11.729206  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:11.729213  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:11.729902  400041 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1030 18:41:11.729987  400041 api_server.go:141] control plane version: v1.31.2
	I1030 18:41:11.730004  400041 api_server.go:131] duration metric: took 6.434971ms to wait for apiserver health ...
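The health and version probes above are plain GETs against the apiserver endpoint recorded in the log. A minimal sketch with curl, assuming the default RBAC binding that exposes /healthz and /version to unauthenticated clients is still in place:

    curl -k https://192.168.39.141:8443/healthz   # expected to print "ok", matching the 200 above
    curl -k https://192.168.39.141:8443/version   # control plane reports v1.31.2 here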
	I1030 18:41:11.730015  400041 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 18:41:11.901454  400041 request.go:632] Waited for 171.341792ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods
	I1030 18:41:11.901536  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods
	I1030 18:41:11.901542  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:11.901550  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:11.901554  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:11.906457  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:41:11.911360  400041 system_pods.go:59] 17 kube-system pods found
	I1030 18:41:11.911389  400041 system_pods.go:61] "coredns-7c65d6cfc9-qrkkc" [3470734c-61ab-4cd9-a026-f07d5ca6a290] Running
	I1030 18:41:11.911396  400041 system_pods.go:61] "coredns-7c65d6cfc9-tnj67" [f869042d-e37b-414d-ab11-422e933e2952] Running
	I1030 18:41:11.911402  400041 system_pods.go:61] "etcd-ha-174833" [af22e3f8-298d-4ab8-ac7d-cf6b915858ce] Running
	I1030 18:41:11.911408  400041 system_pods.go:61] "etcd-ha-174833-m02" [d5fdc5b9-47c3-4308-8b03-e01254a915cd] Running
	I1030 18:41:11.911413  400041 system_pods.go:61] "kindnet-pm48g" [7c2c95da-7659-43c4-aae2-3af6fbe9515c] Running
	I1030 18:41:11.911418  400041 system_pods.go:61] "kindnet-rlzbn" [74a207e7-cdd5-4c43-a668-9a6445d0bacf] Running
	I1030 18:41:11.911424  400041 system_pods.go:61] "kube-apiserver-ha-174833" [e1f4e473-d722-4cbc-a8ae-e80612823895] Running
	I1030 18:41:11.911432  400041 system_pods.go:61] "kube-apiserver-ha-174833-m02" [d2f5f227-a891-4678-b8e8-a51051bc80ac] Running
	I1030 18:41:11.911437  400041 system_pods.go:61] "kube-controller-manager-ha-174833" [b95baef4-2d97-4714-903f-8a63c8f1f647] Running
	I1030 18:41:11.911440  400041 system_pods.go:61] "kube-controller-manager-ha-174833-m02" [5293b489-8a8c-4475-b549-d10e1a631aa6] Running
	I1030 18:41:11.911444  400041 system_pods.go:61] "kube-proxy-2qt2n" [fcc90b13-926f-4a7e-aa12-3e274c0777f6] Running
	I1030 18:41:11.911447  400041 system_pods.go:61] "kube-proxy-hg2st" [99ac908a-6d13-4471-a54b-bede7bde77b7] Running
	I1030 18:41:11.911452  400041 system_pods.go:61] "kube-scheduler-ha-174833" [d0d4afaa-39ee-4b4d-afdb-3a41effecd4c] Running
	I1030 18:41:11.911458  400041 system_pods.go:61] "kube-scheduler-ha-174833-m02" [f0cd9106-8c0d-4c16-a83f-3d5e84d7d813] Running
	I1030 18:41:11.911461  400041 system_pods.go:61] "kube-vip-ha-174833" [c4903552-8a15-4dc2-ba98-c3d10b8f3288] Running
	I1030 18:41:11.911464  400041 system_pods.go:61] "kube-vip-ha-174833-m02" [9d43181d-55d3-482c-bd30-df3059f68edd] Running
	I1030 18:41:11.911467  400041 system_pods.go:61] "storage-provisioner" [8e6d1d5e-1944-4c77-8018-42bc526984c2] Running
	I1030 18:41:11.911474  400041 system_pods.go:74] duration metric: took 181.449525ms to wait for pod list to return data ...
	I1030 18:41:11.911484  400041 default_sa.go:34] waiting for default service account to be created ...
	I1030 18:41:12.100968  400041 request.go:632] Waited for 189.365167ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/default/serviceaccounts
	I1030 18:41:12.101033  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/default/serviceaccounts
	I1030 18:41:12.101038  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:12.101046  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:12.101054  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:12.104878  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:12.105115  400041 default_sa.go:45] found service account: "default"
	I1030 18:41:12.105131  400041 default_sa.go:55] duration metric: took 193.641266ms for default service account to be created ...
	I1030 18:41:12.105141  400041 system_pods.go:116] waiting for k8s-apps to be running ...
	I1030 18:41:12.301355  400041 request.go:632] Waited for 196.109942ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods
	I1030 18:41:12.301420  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods
	I1030 18:41:12.301425  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:12.301433  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:12.301438  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:12.306382  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:41:12.311406  400041 system_pods.go:86] 17 kube-system pods found
	I1030 18:41:12.311437  400041 system_pods.go:89] "coredns-7c65d6cfc9-qrkkc" [3470734c-61ab-4cd9-a026-f07d5ca6a290] Running
	I1030 18:41:12.311446  400041 system_pods.go:89] "coredns-7c65d6cfc9-tnj67" [f869042d-e37b-414d-ab11-422e933e2952] Running
	I1030 18:41:12.311454  400041 system_pods.go:89] "etcd-ha-174833" [af22e3f8-298d-4ab8-ac7d-cf6b915858ce] Running
	I1030 18:41:12.311460  400041 system_pods.go:89] "etcd-ha-174833-m02" [d5fdc5b9-47c3-4308-8b03-e01254a915cd] Running
	I1030 18:41:12.311465  400041 system_pods.go:89] "kindnet-pm48g" [7c2c95da-7659-43c4-aae2-3af6fbe9515c] Running
	I1030 18:41:12.311471  400041 system_pods.go:89] "kindnet-rlzbn" [74a207e7-cdd5-4c43-a668-9a6445d0bacf] Running
	I1030 18:41:12.311477  400041 system_pods.go:89] "kube-apiserver-ha-174833" [e1f4e473-d722-4cbc-a8ae-e80612823895] Running
	I1030 18:41:12.311486  400041 system_pods.go:89] "kube-apiserver-ha-174833-m02" [d2f5f227-a891-4678-b8e8-a51051bc80ac] Running
	I1030 18:41:12.311492  400041 system_pods.go:89] "kube-controller-manager-ha-174833" [b95baef4-2d97-4714-903f-8a63c8f1f647] Running
	I1030 18:41:12.311502  400041 system_pods.go:89] "kube-controller-manager-ha-174833-m02" [5293b489-8a8c-4475-b549-d10e1a631aa6] Running
	I1030 18:41:12.311509  400041 system_pods.go:89] "kube-proxy-2qt2n" [fcc90b13-926f-4a7e-aa12-3e274c0777f6] Running
	I1030 18:41:12.311517  400041 system_pods.go:89] "kube-proxy-hg2st" [99ac908a-6d13-4471-a54b-bede7bde77b7] Running
	I1030 18:41:12.311525  400041 system_pods.go:89] "kube-scheduler-ha-174833" [d0d4afaa-39ee-4b4d-afdb-3a41effecd4c] Running
	I1030 18:41:12.311531  400041 system_pods.go:89] "kube-scheduler-ha-174833-m02" [f0cd9106-8c0d-4c16-a83f-3d5e84d7d813] Running
	I1030 18:41:12.311540  400041 system_pods.go:89] "kube-vip-ha-174833" [c4903552-8a15-4dc2-ba98-c3d10b8f3288] Running
	I1030 18:41:12.311546  400041 system_pods.go:89] "kube-vip-ha-174833-m02" [9d43181d-55d3-482c-bd30-df3059f68edd] Running
	I1030 18:41:12.311554  400041 system_pods.go:89] "storage-provisioner" [8e6d1d5e-1944-4c77-8018-42bc526984c2] Running
	I1030 18:41:12.311563  400041 system_pods.go:126] duration metric: took 206.414957ms to wait for k8s-apps to be running ...
	I1030 18:41:12.311574  400041 system_svc.go:44] waiting for kubelet service to be running ....
	I1030 18:41:12.311636  400041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 18:41:12.327021  400041 system_svc.go:56] duration metric: took 15.42192ms WaitForService to wait for kubelet
	I1030 18:41:12.327057  400041 kubeadm.go:582] duration metric: took 25.062007913s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 18:41:12.327076  400041 node_conditions.go:102] verifying NodePressure condition ...
	I1030 18:41:12.501567  400041 request.go:632] Waited for 174.380598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes
	I1030 18:41:12.501632  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes
	I1030 18:41:12.501638  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:12.501647  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:12.501651  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:12.505969  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:41:12.506702  400041 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 18:41:12.506731  400041 node_conditions.go:123] node cpu capacity is 2
	I1030 18:41:12.506744  400041 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 18:41:12.506747  400041 node_conditions.go:123] node cpu capacity is 2
	I1030 18:41:12.506751  400041 node_conditions.go:105] duration metric: took 179.67107ms to run NodePressure ...
	I1030 18:41:12.506763  400041 start.go:241] waiting for startup goroutines ...
	I1030 18:41:12.506788  400041 start.go:255] writing updated cluster config ...
	I1030 18:41:12.509015  400041 out.go:201] 
	I1030 18:41:12.510595  400041 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:41:12.510702  400041 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/config.json ...
	I1030 18:41:12.512413  400041 out.go:177] * Starting "ha-174833-m03" control-plane node in "ha-174833" cluster
	I1030 18:41:12.513538  400041 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 18:41:12.513560  400041 cache.go:56] Caching tarball of preloaded images
	I1030 18:41:12.513661  400041 preload.go:172] Found /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1030 18:41:12.513676  400041 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1030 18:41:12.513774  400041 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/config.json ...
	I1030 18:41:12.513991  400041 start.go:360] acquireMachinesLock for ha-174833-m03: {Name:mk35e25f53fa8cfadb39ca0ecdccfc2b3fbe845b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 18:41:12.514046  400041 start.go:364] duration metric: took 32.901µs to acquireMachinesLock for "ha-174833-m03"
	I1030 18:41:12.514072  400041 start.go:93] Provisioning new machine with config: &{Name:ha-174833 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-174833 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 18:41:12.514208  400041 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1030 18:41:12.515720  400041 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1030 18:41:12.515810  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:41:12.515845  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:41:12.531298  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38659
	I1030 18:41:12.531779  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:41:12.532302  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:41:12.532328  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:41:12.532695  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:41:12.532932  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetMachineName
	I1030 18:41:12.533094  400041 main.go:141] libmachine: (ha-174833-m03) Calling .DriverName
	I1030 18:41:12.533248  400041 start.go:159] libmachine.API.Create for "ha-174833" (driver="kvm2")
	I1030 18:41:12.533281  400041 client.go:168] LocalClient.Create starting
	I1030 18:41:12.533344  400041 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem
	I1030 18:41:12.533389  400041 main.go:141] libmachine: Decoding PEM data...
	I1030 18:41:12.533410  400041 main.go:141] libmachine: Parsing certificate...
	I1030 18:41:12.533483  400041 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem
	I1030 18:41:12.533512  400041 main.go:141] libmachine: Decoding PEM data...
	I1030 18:41:12.533529  400041 main.go:141] libmachine: Parsing certificate...
	I1030 18:41:12.533556  400041 main.go:141] libmachine: Running pre-create checks...
	I1030 18:41:12.533582  400041 main.go:141] libmachine: (ha-174833-m03) Calling .PreCreateCheck
	I1030 18:41:12.533754  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetConfigRaw
	I1030 18:41:12.534141  400041 main.go:141] libmachine: Creating machine...
	I1030 18:41:12.534155  400041 main.go:141] libmachine: (ha-174833-m03) Calling .Create
	I1030 18:41:12.534316  400041 main.go:141] libmachine: (ha-174833-m03) Creating KVM machine...
	I1030 18:41:12.535469  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found existing default KVM network
	I1030 18:41:12.535689  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found existing private KVM network mk-ha-174833
	I1030 18:41:12.535839  400041 main.go:141] libmachine: (ha-174833-m03) Setting up store path in /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03 ...
	I1030 18:41:12.535890  400041 main.go:141] libmachine: (ha-174833-m03) Building disk image from file:///home/jenkins/minikube-integration/19883-381834/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1030 18:41:12.535946  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:12.535806  400817 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 18:41:12.536022  400041 main.go:141] libmachine: (ha-174833-m03) Downloading /home/jenkins/minikube-integration/19883-381834/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19883-381834/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1030 18:41:12.821754  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:12.821614  400817 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/id_rsa...
	I1030 18:41:12.940970  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:12.940841  400817 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/ha-174833-m03.rawdisk...
	I1030 18:41:12.941002  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Writing magic tar header
	I1030 18:41:12.941016  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Writing SSH key tar header
	I1030 18:41:12.941027  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:12.940965  400817 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03 ...
	I1030 18:41:12.941045  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03
	I1030 18:41:12.941128  400041 main.go:141] libmachine: (ha-174833-m03) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03 (perms=drwx------)
	I1030 18:41:12.941149  400041 main.go:141] libmachine: (ha-174833-m03) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube/machines (perms=drwxr-xr-x)
	I1030 18:41:12.941160  400041 main.go:141] libmachine: (ha-174833-m03) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube (perms=drwxr-xr-x)
	I1030 18:41:12.941183  400041 main.go:141] libmachine: (ha-174833-m03) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834 (perms=drwxrwxr-x)
	I1030 18:41:12.941197  400041 main.go:141] libmachine: (ha-174833-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1030 18:41:12.941212  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube/machines
	I1030 18:41:12.941227  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 18:41:12.941239  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834
	I1030 18:41:12.941248  400041 main.go:141] libmachine: (ha-174833-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1030 18:41:12.941259  400041 main.go:141] libmachine: (ha-174833-m03) Creating domain...
	I1030 18:41:12.941276  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1030 18:41:12.941291  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Checking permissions on dir: /home/jenkins
	I1030 18:41:12.941301  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Checking permissions on dir: /home
	I1030 18:41:12.941315  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Skipping /home - not owner
	I1030 18:41:12.942234  400041 main.go:141] libmachine: (ha-174833-m03) define libvirt domain using xml: 
	I1030 18:41:12.942260  400041 main.go:141] libmachine: (ha-174833-m03) <domain type='kvm'>
	I1030 18:41:12.942270  400041 main.go:141] libmachine: (ha-174833-m03)   <name>ha-174833-m03</name>
	I1030 18:41:12.942277  400041 main.go:141] libmachine: (ha-174833-m03)   <memory unit='MiB'>2200</memory>
	I1030 18:41:12.942286  400041 main.go:141] libmachine: (ha-174833-m03)   <vcpu>2</vcpu>
	I1030 18:41:12.942296  400041 main.go:141] libmachine: (ha-174833-m03)   <features>
	I1030 18:41:12.942305  400041 main.go:141] libmachine: (ha-174833-m03)     <acpi/>
	I1030 18:41:12.942315  400041 main.go:141] libmachine: (ha-174833-m03)     <apic/>
	I1030 18:41:12.942326  400041 main.go:141] libmachine: (ha-174833-m03)     <pae/>
	I1030 18:41:12.942335  400041 main.go:141] libmachine: (ha-174833-m03)     
	I1030 18:41:12.942346  400041 main.go:141] libmachine: (ha-174833-m03)   </features>
	I1030 18:41:12.942353  400041 main.go:141] libmachine: (ha-174833-m03)   <cpu mode='host-passthrough'>
	I1030 18:41:12.942387  400041 main.go:141] libmachine: (ha-174833-m03)   
	I1030 18:41:12.942411  400041 main.go:141] libmachine: (ha-174833-m03)   </cpu>
	I1030 18:41:12.942424  400041 main.go:141] libmachine: (ha-174833-m03)   <os>
	I1030 18:41:12.942433  400041 main.go:141] libmachine: (ha-174833-m03)     <type>hvm</type>
	I1030 18:41:12.942446  400041 main.go:141] libmachine: (ha-174833-m03)     <boot dev='cdrom'/>
	I1030 18:41:12.942456  400041 main.go:141] libmachine: (ha-174833-m03)     <boot dev='hd'/>
	I1030 18:41:12.942469  400041 main.go:141] libmachine: (ha-174833-m03)     <bootmenu enable='no'/>
	I1030 18:41:12.942502  400041 main.go:141] libmachine: (ha-174833-m03)   </os>
	I1030 18:41:12.942521  400041 main.go:141] libmachine: (ha-174833-m03)   <devices>
	I1030 18:41:12.942532  400041 main.go:141] libmachine: (ha-174833-m03)     <disk type='file' device='cdrom'>
	I1030 18:41:12.942543  400041 main.go:141] libmachine: (ha-174833-m03)       <source file='/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/boot2docker.iso'/>
	I1030 18:41:12.942552  400041 main.go:141] libmachine: (ha-174833-m03)       <target dev='hdc' bus='scsi'/>
	I1030 18:41:12.942561  400041 main.go:141] libmachine: (ha-174833-m03)       <readonly/>
	I1030 18:41:12.942566  400041 main.go:141] libmachine: (ha-174833-m03)     </disk>
	I1030 18:41:12.942574  400041 main.go:141] libmachine: (ha-174833-m03)     <disk type='file' device='disk'>
	I1030 18:41:12.942581  400041 main.go:141] libmachine: (ha-174833-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1030 18:41:12.942587  400041 main.go:141] libmachine: (ha-174833-m03)       <source file='/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/ha-174833-m03.rawdisk'/>
	I1030 18:41:12.942606  400041 main.go:141] libmachine: (ha-174833-m03)       <target dev='hda' bus='virtio'/>
	I1030 18:41:12.942619  400041 main.go:141] libmachine: (ha-174833-m03)     </disk>
	I1030 18:41:12.942627  400041 main.go:141] libmachine: (ha-174833-m03)     <interface type='network'>
	I1030 18:41:12.942635  400041 main.go:141] libmachine: (ha-174833-m03)       <source network='mk-ha-174833'/>
	I1030 18:41:12.942648  400041 main.go:141] libmachine: (ha-174833-m03)       <model type='virtio'/>
	I1030 18:41:12.942658  400041 main.go:141] libmachine: (ha-174833-m03)     </interface>
	I1030 18:41:12.942670  400041 main.go:141] libmachine: (ha-174833-m03)     <interface type='network'>
	I1030 18:41:12.942697  400041 main.go:141] libmachine: (ha-174833-m03)       <source network='default'/>
	I1030 18:41:12.942736  400041 main.go:141] libmachine: (ha-174833-m03)       <model type='virtio'/>
	I1030 18:41:12.942764  400041 main.go:141] libmachine: (ha-174833-m03)     </interface>
	I1030 18:41:12.942779  400041 main.go:141] libmachine: (ha-174833-m03)     <serial type='pty'>
	I1030 18:41:12.942790  400041 main.go:141] libmachine: (ha-174833-m03)       <target port='0'/>
	I1030 18:41:12.942802  400041 main.go:141] libmachine: (ha-174833-m03)     </serial>
	I1030 18:41:12.942812  400041 main.go:141] libmachine: (ha-174833-m03)     <console type='pty'>
	I1030 18:41:12.942823  400041 main.go:141] libmachine: (ha-174833-m03)       <target type='serial' port='0'/>
	I1030 18:41:12.942832  400041 main.go:141] libmachine: (ha-174833-m03)     </console>
	I1030 18:41:12.942841  400041 main.go:141] libmachine: (ha-174833-m03)     <rng model='virtio'>
	I1030 18:41:12.942852  400041 main.go:141] libmachine: (ha-174833-m03)       <backend model='random'>/dev/random</backend>
	I1030 18:41:12.942885  400041 main.go:141] libmachine: (ha-174833-m03)     </rng>
	I1030 18:41:12.942907  400041 main.go:141] libmachine: (ha-174833-m03)     
	I1030 18:41:12.942929  400041 main.go:141] libmachine: (ha-174833-m03)     
	I1030 18:41:12.942938  400041 main.go:141] libmachine: (ha-174833-m03)   </devices>
	I1030 18:41:12.942946  400041 main.go:141] libmachine: (ha-174833-m03) </domain>
	I1030 18:41:12.942957  400041 main.go:141] libmachine: (ha-174833-m03) 
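The XML logged above is the domain definition libmachine hands to libvirt for ha-174833-m03. Assuming the host's system libvirt URI (qemu:///system, per the profile config) is reachable, the resulting domain and its DHCP lease on the mk-ha-174833 network could be inspected with virsh:

    virsh -c qemu:///system dumpxml ha-174833-m03
    virsh -c qemu:///system net-dhcp-leases mk-ha-174833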
	I1030 18:41:12.949898  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:1a:b3:c5 in network default
	I1030 18:41:12.950445  400041 main.go:141] libmachine: (ha-174833-m03) Ensuring networks are active...
	I1030 18:41:12.950469  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:12.951138  400041 main.go:141] libmachine: (ha-174833-m03) Ensuring network default is active
	I1030 18:41:12.951462  400041 main.go:141] libmachine: (ha-174833-m03) Ensuring network mk-ha-174833 is active
	I1030 18:41:12.951841  400041 main.go:141] libmachine: (ha-174833-m03) Getting domain xml...
	I1030 18:41:12.952538  400041 main.go:141] libmachine: (ha-174833-m03) Creating domain...
	I1030 18:41:14.179359  400041 main.go:141] libmachine: (ha-174833-m03) Waiting to get IP...
	I1030 18:41:14.180307  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:14.180744  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:14.180812  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:14.180741  400817 retry.go:31] will retry after 293.822494ms: waiting for machine to come up
	I1030 18:41:14.476270  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:14.476758  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:14.476784  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:14.476703  400817 retry.go:31] will retry after 283.345671ms: waiting for machine to come up
	I1030 18:41:14.761301  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:14.761803  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:14.761833  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:14.761750  400817 retry.go:31] will retry after 299.766753ms: waiting for machine to come up
	I1030 18:41:15.063146  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:15.063613  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:15.063642  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:15.063557  400817 retry.go:31] will retry after 490.461635ms: waiting for machine to come up
	I1030 18:41:15.557014  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:15.557549  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:15.557577  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:15.557492  400817 retry.go:31] will retry after 739.117277ms: waiting for machine to come up
	I1030 18:41:16.298461  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:16.298926  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:16.298956  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:16.298870  400817 retry.go:31] will retry after 666.546188ms: waiting for machine to come up
	I1030 18:41:16.966687  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:16.967172  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:16.967200  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:16.967117  400817 retry.go:31] will retry after 846.088379ms: waiting for machine to come up
	I1030 18:41:17.814898  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:17.815410  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:17.815440  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:17.815362  400817 retry.go:31] will retry after 1.085711576s: waiting for machine to come up
	I1030 18:41:18.902574  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:18.902922  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:18.902952  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:18.902876  400817 retry.go:31] will retry after 1.834126575s: waiting for machine to come up
	I1030 18:41:20.739528  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:20.739890  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:20.739919  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:20.739850  400817 retry.go:31] will retry after 2.105862328s: waiting for machine to come up
	I1030 18:41:22.847426  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:22.847835  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:22.847867  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:22.847766  400817 retry.go:31] will retry after 2.441796021s: waiting for machine to come up
	I1030 18:41:25.291422  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:25.291864  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:25.291888  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:25.291812  400817 retry.go:31] will retry after 2.18908754s: waiting for machine to come up
	I1030 18:41:27.484272  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:27.484720  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:27.484740  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:27.484674  400817 retry.go:31] will retry after 3.249594938s: waiting for machine to come up
	I1030 18:41:30.735386  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:30.735687  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:30.735711  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:30.735669  400817 retry.go:31] will retry after 5.542117345s: waiting for machine to come up
	I1030 18:41:36.279557  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:36.279987  400041 main.go:141] libmachine: (ha-174833-m03) Found IP for machine: 192.168.39.238
	I1030 18:41:36.280005  400041 main.go:141] libmachine: (ha-174833-m03) Reserving static IP address...
	I1030 18:41:36.280019  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has current primary IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:36.280379  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find host DHCP lease matching {name: "ha-174833-m03", mac: "52:54:00:76:9d:ad", ip: "192.168.39.238"} in network mk-ha-174833
	I1030 18:41:36.353555  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Getting to WaitForSSH function...
	I1030 18:41:36.353581  400041 main.go:141] libmachine: (ha-174833-m03) Reserved static IP address: 192.168.39.238
	I1030 18:41:36.353628  400041 main.go:141] libmachine: (ha-174833-m03) Waiting for SSH to be available...
	I1030 18:41:36.356187  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:36.356543  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833
	I1030 18:41:36.356569  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find defined IP address of network mk-ha-174833 interface with MAC address 52:54:00:76:9d:ad
	I1030 18:41:36.356719  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Using SSH client type: external
	I1030 18:41:36.356745  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/id_rsa (-rw-------)
	I1030 18:41:36.356795  400041 main.go:141] libmachine: (ha-174833-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 18:41:36.356814  400041 main.go:141] libmachine: (ha-174833-m03) DBG | About to run SSH command:
	I1030 18:41:36.356847  400041 main.go:141] libmachine: (ha-174833-m03) DBG | exit 0
	I1030 18:41:36.360778  400041 main.go:141] libmachine: (ha-174833-m03) DBG | SSH cmd err, output: exit status 255: 
	I1030 18:41:36.360804  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1030 18:41:36.360814  400041 main.go:141] libmachine: (ha-174833-m03) DBG | command : exit 0
	I1030 18:41:36.360821  400041 main.go:141] libmachine: (ha-174833-m03) DBG | err     : exit status 255
	I1030 18:41:36.360832  400041 main.go:141] libmachine: (ha-174833-m03) DBG | output  : 
	I1030 18:41:39.361300  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Getting to WaitForSSH function...
	I1030 18:41:39.363671  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.364021  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:39.364051  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.364131  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Using SSH client type: external
	I1030 18:41:39.364170  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/id_rsa (-rw-------)
	I1030 18:41:39.364209  400041 main.go:141] libmachine: (ha-174833-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 18:41:39.364227  400041 main.go:141] libmachine: (ha-174833-m03) DBG | About to run SSH command:
	I1030 18:41:39.364236  400041 main.go:141] libmachine: (ha-174833-m03) DBG | exit 0
	I1030 18:41:39.498991  400041 main.go:141] libmachine: (ha-174833-m03) DBG | SSH cmd err, output: <nil>: 
	I1030 18:41:39.499302  400041 main.go:141] libmachine: (ha-174833-m03) KVM machine creation complete!
	I1030 18:41:39.499653  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetConfigRaw
	I1030 18:41:39.500359  400041 main.go:141] libmachine: (ha-174833-m03) Calling .DriverName
	I1030 18:41:39.500567  400041 main.go:141] libmachine: (ha-174833-m03) Calling .DriverName
	I1030 18:41:39.500834  400041 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1030 18:41:39.500852  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetState
	I1030 18:41:39.502063  400041 main.go:141] libmachine: Detecting operating system of created instance...
	I1030 18:41:39.502076  400041 main.go:141] libmachine: Waiting for SSH to be available...
	I1030 18:41:39.502081  400041 main.go:141] libmachine: Getting to WaitForSSH function...
	I1030 18:41:39.502086  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHHostname
	I1030 18:41:39.504584  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.504838  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:39.504860  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.505021  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHPort
	I1030 18:41:39.505207  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:39.505352  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:39.505493  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHUsername
	I1030 18:41:39.505642  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:41:39.505855  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1030 18:41:39.505867  400041 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1030 18:41:39.613705  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 18:41:39.613730  400041 main.go:141] libmachine: Detecting the provisioner...
	I1030 18:41:39.613737  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHHostname
	I1030 18:41:39.616442  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.616787  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:39.616812  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.616966  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHPort
	I1030 18:41:39.617171  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:39.617381  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:39.617494  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHUsername
	I1030 18:41:39.617635  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:41:39.617821  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1030 18:41:39.617831  400041 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1030 18:41:39.731009  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1030 18:41:39.731096  400041 main.go:141] libmachine: found compatible host: buildroot
	I1030 18:41:39.731110  400041 main.go:141] libmachine: Provisioning with buildroot...
	I1030 18:41:39.731120  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetMachineName
	I1030 18:41:39.731355  400041 buildroot.go:166] provisioning hostname "ha-174833-m03"
	I1030 18:41:39.731385  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetMachineName
	I1030 18:41:39.731563  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHHostname
	I1030 18:41:39.734727  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.735195  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:39.735225  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.735395  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHPort
	I1030 18:41:39.735599  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:39.735773  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:39.735975  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHUsername
	I1030 18:41:39.736185  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:41:39.736419  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1030 18:41:39.736443  400041 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-174833-m03 && echo "ha-174833-m03" | sudo tee /etc/hostname
	I1030 18:41:39.865251  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-174833-m03
	
	I1030 18:41:39.865295  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHHostname
	I1030 18:41:39.868277  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.868776  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:39.868811  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.868979  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHPort
	I1030 18:41:39.869210  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:39.869426  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:39.869574  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHUsername
	I1030 18:41:39.869780  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:41:39.870007  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1030 18:41:39.870023  400041 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-174833-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-174833-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-174833-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 18:41:39.993047  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 18:41:39.993077  400041 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 18:41:39.993099  400041 buildroot.go:174] setting up certificates
	I1030 18:41:39.993114  400041 provision.go:84] configureAuth start
	I1030 18:41:39.993127  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetMachineName
	I1030 18:41:39.993439  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetIP
	I1030 18:41:39.996433  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.996840  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:39.996869  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.997060  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHHostname
	I1030 18:41:40.000005  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.000422  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:40.000450  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.000565  400041 provision.go:143] copyHostCerts
	I1030 18:41:40.000594  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 18:41:40.000629  400041 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem, removing ...
	I1030 18:41:40.000638  400041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 18:41:40.000698  400041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 18:41:40.000806  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 18:41:40.000825  400041 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem, removing ...
	I1030 18:41:40.000831  400041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 18:41:40.000854  400041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 18:41:40.000910  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 18:41:40.000926  400041 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem, removing ...
	I1030 18:41:40.000932  400041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 18:41:40.000953  400041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 18:41:40.001003  400041 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.ha-174833-m03 san=[127.0.0.1 192.168.39.238 ha-174833-m03 localhost minikube]
	I1030 18:41:40.389110  400041 provision.go:177] copyRemoteCerts
	I1030 18:41:40.389174  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 18:41:40.389201  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHHostname
	I1030 18:41:40.391720  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.392157  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:40.392191  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.392466  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHPort
	I1030 18:41:40.392672  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:40.392854  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHUsername
	I1030 18:41:40.393003  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/id_rsa Username:docker}
	I1030 18:41:40.485464  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1030 18:41:40.485543  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 18:41:40.513241  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1030 18:41:40.513314  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1030 18:41:40.537145  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1030 18:41:40.537239  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1030 18:41:40.562099  400041 provision.go:87] duration metric: took 568.966283ms to configureAuth
	I1030 18:41:40.562136  400041 buildroot.go:189] setting minikube options for container-runtime
	I1030 18:41:40.562357  400041 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:41:40.562450  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHHostname
	I1030 18:41:40.565158  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.565531  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:40.565563  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.565700  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHPort
	I1030 18:41:40.565906  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:40.566083  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:40.566192  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHUsername
	I1030 18:41:40.566349  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:41:40.566539  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1030 18:41:40.566554  400041 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 18:41:40.803791  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 18:41:40.803826  400041 main.go:141] libmachine: Checking connection to Docker...
	I1030 18:41:40.803835  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetURL
	I1030 18:41:40.805073  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Using libvirt version 6000000
	I1030 18:41:40.807111  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.807563  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:40.807592  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.807738  400041 main.go:141] libmachine: Docker is up and running!
	I1030 18:41:40.807756  400041 main.go:141] libmachine: Reticulating splines...
	I1030 18:41:40.807765  400041 client.go:171] duration metric: took 28.27447273s to LocalClient.Create
	I1030 18:41:40.807794  400041 start.go:167] duration metric: took 28.274545509s to libmachine.API.Create "ha-174833"
	I1030 18:41:40.807813  400041 start.go:293] postStartSetup for "ha-174833-m03" (driver="kvm2")
	I1030 18:41:40.807829  400041 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 18:41:40.807854  400041 main.go:141] libmachine: (ha-174833-m03) Calling .DriverName
	I1030 18:41:40.808083  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 18:41:40.808112  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHHostname
	I1030 18:41:40.810446  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.810781  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:40.810810  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.810951  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHPort
	I1030 18:41:40.811117  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:40.811251  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHUsername
	I1030 18:41:40.811374  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/id_rsa Username:docker}
	I1030 18:41:40.898250  400041 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 18:41:40.902639  400041 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 18:41:40.902670  400041 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 18:41:40.902762  400041 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 18:41:40.902838  400041 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 18:41:40.902848  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> /etc/ssl/certs/3891442.pem
	I1030 18:41:40.902930  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 18:41:40.911988  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 18:41:40.936666  400041 start.go:296] duration metric: took 128.83333ms for postStartSetup
	I1030 18:41:40.936732  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetConfigRaw
	I1030 18:41:40.937356  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetIP
	I1030 18:41:40.939940  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.940379  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:40.940406  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.940740  400041 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/config.json ...
	I1030 18:41:40.940959  400041 start.go:128] duration metric: took 28.426739922s to createHost
	I1030 18:41:40.940996  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHHostname
	I1030 18:41:40.943340  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.943659  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:40.943683  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.943787  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHPort
	I1030 18:41:40.943992  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:40.944157  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:40.944299  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHUsername
	I1030 18:41:40.944469  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:41:40.944647  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1030 18:41:40.944657  400041 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 18:41:41.054995  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730313701.035748365
	
	I1030 18:41:41.055025  400041 fix.go:216] guest clock: 1730313701.035748365
	I1030 18:41:41.055036  400041 fix.go:229] Guest: 2024-10-30 18:41:41.035748365 +0000 UTC Remote: 2024-10-30 18:41:40.940974319 +0000 UTC m=+147.695761890 (delta=94.774046ms)
	I1030 18:41:41.055058  400041 fix.go:200] guest clock delta is within tolerance: 94.774046ms
	I1030 18:41:41.055065  400041 start.go:83] releasing machines lock for "ha-174833-m03", held for 28.541005951s
	I1030 18:41:41.055090  400041 main.go:141] libmachine: (ha-174833-m03) Calling .DriverName
	I1030 18:41:41.055377  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetIP
	I1030 18:41:41.057920  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:41.058257  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:41.058278  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:41.060653  400041 out.go:177] * Found network options:
	I1030 18:41:41.062139  400041 out.go:177]   - NO_PROXY=192.168.39.141,192.168.39.67
	W1030 18:41:41.063472  400041 proxy.go:119] fail to check proxy env: Error ip not in block
	W1030 18:41:41.063496  400041 proxy.go:119] fail to check proxy env: Error ip not in block
	I1030 18:41:41.063508  400041 main.go:141] libmachine: (ha-174833-m03) Calling .DriverName
	I1030 18:41:41.064009  400041 main.go:141] libmachine: (ha-174833-m03) Calling .DriverName
	I1030 18:41:41.064221  400041 main.go:141] libmachine: (ha-174833-m03) Calling .DriverName
	I1030 18:41:41.064313  400041 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 18:41:41.064352  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHHostname
	W1030 18:41:41.064451  400041 proxy.go:119] fail to check proxy env: Error ip not in block
	W1030 18:41:41.064473  400041 proxy.go:119] fail to check proxy env: Error ip not in block
	I1030 18:41:41.064552  400041 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 18:41:41.064575  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHHostname
	I1030 18:41:41.066853  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:41.067199  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:41.067222  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:41.067302  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:41.067479  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHPort
	I1030 18:41:41.067664  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:41.067724  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:41.067749  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:41.067830  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHUsername
	I1030 18:41:41.067906  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHPort
	I1030 18:41:41.067978  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/id_rsa Username:docker}
	I1030 18:41:41.068065  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:41.068181  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHUsername
	I1030 18:41:41.068275  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/id_rsa Username:docker}
	I1030 18:41:41.314636  400041 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 18:41:41.321102  400041 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 18:41:41.321173  400041 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 18:41:41.338442  400041 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 18:41:41.338470  400041 start.go:495] detecting cgroup driver to use...
	I1030 18:41:41.338554  400041 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 18:41:41.355526  400041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 18:41:41.369752  400041 docker.go:217] disabling cri-docker service (if available) ...
	I1030 18:41:41.369824  400041 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 18:41:41.384658  400041 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 18:41:41.399117  400041 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 18:41:41.515988  400041 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 18:41:41.659854  400041 docker.go:233] disabling docker service ...
	I1030 18:41:41.659940  400041 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 18:41:41.675386  400041 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 18:41:41.688521  400041 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 18:41:41.830998  400041 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 18:41:41.962743  400041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 18:41:41.976734  400041 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 18:41:41.998554  400041 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1030 18:41:41.998635  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:41:42.010835  400041 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 18:41:42.010904  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:41:42.022771  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:41:42.033993  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:41:42.044518  400041 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 18:41:42.055581  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:41:42.065838  400041 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:41:42.082685  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:41:42.092911  400041 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 18:41:42.102341  400041 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 18:41:42.102398  400041 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 18:41:42.115321  400041 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 18:41:42.125073  400041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 18:41:42.255762  400041 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1030 18:41:42.348340  400041 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 18:41:42.348402  400041 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 18:41:42.353645  400041 start.go:563] Will wait 60s for crictl version
	I1030 18:41:42.353700  400041 ssh_runner.go:195] Run: which crictl
	I1030 18:41:42.357362  400041 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 18:41:42.403194  400041 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1030 18:41:42.403278  400041 ssh_runner.go:195] Run: crio --version
	I1030 18:41:42.433073  400041 ssh_runner.go:195] Run: crio --version
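	The lines above show the node's CRI-O runtime being tailored before kubelet starts: a handful of sed edits to /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup driver, sysctls), a daemon-reload plus crio restart, and a crictl/crio version check over the CRI socket. As a rough illustrative sketch of that edit-restart-verify loop (not minikube's actual code; run() is a made-up stand-in for ssh_runner):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run is a hypothetical stand-in for minikube's ssh_runner: execute one shell
	// command on the target node and return its error.
	func run(cmd string) error {
		return exec.Command("sh", "-c", cmd).Run()
	}

	func main() {
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		steps := []string{
			// pause image and cgroup driver edits, mirroring the sed commands in the log
			fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' %s`, conf),
			fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' %s`, conf),
			// reload units and restart CRI-O so the edits take effect
			"sudo systemctl daemon-reload",
			"sudo systemctl restart crio",
			// verify the runtime answers before moving on
			"sudo /usr/bin/crictl version",
		}
		for _, s := range steps {
			if err := run(s); err != nil {
				fmt.Printf("step %q failed: %v\n", s, err)
				return
			}
		}
	}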
	I1030 18:41:42.461144  400041 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1030 18:41:42.462700  400041 out.go:177]   - env NO_PROXY=192.168.39.141
	I1030 18:41:42.464361  400041 out.go:177]   - env NO_PROXY=192.168.39.141,192.168.39.67
	I1030 18:41:42.465724  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetIP
	I1030 18:41:42.468442  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:42.468785  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:42.468812  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:42.469009  400041 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1030 18:41:42.473316  400041 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 18:41:42.486401  400041 mustload.go:65] Loading cluster: ha-174833
	I1030 18:41:42.486671  400041 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:41:42.487004  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:41:42.487051  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:41:42.503315  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39623
	I1030 18:41:42.503812  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:41:42.504381  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:41:42.504403  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:41:42.504715  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:41:42.504885  400041 main.go:141] libmachine: (ha-174833) Calling .GetState
	I1030 18:41:42.506310  400041 host.go:66] Checking if "ha-174833" exists ...
	I1030 18:41:42.506684  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:41:42.506729  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:41:42.521795  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36777
	I1030 18:41:42.522246  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:41:42.522834  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:41:42.522857  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:41:42.523225  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:41:42.523429  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:41:42.523593  400041 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833 for IP: 192.168.39.238
	I1030 18:41:42.523605  400041 certs.go:194] generating shared ca certs ...
	I1030 18:41:42.523621  400041 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:41:42.523781  400041 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 18:41:42.523832  400041 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 18:41:42.523846  400041 certs.go:256] generating profile certs ...
	I1030 18:41:42.523984  400041 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.key
	I1030 18:41:42.524022  400041 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.48de31c7
	I1030 18:41:42.524044  400041 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.48de31c7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.141 192.168.39.67 192.168.39.238 192.168.39.254]
	I1030 18:41:42.771082  400041 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.48de31c7 ...
	I1030 18:41:42.771143  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.48de31c7: {Name:mkbb8ab8bf6c18d6d6a31970e3b828800b8fd44f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:41:42.771350  400041 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.48de31c7 ...
	I1030 18:41:42.771369  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.48de31c7: {Name:mk93a1175526096093ebe70ea08ba926787709bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:41:42.771474  400041 certs.go:381] copying /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.48de31c7 -> /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt
	I1030 18:41:42.771640  400041 certs.go:385] copying /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.48de31c7 -> /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key
	I1030 18:41:42.771819  400041 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key
	I1030 18:41:42.771839  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1030 18:41:42.771859  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1030 18:41:42.771878  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1030 18:41:42.771897  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1030 18:41:42.771916  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1030 18:41:42.771935  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1030 18:41:42.771953  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1030 18:41:42.786601  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1030 18:41:42.786716  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem (1338 bytes)
	W1030 18:41:42.786768  400041 certs.go:480] ignoring /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144_empty.pem, impossibly tiny 0 bytes
	I1030 18:41:42.786783  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 18:41:42.786818  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 18:41:42.786855  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 18:41:42.786886  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 18:41:42.786944  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 18:41:42.786987  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> /usr/share/ca-certificates/3891442.pem
	I1030 18:41:42.787011  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:41:42.787031  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem -> /usr/share/ca-certificates/389144.pem
	I1030 18:41:42.787082  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:41:42.790022  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:41:42.790433  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:41:42.790463  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:41:42.790635  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:41:42.790863  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:41:42.791005  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:41:42.791117  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:41:42.862993  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1030 18:41:42.869116  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1030 18:41:42.881084  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1030 18:41:42.885608  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1030 18:41:42.896066  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1030 18:41:42.900395  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1030 18:41:42.911415  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1030 18:41:42.915680  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1030 18:41:42.926002  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1030 18:41:42.929978  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1030 18:41:42.939948  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1030 18:41:42.944073  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1030 18:41:42.954991  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 18:41:42.979919  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 18:41:43.004284  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 18:41:43.027671  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 18:41:43.050807  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1030 18:41:43.073405  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1030 18:41:43.097875  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 18:41:43.121491  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1030 18:41:43.145484  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /usr/share/ca-certificates/3891442.pem (1708 bytes)
	I1030 18:41:43.169567  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 18:41:43.194113  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem --> /usr/share/ca-certificates/389144.pem (1338 bytes)
	I1030 18:41:43.217839  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1030 18:41:43.235214  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1030 18:41:43.251678  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1030 18:41:43.267891  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1030 18:41:43.283793  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1030 18:41:43.301477  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1030 18:41:43.319112  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1030 18:41:43.336222  400041 ssh_runner.go:195] Run: openssl version
	I1030 18:41:43.342021  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 18:41:43.353281  400041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:41:43.357881  400041 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:41:43.357947  400041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:41:43.363573  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 18:41:43.375497  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/389144.pem && ln -fs /usr/share/ca-certificates/389144.pem /etc/ssl/certs/389144.pem"
	I1030 18:41:43.389049  400041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/389144.pem
	I1030 18:41:43.393551  400041 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 18:41:43.393616  400041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/389144.pem
	I1030 18:41:43.399295  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/389144.pem /etc/ssl/certs/51391683.0"
	I1030 18:41:43.411090  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3891442.pem && ln -fs /usr/share/ca-certificates/3891442.pem /etc/ssl/certs/3891442.pem"
	I1030 18:41:43.422010  400041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3891442.pem
	I1030 18:41:43.426629  400041 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 18:41:43.426687  400041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3891442.pem
	I1030 18:41:43.432334  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3891442.pem /etc/ssl/certs/3ec20f2e.0"
	I1030 18:41:43.443256  400041 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 18:41:43.447278  400041 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1030 18:41:43.447336  400041 kubeadm.go:934] updating node {m03 192.168.39.238 8443 v1.31.2 crio true true} ...
	I1030 18:41:43.447423  400041 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-174833-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-174833 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1030 18:41:43.447453  400041 kube-vip.go:115] generating kube-vip config ...
	I1030 18:41:43.447481  400041 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1030 18:41:43.463867  400041 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1030 18:41:43.463938  400041 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
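	The kube-vip config printed above is written to the node as a static pod manifest (the scp of /etc/kubernetes/manifests/kube-vip.yaml appears a few lines below). A minimal sketch of rendering such a manifest from the per-cluster values (VIP, port, interface, image) is shown here; the template is a trimmed-down stand-in, not minikube's real one:

	package main

	import (
		"os"
		"text/template"
	)

	// kubeVipParams carries the values that differ per cluster in the manifest above.
	type kubeVipParams struct {
		VIP       string // virtual IP announced for the control plane, e.g. 192.168.39.254
		Port      string // API server port, e.g. "8443"
		Interface string // interface to bind the VIP to, e.g. eth0
		Image     string // kube-vip image reference
	}

	// manifestTmpl is a simplified stand-in for the full manifest in the log.
	const manifestTmpl = `apiVersion: v1
	kind: Pod
	metadata:
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args: ["manager"]
	    env:
	    - {name: vip_arp, value: "true"}
	    - {name: port, value: "{{.Port}}"}
	    - {name: vip_interface, value: {{.Interface}}}
	    - {name: cp_enable, value: "true"}
	    - {name: lb_enable, value: "true"}
	    - {name: address, value: {{.VIP}}}
	    image: {{.Image}}
	    name: kube-vip
	  hostNetwork: true
	`

	func main() {
		t := template.Must(template.New("kube-vip").Parse(manifestTmpl))
		// Values copied from the log above, for illustration only.
		_ = t.Execute(os.Stdout, kubeVipParams{
			VIP:       "192.168.39.254",
			Port:      "8443",
			Interface: "eth0",
			Image:     "ghcr.io/kube-vip/kube-vip:v0.8.4",
		})
	}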
	I1030 18:41:43.463993  400041 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1030 18:41:43.474999  400041 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1030 18:41:43.475044  400041 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1030 18:41:43.485456  400041 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1030 18:41:43.485479  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1030 18:41:43.485517  400041 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1030 18:41:43.485533  400041 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1030 18:41:43.485545  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1030 18:41:43.485517  400041 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1030 18:41:43.485603  400041 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1030 18:41:43.485621  400041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 18:41:43.504131  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1030 18:41:43.504186  400041 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1030 18:41:43.504223  400041 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1030 18:41:43.504222  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1030 18:41:43.504237  400041 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1030 18:41:43.504267  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1030 18:41:43.522121  400041 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1030 18:41:43.522169  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
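The sequence above is minikube's check-then-copy provisioning of the Kubernetes binaries: probe the remote path with stat, and only when the probe exits non-zero transfer the cached binary over the ssh runner. A minimal illustrative Go sketch of that pattern follows; the host placeholder, the use of plain ssh/scp, and the error handling are assumptions of this sketch, not minikube's own ssh_runner code (which also handles sudo and checksums).

package main

import (
	"fmt"
	"os/exec"
)

// ensureRemoteFile copies localPath to host:remotePath only if the remote
// file is missing, mirroring the stat-then-scp pattern in the log above.
// It assumes passwordless ssh/scp access and write permission on the target.
func ensureRemoteFile(host, localPath, remotePath string) error {
	probe := exec.Command("ssh", host, "stat", "-c", "%s %y", remotePath)
	if probe.Run() == nil {
		return nil // already present, skip the transfer
	}
	if err := exec.Command("scp", localPath, host+":"+remotePath).Run(); err != nil {
		return fmt.Errorf("scp %s: %w", localPath, err)
	}
	return nil
}

func main() {
	// "docker@<node-ip>" is a placeholder for the node being provisioned.
	for _, bin := range []string{"kubectl", "kubeadm", "kubelet"} {
		err := ensureRemoteFile("docker@<node-ip>",
			"/home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/"+bin,
			"/var/lib/minikube/binaries/v1.31.2/"+bin)
		fmt.Println(bin, err)
	}
}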
	I1030 18:41:44.375482  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1030 18:41:44.387138  400041 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1030 18:41:44.405486  400041 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 18:41:44.422728  400041 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1030 18:41:44.439060  400041 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1030 18:41:44.443074  400041 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 18:41:44.455364  400041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 18:41:44.570256  400041 ssh_runner.go:195] Run: sudo systemctl start kubelet
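The hosts-file rewrite at 18:41:44.443 drops any existing line ending in a tab plus control-plane.minikube.internal and appends a fresh mapping to the HA VIP 192.168.39.254, so the entry stays unique across restarts before kubelet is reloaded. The same idempotent update can be sketched in Go as follows (the path and file mode here are illustrative, not minikube's implementation):

package main

import (
	"os"
	"strings"
)

// ensureHostsEntry rewrites hostsPath so it contains exactly one line mapping
// hostname to ip, the same effect as the grep -v / echo / cp pipeline above.
func ensureHostsEntry(hostsPath, ip, hostname string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // drop any stale entry for this hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Writing /etc/hosts directly needs root; a scratch copy works for testing.
	_ = ensureHostsEntry("/tmp/hosts", "192.168.39.254", "control-plane.minikube.internal")
}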
	I1030 18:41:44.588522  400041 host.go:66] Checking if "ha-174833" exists ...
	I1030 18:41:44.589080  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:41:44.589146  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:41:44.605625  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37241
	I1030 18:41:44.606088  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:41:44.606626  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:41:44.606648  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:41:44.607023  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:41:44.607225  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:41:44.607369  400041 start.go:317] joinCluster: &{Name:ha-174833 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-174833 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 18:41:44.607505  400041 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1030 18:41:44.607526  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:41:44.610554  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:41:44.611109  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:41:44.611135  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:41:44.611433  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:41:44.611606  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:41:44.611760  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:41:44.611885  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:41:44.773784  400041 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 18:41:44.773850  400041 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token jbea4g.48iwo9ov7hdxpf8l --discovery-token-ca-cert-hash sha256:b2143b9018fc79be6f35b8981f64b262448bd09f7227405c8dafa29481e81491 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-174833-m03 --control-plane --apiserver-advertise-address=192.168.39.238 --apiserver-bind-port=8443"
	I1030 18:42:06.433926  400041 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token jbea4g.48iwo9ov7hdxpf8l --discovery-token-ca-cert-hash sha256:b2143b9018fc79be6f35b8981f64b262448bd09f7227405c8dafa29481e81491 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-174833-m03 --control-plane --apiserver-advertise-address=192.168.39.238 --apiserver-bind-port=8443": (21.660034767s)
	I1030 18:42:06.433968  400041 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1030 18:42:06.995847  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-174833-m03 minikube.k8s.io/updated_at=2024_10_30T18_42_06_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0 minikube.k8s.io/name=ha-174833 minikube.k8s.io/primary=false
	I1030 18:42:07.135527  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-174833-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1030 18:42:07.266435  400041 start.go:319] duration metric: took 22.659060991s to joinCluster
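The join at 18:41:44.773 follows the standard kubeadm control-plane expansion flow: mint a join command on an existing control-plane node, re-run it on the new machine with --control-plane plus per-node advertise address and bind port, then label the node and remove the control-plane NoSchedule taint so it can also run workloads. A hedged Go sketch of that sequence (it only prints or shells out to the same commands seen in the log, rather than reproducing minikube's internals):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// 1. On an existing control-plane node: create a non-expiring token and
	//    print the matching join command (token + discovery CA cert hash).
	out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
	if err != nil {
		panic(err)
	}
	joinCmd := strings.TrimSpace(string(out))

	// 2. On the new node: run the printed command with the control-plane
	//    flags seen in the log (addresses are per-node values).
	fmt.Println(joinCmd + " --control-plane --apiserver-advertise-address=192.168.39.238 --apiserver-bind-port=8443")

	// 3. Afterwards the node is labelled and its NoSchedule taint removed:
	//    kubectl label --overwrite nodes ha-174833-m03 minikube.k8s.io/primary=false
	//    kubectl taint nodes ha-174833-m03 node-role.kubernetes.io/control-plane:NoSchedule-
}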
	I1030 18:42:07.266542  400041 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 18:42:07.266874  400041 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:42:07.267989  400041 out.go:177] * Verifying Kubernetes components...
	I1030 18:42:07.269832  400041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 18:42:07.538532  400041 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 18:42:07.566640  400041 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 18:42:07.566990  400041 kapi.go:59] client config for ha-174833: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.crt", KeyFile:"/home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.key", CAFile:"/home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439fe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1030 18:42:07.567153  400041 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.141:8443
	I1030 18:42:07.567517  400041 node_ready.go:35] waiting up to 6m0s for node "ha-174833-m03" to be "Ready" ...
	I1030 18:42:07.567636  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:07.567647  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:07.567658  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:07.567663  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:07.571044  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:08.067840  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:08.067866  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:08.067875  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:08.067880  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:08.071548  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:08.568423  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:08.568445  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:08.568456  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:08.568468  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:08.572275  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:09.068213  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:09.068244  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:09.068255  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:09.068261  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:09.072412  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:42:09.568601  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:09.568687  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:09.568704  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:09.568711  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:09.572953  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:42:09.573669  400041 node_ready.go:53] node "ha-174833-m03" has status "Ready":"False"
	I1030 18:42:10.068646  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:10.068674  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:10.068686  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:10.068690  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:10.072592  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:10.568186  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:10.568212  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:10.568228  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:10.568234  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:10.571345  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:11.068394  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:11.068419  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:11.068430  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:11.068435  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:11.071353  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:42:11.568540  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:11.568569  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:11.568581  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:11.568586  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:11.571615  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:12.068128  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:12.068184  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:12.068198  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:12.068204  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:12.072054  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:12.072920  400041 node_ready.go:53] node "ha-174833-m03" has status "Ready":"False"
	I1030 18:42:12.568764  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:12.568788  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:12.568799  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:12.568804  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:12.572509  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:13.067810  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:13.067840  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:13.067852  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:13.067858  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:13.072370  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:42:13.568096  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:13.568118  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:13.568127  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:13.568130  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:13.571713  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:14.068692  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:14.068715  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:14.068724  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:14.068728  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:14.072113  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:14.073045  400041 node_ready.go:53] node "ha-174833-m03" has status "Ready":"False"
	I1030 18:42:14.568414  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:14.568441  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:14.568458  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:14.568463  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:14.571979  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:15.067728  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:15.067752  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:15.067760  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:15.067764  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:15.079108  400041 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1030 18:42:15.568483  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:15.568509  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:15.568518  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:15.568523  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:15.571981  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:16.067933  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:16.067953  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:16.067962  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:16.067965  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:16.071179  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:16.568646  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:16.568671  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:16.568684  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:16.568691  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:16.571923  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:16.572720  400041 node_ready.go:53] node "ha-174833-m03" has status "Ready":"False"
	I1030 18:42:17.068520  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:17.068545  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:17.068561  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:17.068566  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:17.072118  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:17.568073  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:17.568108  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:17.568118  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:17.568123  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:17.571265  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:18.068409  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:18.068434  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:18.068442  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:18.068447  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:18.071717  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:18.568497  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:18.568527  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:18.568540  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:18.568546  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:18.571867  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:19.067827  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:19.067850  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:19.067859  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:19.067863  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:19.070951  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:19.071706  400041 node_ready.go:53] node "ha-174833-m03" has status "Ready":"False"
	I1030 18:42:19.568087  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:19.568110  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:19.568119  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:19.568122  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:19.571495  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:20.068028  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:20.068053  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:20.068064  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:20.068071  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:20.071582  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:20.568136  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:20.568161  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:20.568169  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:20.568174  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:20.571551  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:21.068612  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:21.068640  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:21.068652  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:21.068657  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:21.072026  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:21.072659  400041 node_ready.go:53] node "ha-174833-m03" has status "Ready":"False"
	I1030 18:42:21.568033  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:21.568055  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:21.568064  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:21.568069  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:21.571332  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:22.067937  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:22.067961  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:22.067970  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:22.067976  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:22.071718  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:22.568117  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:22.568139  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:22.568147  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:22.568155  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:22.571493  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:23.068511  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:23.068548  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:23.068558  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:23.068562  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:23.071664  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:23.568675  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:23.568699  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:23.568707  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:23.568711  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:23.571937  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:23.572572  400041 node_ready.go:53] node "ha-174833-m03" has status "Ready":"False"
	I1030 18:42:24.067899  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:24.067922  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:24.067931  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:24.067934  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:24.071366  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:24.568317  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:24.568342  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:24.568351  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:24.568355  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:24.571501  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:25.067773  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:25.067796  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:25.067803  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:25.067806  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:25.071344  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:25.568753  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:25.568775  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:25.568783  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:25.568787  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:25.572126  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:25.572899  400041 node_ready.go:53] node "ha-174833-m03" has status "Ready":"False"
	I1030 18:42:26.068223  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:26.068246  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.068257  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.068262  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.072464  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:42:26.073313  400041 node_ready.go:49] node "ha-174833-m03" has status "Ready":"True"
	I1030 18:42:26.073333  400041 node_ready.go:38] duration metric: took 18.505796326s for node "ha-174833-m03" to be "Ready" ...
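The roughly 500 ms GET loop above is the node-Ready wait: fetch the node object and check whether its NodeReady condition has turned True. An illustrative client-go sketch of the same check follows (the kubeconfig path, poll interval, and timeout are assumptions of this sketch, not minikube's node_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the API server until the named node reports the
// NodeReady condition as True, mirroring the GET loop in the log above.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == v1.NodeReady && c.Status == v1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %s not Ready within %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitNodeReady(cs, "ha-174833-m03", 6*time.Minute))
}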
	I1030 18:42:26.073343  400041 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 18:42:26.073412  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods
	I1030 18:42:26.073421  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.073428  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.073435  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.079519  400041 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1030 18:42:26.085610  400041 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qrkkc" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:26.085695  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-qrkkc
	I1030 18:42:26.085704  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.085711  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.085715  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.088406  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:42:26.089109  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:42:26.089127  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.089137  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.089143  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.091504  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:42:26.092047  400041 pod_ready.go:93] pod "coredns-7c65d6cfc9-qrkkc" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:26.092069  400041 pod_ready.go:82] duration metric: took 6.435195ms for pod "coredns-7c65d6cfc9-qrkkc" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:26.092082  400041 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tnj67" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:26.092150  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-tnj67
	I1030 18:42:26.092160  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.092170  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.092179  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.095058  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:42:26.095704  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:42:26.095720  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.095730  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.095735  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.098085  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:42:26.098596  400041 pod_ready.go:93] pod "coredns-7c65d6cfc9-tnj67" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:26.098614  400041 pod_ready.go:82] duration metric: took 6.524633ms for pod "coredns-7c65d6cfc9-tnj67" in "kube-system" namespace to be "Ready" ...
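Each per-pod wait above follows the same pattern as the node wait, but against the pod's PodReady condition. A compact sketch under the same assumptions (pod name and kubeconfig path are placeholders):

package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True, which is the
// per-pod check performed for each system-critical pod in the loop above.
func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == v1.PodReady {
			return c.Status == v1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(podReady(cs, "kube-system", "coredns-7c65d6cfc9-qrkkc"))
}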
	I1030 18:42:26.098625  400041 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:26.098689  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174833
	I1030 18:42:26.098701  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.098708  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.098714  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.101151  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:42:26.101737  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:42:26.101752  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.101762  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.101769  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.103823  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:42:26.104381  400041 pod_ready.go:93] pod "etcd-ha-174833" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:26.104404  400041 pod_ready.go:82] duration metric: took 5.771643ms for pod "etcd-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:26.104417  400041 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:26.104487  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174833-m02
	I1030 18:42:26.104498  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.104507  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.104515  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.106840  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:42:26.107295  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:42:26.107308  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.107318  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.107325  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.109492  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:42:26.109917  400041 pod_ready.go:93] pod "etcd-ha-174833-m02" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:26.109932  400041 pod_ready.go:82] duration metric: took 5.508285ms for pod "etcd-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:26.109947  400041 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-174833-m03" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:26.268296  400041 request.go:632] Waited for 158.281409ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174833-m03
	I1030 18:42:26.268393  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174833-m03
	I1030 18:42:26.268404  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.268413  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.268419  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.272054  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:26.469115  400041 request.go:632] Waited for 196.339916ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:26.469175  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:26.469180  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.469190  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.469198  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.472781  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:26.473415  400041 pod_ready.go:93] pod "etcd-ha-174833-m03" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:26.473441  400041 pod_ready.go:82] duration metric: took 363.484662ms for pod "etcd-ha-174833-m03" in "kube-system" namespace to be "Ready" ...
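The repeated "Waited ... due to client-side throttling" entries come from client-go's client-side rate limiter: with QPS and Burst left at 0 in the rest.Config logged at 18:42:07.566990, the client falls back to its defaults of 5 requests per second with a burst of 10, so back-to-back pod and node GETs queue briefly. A sketch of raising those limits on a client (the values and kubeconfig path are illustrative, not a change minikube makes here):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	// With QPS/Burst left at zero, client-go applies its defaults (5 QPS,
	// burst 10), which is what produces the throttling waits in the log.
	cfg.QPS = 50
	cfg.Burst = 100
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client for %s configured with QPS=%v Burst=%v\n", cfg.Host, cfg.QPS, cfg.Burst)
	_ = cs
}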
	I1030 18:42:26.473458  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:26.668901  400041 request.go:632] Waited for 195.3359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174833
	I1030 18:42:26.669000  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174833
	I1030 18:42:26.669014  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.669026  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.669034  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.672627  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:26.868738  400041 request.go:632] Waited for 195.360312ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:42:26.868832  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:42:26.868840  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.868851  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.868860  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.872228  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:26.872778  400041 pod_ready.go:93] pod "kube-apiserver-ha-174833" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:26.872812  400041 pod_ready.go:82] duration metric: took 399.338189ms for pod "kube-apiserver-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:26.872828  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:27.068798  400041 request.go:632] Waited for 195.855457ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174833-m02
	I1030 18:42:27.068879  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174833-m02
	I1030 18:42:27.068887  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:27.068898  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:27.068909  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:27.072321  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:27.269235  400041 request.go:632] Waited for 196.216042ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:42:27.269319  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:42:27.269330  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:27.269343  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:27.269353  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:27.272769  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:27.273439  400041 pod_ready.go:93] pod "kube-apiserver-ha-174833-m02" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:27.273459  400041 pod_ready.go:82] duration metric: took 400.623063ms for pod "kube-apiserver-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:27.273469  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-174833-m03" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:27.468256  400041 request.go:632] Waited for 194.693367ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174833-m03
	I1030 18:42:27.468325  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174833-m03
	I1030 18:42:27.468330  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:27.468338  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:27.468347  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:27.471734  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:27.669102  400041 request.go:632] Waited for 196.461533ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:27.669185  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:27.669197  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:27.669208  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:27.669216  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:27.672818  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:27.673832  400041 pod_ready.go:93] pod "kube-apiserver-ha-174833-m03" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:27.673854  400041 pod_ready.go:82] duration metric: took 400.378216ms for pod "kube-apiserver-ha-174833-m03" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:27.673876  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:27.868940  400041 request.go:632] Waited for 194.958773ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174833
	I1030 18:42:27.869030  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174833
	I1030 18:42:27.869042  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:27.869053  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:27.869060  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:27.872180  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:28.068264  400041 request.go:632] Waited for 195.290526ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:42:28.068332  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:42:28.068351  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:28.068362  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:28.068370  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:28.071658  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:28.072242  400041 pod_ready.go:93] pod "kube-controller-manager-ha-174833" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:28.072265  400041 pod_ready.go:82] duration metric: took 398.381976ms for pod "kube-controller-manager-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:28.072276  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:28.268211  400041 request.go:632] Waited for 195.804533ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174833-m02
	I1030 18:42:28.268292  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174833-m02
	I1030 18:42:28.268300  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:28.268311  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:28.268318  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:28.271496  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:28.468870  400041 request.go:632] Waited for 196.361357ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:42:28.468956  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:42:28.468962  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:28.468977  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:28.468987  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:28.472341  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:28.472906  400041 pod_ready.go:93] pod "kube-controller-manager-ha-174833-m02" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:28.472925  400041 pod_ready.go:82] duration metric: took 400.642779ms for pod "kube-controller-manager-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:28.472940  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-174833-m03" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:28.669072  400041 request.go:632] Waited for 196.028852ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174833-m03
	I1030 18:42:28.669156  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174833-m03
	I1030 18:42:28.669168  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:28.669179  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:28.669191  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:28.673097  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:28.868210  400041 request.go:632] Waited for 194.307626ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:28.868287  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:28.868295  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:28.868307  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:28.868338  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:28.871679  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:28.872327  400041 pod_ready.go:93] pod "kube-controller-manager-ha-174833-m03" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:28.872352  400041 pod_ready.go:82] duration metric: took 399.404321ms for pod "kube-controller-manager-ha-174833-m03" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:28.872369  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2qt2n" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:29.068267  400041 request.go:632] Waited for 195.816492ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2qt2n
	I1030 18:42:29.068356  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2qt2n
	I1030 18:42:29.068367  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:29.068376  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:29.068388  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:29.072060  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:29.269102  400041 request.go:632] Waited for 196.354313ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:42:29.269167  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:42:29.269172  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:29.269181  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:29.269186  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:29.273078  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:29.273532  400041 pod_ready.go:93] pod "kube-proxy-2qt2n" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:29.273551  400041 pod_ready.go:82] duration metric: took 401.170636ms for pod "kube-proxy-2qt2n" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:29.273567  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g7l7z" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:29.468616  400041 request.go:632] Waited for 194.925869ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g7l7z
	I1030 18:42:29.468703  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g7l7z
	I1030 18:42:29.468712  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:29.468722  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:29.468730  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:29.472234  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:29.669266  400041 request.go:632] Waited for 196.242195ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:29.669324  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:29.669331  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:29.669341  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:29.669348  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:29.673010  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:29.674076  400041 pod_ready.go:93] pod "kube-proxy-g7l7z" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:29.674097  400041 pod_ready.go:82] duration metric: took 400.523192ms for pod "kube-proxy-g7l7z" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:29.674108  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hg2st" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:29.869286  400041 request.go:632] Waited for 195.064443ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hg2st
	I1030 18:42:29.869374  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hg2st
	I1030 18:42:29.869384  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:29.869393  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:29.869397  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:29.872765  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:30.068849  400041 request.go:632] Waited for 195.380036ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:42:30.068912  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:42:30.068917  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:30.068926  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:30.068930  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:30.073076  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:42:30.073910  400041 pod_ready.go:93] pod "kube-proxy-hg2st" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:30.073931  400041 pod_ready.go:82] duration metric: took 399.816887ms for pod "kube-proxy-hg2st" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:30.073942  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:30.269092  400041 request.go:632] Waited for 195.075688ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174833
	I1030 18:42:30.269158  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174833
	I1030 18:42:30.269163  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:30.269171  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:30.269174  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:30.272728  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:30.468827  400041 request.go:632] Waited for 195.469933ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:42:30.468924  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:42:30.468935  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:30.468944  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:30.468948  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:30.472792  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:30.473256  400041 pod_ready.go:93] pod "kube-scheduler-ha-174833" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:30.473274  400041 pod_ready.go:82] duration metric: took 399.325616ms for pod "kube-scheduler-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:30.473285  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:30.668281  400041 request.go:632] Waited for 194.899722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174833-m02
	I1030 18:42:30.668360  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174833-m02
	I1030 18:42:30.668369  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:30.668378  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:30.668386  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:30.672074  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:30.869270  400041 request.go:632] Waited for 196.355231ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:42:30.869340  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:42:30.869345  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:30.869354  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:30.869361  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:30.873235  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:30.873666  400041 pod_ready.go:93] pod "kube-scheduler-ha-174833-m02" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:30.873686  400041 pod_ready.go:82] duration metric: took 400.39483ms for pod "kube-scheduler-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:30.873697  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-174833-m03" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:31.068802  400041 request.go:632] Waited for 195.002943ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174833-m03
	I1030 18:42:31.068869  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174833-m03
	I1030 18:42:31.068875  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:31.068884  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:31.068901  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:31.072579  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:31.268662  400041 request.go:632] Waited for 195.353177ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:31.268730  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:31.268736  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:31.268743  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:31.268749  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:31.272045  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:31.272702  400041 pod_ready.go:93] pod "kube-scheduler-ha-174833-m03" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:31.272721  400041 pod_ready.go:82] duration metric: took 399.01745ms for pod "kube-scheduler-ha-174833-m03" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:31.272733  400041 pod_ready.go:39] duration metric: took 5.199380679s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
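The pod_ready waits above poll each system pod and check its Ready condition ("has status \"Ready\":\"True\""). A minimal client-go sketch of that check, assuming a placeholder kubeconfig path and the pod name from this run purely as an example:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True,
// mirroring the "has status Ready:True" lines in the log above.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path; minikube keeps its own under the test profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll one pod until Ready or until a 6-minute budget runs out,
	// roughly what the "waiting up to 6m0s for pod ..." lines describe.
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-g7l7z", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod")
}
```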
	I1030 18:42:31.272749  400041 api_server.go:52] waiting for apiserver process to appear ...
	I1030 18:42:31.272802  400041 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 18:42:31.290132  400041 api_server.go:72] duration metric: took 24.023548522s to wait for apiserver process to appear ...
	I1030 18:42:31.290159  400041 api_server.go:88] waiting for apiserver healthz status ...
	I1030 18:42:31.290180  400041 api_server.go:253] Checking apiserver healthz at https://192.168.39.141:8443/healthz ...
	I1030 18:42:31.295173  400041 api_server.go:279] https://192.168.39.141:8443/healthz returned 200:
	ok
	I1030 18:42:31.295236  400041 round_trippers.go:463] GET https://192.168.39.141:8443/version
	I1030 18:42:31.295244  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:31.295252  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:31.295257  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:31.296242  400041 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1030 18:42:31.296313  400041 api_server.go:141] control plane version: v1.31.2
	I1030 18:42:31.296329  400041 api_server.go:131] duration metric: took 6.164986ms to wait for apiserver health ...
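The apiserver health wait above reduces to two plain HTTPS requests against the control-plane endpoint: GET /healthz (expecting 200 with body `ok`) and GET /version (reporting v1.31.2 in this run). A rough equivalent, assuming the cluster CA certificate is available at a placeholder path:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Placeholder CA path; minikube stores the cluster CA under its profile directory.
	caPEM, err := os.ReadFile("/path/to/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
	}

	// Same endpoint the log probes: /healthz should answer 200 with body "ok".
	resp, err := client.Get("https://192.168.39.141:8443/healthz")
	if err != nil {
		panic(err)
	}
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)

	// /version reports the control-plane version.
	resp, err = client.Get("https://192.168.39.141:8443/version")
	if err != nil {
		panic(err)
	}
	body, _ = io.ReadAll(resp.Body)
	resp.Body.Close()
	fmt.Printf("version: %s\n", body)
}
```

Both paths are normally readable without credentials because the default `system:public-info-viewer` binding exposes /healthz and /version to unauthenticated clients; if that binding has been tightened, the request would need a bearer token or client certificate instead.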
	I1030 18:42:31.296336  400041 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 18:42:31.468748  400041 request.go:632] Waited for 172.312716ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods
	I1030 18:42:31.468810  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods
	I1030 18:42:31.468815  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:31.468822  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:31.468826  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:31.475257  400041 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1030 18:42:31.481661  400041 system_pods.go:59] 24 kube-system pods found
	I1030 18:42:31.481688  400041 system_pods.go:61] "coredns-7c65d6cfc9-qrkkc" [3470734c-61ab-4cd9-a026-f07d5ca6a290] Running
	I1030 18:42:31.481693  400041 system_pods.go:61] "coredns-7c65d6cfc9-tnj67" [f869042d-e37b-414d-ab11-422e933e2952] Running
	I1030 18:42:31.481699  400041 system_pods.go:61] "etcd-ha-174833" [af22e3f8-298d-4ab8-ac7d-cf6b915858ce] Running
	I1030 18:42:31.481705  400041 system_pods.go:61] "etcd-ha-174833-m02" [d5fdc5b9-47c3-4308-8b03-e01254a915cd] Running
	I1030 18:42:31.481710  400041 system_pods.go:61] "etcd-ha-174833-m03" [0978241d-3129-4232-a77d-1c363be4759d] Running
	I1030 18:42:31.481715  400041 system_pods.go:61] "kindnet-b76pd" [5267869f-1e5c-414a-9adc-8cdb3a645222] Running
	I1030 18:42:31.481720  400041 system_pods.go:61] "kindnet-pm48g" [7c2c95da-7659-43c4-aae2-3af6fbe9515c] Running
	I1030 18:42:31.481728  400041 system_pods.go:61] "kindnet-rlzbn" [74a207e7-cdd5-4c43-a668-9a6445d0bacf] Running
	I1030 18:42:31.481733  400041 system_pods.go:61] "kube-apiserver-ha-174833" [e1f4e473-d722-4cbc-a8ae-e80612823895] Running
	I1030 18:42:31.481740  400041 system_pods.go:61] "kube-apiserver-ha-174833-m02" [d2f5f227-a891-4678-b8e8-a51051bc80ac] Running
	I1030 18:42:31.481749  400041 system_pods.go:61] "kube-apiserver-ha-174833-m03" [a0d096d1-760f-48eb-b8b4-a8d2bfa98602] Running
	I1030 18:42:31.481754  400041 system_pods.go:61] "kube-controller-manager-ha-174833" [b95baef4-2d97-4714-903f-8a63c8f1f647] Running
	I1030 18:42:31.481762  400041 system_pods.go:61] "kube-controller-manager-ha-174833-m02" [5293b489-8a8c-4475-b549-d10e1a631aa6] Running
	I1030 18:42:31.481768  400041 system_pods.go:61] "kube-controller-manager-ha-174833-m03" [9427a2c0-a305-49d6-8bfd-3829dfc7c9e9] Running
	I1030 18:42:31.481776  400041 system_pods.go:61] "kube-proxy-2qt2n" [fcc90b13-926f-4a7e-aa12-3e274c0777f6] Running
	I1030 18:42:31.481781  400041 system_pods.go:61] "kube-proxy-g7l7z" [7779db09-05ba-46f2-9e0e-09f6bcd57537] Running
	I1030 18:42:31.481789  400041 system_pods.go:61] "kube-proxy-hg2st" [99ac908a-6d13-4471-a54b-bede7bde77b7] Running
	I1030 18:42:31.481794  400041 system_pods.go:61] "kube-scheduler-ha-174833" [d0d4afaa-39ee-4b4d-afdb-3a41effecd4c] Running
	I1030 18:42:31.481802  400041 system_pods.go:61] "kube-scheduler-ha-174833-m02" [f0cd9106-8c0d-4c16-a83f-3d5e84d7d813] Running
	I1030 18:42:31.481807  400041 system_pods.go:61] "kube-scheduler-ha-174833-m03" [9e19320f-40e5-4953-a8f9-21e8251bddbf] Running
	I1030 18:42:31.481814  400041 system_pods.go:61] "kube-vip-ha-174833" [c4903552-8a15-4dc2-ba98-c3d10b8f3288] Running
	I1030 18:42:31.481819  400041 system_pods.go:61] "kube-vip-ha-174833-m02" [9d43181d-55d3-482c-bd30-df3059f68edd] Running
	I1030 18:42:31.481826  400041 system_pods.go:61] "kube-vip-ha-174833-m03" [c016e01f-6a40-419e-acf5-a5e595c117cd] Running
	I1030 18:42:31.481832  400041 system_pods.go:61] "storage-provisioner" [8e6d1d5e-1944-4c77-8018-42bc526984c2] Running
	I1030 18:42:31.481843  400041 system_pods.go:74] duration metric: took 185.498428ms to wait for pod list to return data ...
	I1030 18:42:31.481856  400041 default_sa.go:34] waiting for default service account to be created ...
	I1030 18:42:31.668606  400041 request.go:632] Waited for 186.6491ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/default/serviceaccounts
	I1030 18:42:31.668666  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/default/serviceaccounts
	I1030 18:42:31.668671  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:31.668679  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:31.668682  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:31.672056  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:31.672194  400041 default_sa.go:45] found service account: "default"
	I1030 18:42:31.672209  400041 default_sa.go:55] duration metric: took 190.344386ms for default service account to be created ...
	I1030 18:42:31.672218  400041 system_pods.go:116] waiting for k8s-apps to be running ...
	I1030 18:42:31.868735  400041 request.go:632] Waited for 196.405115ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods
	I1030 18:42:31.868808  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods
	I1030 18:42:31.868814  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:31.868822  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:31.868830  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:31.874347  400041 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1030 18:42:31.881436  400041 system_pods.go:86] 24 kube-system pods found
	I1030 18:42:31.881470  400041 system_pods.go:89] "coredns-7c65d6cfc9-qrkkc" [3470734c-61ab-4cd9-a026-f07d5ca6a290] Running
	I1030 18:42:31.881477  400041 system_pods.go:89] "coredns-7c65d6cfc9-tnj67" [f869042d-e37b-414d-ab11-422e933e2952] Running
	I1030 18:42:31.881483  400041 system_pods.go:89] "etcd-ha-174833" [af22e3f8-298d-4ab8-ac7d-cf6b915858ce] Running
	I1030 18:42:31.881487  400041 system_pods.go:89] "etcd-ha-174833-m02" [d5fdc5b9-47c3-4308-8b03-e01254a915cd] Running
	I1030 18:42:31.881490  400041 system_pods.go:89] "etcd-ha-174833-m03" [0978241d-3129-4232-a77d-1c363be4759d] Running
	I1030 18:42:31.881496  400041 system_pods.go:89] "kindnet-b76pd" [5267869f-1e5c-414a-9adc-8cdb3a645222] Running
	I1030 18:42:31.881501  400041 system_pods.go:89] "kindnet-pm48g" [7c2c95da-7659-43c4-aae2-3af6fbe9515c] Running
	I1030 18:42:31.881507  400041 system_pods.go:89] "kindnet-rlzbn" [74a207e7-cdd5-4c43-a668-9a6445d0bacf] Running
	I1030 18:42:31.881516  400041 system_pods.go:89] "kube-apiserver-ha-174833" [e1f4e473-d722-4cbc-a8ae-e80612823895] Running
	I1030 18:42:31.881521  400041 system_pods.go:89] "kube-apiserver-ha-174833-m02" [d2f5f227-a891-4678-b8e8-a51051bc80ac] Running
	I1030 18:42:31.881529  400041 system_pods.go:89] "kube-apiserver-ha-174833-m03" [a0d096d1-760f-48eb-b8b4-a8d2bfa98602] Running
	I1030 18:42:31.881538  400041 system_pods.go:89] "kube-controller-manager-ha-174833" [b95baef4-2d97-4714-903f-8a63c8f1f647] Running
	I1030 18:42:31.881547  400041 system_pods.go:89] "kube-controller-manager-ha-174833-m02" [5293b489-8a8c-4475-b549-d10e1a631aa6] Running
	I1030 18:42:31.881551  400041 system_pods.go:89] "kube-controller-manager-ha-174833-m03" [9427a2c0-a305-49d6-8bfd-3829dfc7c9e9] Running
	I1030 18:42:31.881555  400041 system_pods.go:89] "kube-proxy-2qt2n" [fcc90b13-926f-4a7e-aa12-3e274c0777f6] Running
	I1030 18:42:31.881559  400041 system_pods.go:89] "kube-proxy-g7l7z" [7779db09-05ba-46f2-9e0e-09f6bcd57537] Running
	I1030 18:42:31.881563  400041 system_pods.go:89] "kube-proxy-hg2st" [99ac908a-6d13-4471-a54b-bede7bde77b7] Running
	I1030 18:42:31.881568  400041 system_pods.go:89] "kube-scheduler-ha-174833" [d0d4afaa-39ee-4b4d-afdb-3a41effecd4c] Running
	I1030 18:42:31.881574  400041 system_pods.go:89] "kube-scheduler-ha-174833-m02" [f0cd9106-8c0d-4c16-a83f-3d5e84d7d813] Running
	I1030 18:42:31.881580  400041 system_pods.go:89] "kube-scheduler-ha-174833-m03" [9e19320f-40e5-4953-a8f9-21e8251bddbf] Running
	I1030 18:42:31.881585  400041 system_pods.go:89] "kube-vip-ha-174833" [c4903552-8a15-4dc2-ba98-c3d10b8f3288] Running
	I1030 18:42:31.881589  400041 system_pods.go:89] "kube-vip-ha-174833-m02" [9d43181d-55d3-482c-bd30-df3059f68edd] Running
	I1030 18:42:31.881595  400041 system_pods.go:89] "kube-vip-ha-174833-m03" [c016e01f-6a40-419e-acf5-a5e595c117cd] Running
	I1030 18:42:31.881600  400041 system_pods.go:89] "storage-provisioner" [8e6d1d5e-1944-4c77-8018-42bc526984c2] Running
	I1030 18:42:31.881612  400041 system_pods.go:126] duration metric: took 209.387873ms to wait for k8s-apps to be running ...
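The k8s-apps check above is a single LIST of kube-system pods followed by a per-pod phase check, which is why the same 24 pods are printed twice in this log. A compact sketch of that shape, again with a placeholder kubeconfig path:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// One LIST of kube-system pods, then a phase check per pod: the same shape
	// as the "24 kube-system pods found ... Running" block above.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	allRunning := true
	for _, p := range pods.Items {
		fmt.Printf("%q %s\n", p.Name, p.Status.Phase)
		if p.Status.Phase != corev1.PodRunning && p.Status.Phase != corev1.PodSucceeded {
			allRunning = false
		}
	}
	fmt.Println("all k8s-apps running:", allRunning)
}
```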
	I1030 18:42:31.881626  400041 system_svc.go:44] waiting for kubelet service to be running ....
	I1030 18:42:31.881679  400041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 18:42:31.897108  400041 system_svc.go:56] duration metric: took 15.46981ms WaitForService to wait for kubelet
	I1030 18:42:31.897150  400041 kubeadm.go:582] duration metric: took 24.630565695s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
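The kubelet check just above is an exit-code test: `systemctl is-active --quiet` returns 0 only when the unit is active. minikube runs it over SSH inside the VM; a local stand-in looks like this (shown as a sketch, not minikube's actual ssh_runner code):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet kubelet` prints nothing and exits 0 when active,
	// non-zero otherwise; the wait above keys off that exit status.
	cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
```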
	I1030 18:42:31.897179  400041 node_conditions.go:102] verifying NodePressure condition ...
	I1030 18:42:32.068632  400041 request.go:632] Waited for 171.354733ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes
	I1030 18:42:32.068703  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes
	I1030 18:42:32.068708  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:32.068716  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:32.068721  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:32.073422  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:42:32.074348  400041 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 18:42:32.074387  400041 node_conditions.go:123] node cpu capacity is 2
	I1030 18:42:32.074400  400041 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 18:42:32.074404  400041 node_conditions.go:123] node cpu capacity is 2
	I1030 18:42:32.074408  400041 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 18:42:32.074412  400041 node_conditions.go:123] node cpu capacity is 2
	I1030 18:42:32.074421  400041 node_conditions.go:105] duration metric: took 177.235852ms to run NodePressure ...
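The NodePressure step above lists the nodes once and reads two capacities per node (ephemeral storage 17734596Ki and 2 CPUs for each of the three nodes in this run). A small sketch of reading those fields with client-go, assuming the same placeholder kubeconfig path as before:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// The log prints exactly these two capacities per node.
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}
```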
	I1030 18:42:32.074439  400041 start.go:241] waiting for startup goroutines ...
	I1030 18:42:32.074466  400041 start.go:255] writing updated cluster config ...
	I1030 18:42:32.074805  400041 ssh_runner.go:195] Run: rm -f paused
	I1030 18:42:32.127386  400041 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1030 18:42:32.129289  400041 out.go:177] * Done! kubectl is now configured to use "ha-174833" cluster and "default" namespace by default
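Throughout the wait phase above, the repeated "Waited for ...ms due to client-side throttling, not priority and fairness" messages come from client-go's own token-bucket rate limiter, not from the API server. The limiter is configured through the QPS and Burst fields on rest.Config; a minimal sketch (the values below are illustrative, not what minikube uses):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}

	// client-go throttles requests client-side with a token bucket; when the
	// bucket is empty it logs the "Waited for ... due to client-side throttling"
	// messages seen above. Raising QPS/Burst shortens or removes those waits.
	cfg.QPS = 50
	cfg.Burst = 100

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("kube-system pods:", len(pods.Items))
}
```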
	
	
	==> CRI-O <==
	Oct 30 18:46:16 ha-174833 crio[664]: time="2024-10-30 18:46:16.807899858Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313976807877211,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=61a2a6d6-2a43-4829-9009-9d6c68f41fa4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 18:46:16 ha-174833 crio[664]: time="2024-10-30 18:46:16.808553479Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=053d3d24-706a-446a-8820-310845cb0664 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:46:16 ha-174833 crio[664]: time="2024-10-30 18:46:16.808608487Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=053d3d24-706a-446a-8820-310845cb0664 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:46:16 ha-174833 crio[664]: time="2024-10-30 18:46:16.808827181Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b50f8293a0eac5d428cb2cdd59140c816100bc3a83890343a8ecc1a0e0699009,PodSandboxId:4b32508187feda9fb89e9db472edcc9269380d89c7fabdce7ea42c6bd040cd72,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730313615217360825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tnj67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f869042d-e37b-414d-ab11-422e933e2952,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6694cd6bc9e3c41a78c9a5de80c4b95e2545fc061eac062221b2fe3ef4aadb3,PodSandboxId:e4daca50f6e1cba7757e2725bf95ff4bf83227ca7fa99966ead6cd77711dcace,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730313615138289233,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 8e6d1d5e-1944-4c77-8018-42bc526984c2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80919506252b4e3df3db12ac62352ffcfc65a784996bbef7adcefc782480a86f,PodSandboxId:80f0d2bac7bdbf4e7081698c44c0100f1844741b0105b2c4afc4373e50786bd2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730313615087765259,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qrkkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3470734c-6
1ab-4cd9-a026-f07d5ca6a290,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46301d1401a148dee81e2e167582dcebc8e9ce533e3150308e7eb51799e4f1ef,PodSandboxId:4a4a82673e78f30d1d9716c24cfa484d7ad4d27f53c2f74c27f2b30910b16f1f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:C
ONTAINER_RUNNING,CreatedAt:1730313603325015951,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pm48g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c2c95da-7659-43c4-aae2-3af6fbe9515c,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:634060e657ba25894ea35d4e8b8429b57de139396a52f53af88876e683de5740,PodSandboxId:5d414abeb9a8ee9776eede410c7e6863d30e3e79eead9ef5eaf48e3f823ff1eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:173031359
7433571078,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2qt2n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc90b13-926f-4a7e-aa12-3e274c0777f6,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8b9126272c4605fcadd37917b2874bb833efc9c16584c83ac4d1307f59c62a,PodSandboxId:635aa65f78ff8602bbbd26a8ca9fa00e6475baebf133de26e2e88e93e579621b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:17303135896
66156759,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aea5be4e46ad37a31e0d88b2c9c0158c,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f0fb508f1f86d4bc039adab40689367f34f5141a90043b092e43663b1f959a6,PodSandboxId:2a80897d4d6989c628408425633aea48eaf1f123b026cb7a29325fbfb4eb2bba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730313585751314744,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca112c63e30eb433bb700ecb5b5894a5,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:381be95e92ca6d096c838a11caffeef70b5bdf29633660086db3a591af3efa3c,PodSandboxId:aa574b692710d467b2e8775397cfa8db07704e00fb33c11512bf39c82e680973,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730313585689189484,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 229cc2e30ddb1f711193989fd13f4f67,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db863ebdc17e0d81937b5738a5fe433285fd91846fa54c761cf0cd4cd4987b73,PodSandboxId:bc13396acc704797f09483572e0822fe63133214ccd6fcb11477cb808dbc282f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730313585724076197,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72bc339147b21477e295945da9cb0b0e,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:661ed7108dbf576219f0d0850f1ff6e5884d78718a480882365b8f283ab57acb,PodSandboxId:a4e686c5a4e0593c382829bac8d2451984a6e41381d31fb1620855986434050a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730313585675039957,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66c6d80d0e594b993bec02307d522c1a,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=053d3d24-706a-446a-8820-310845cb0664 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:46:16 ha-174833 crio[664]: time="2024-10-30 18:46:16.852560484Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=63740b57-4df3-4cc7-95fb-313d88ee5801 name=/runtime.v1.RuntimeService/Version
	Oct 30 18:46:16 ha-174833 crio[664]: time="2024-10-30 18:46:16.852657212Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=63740b57-4df3-4cc7-95fb-313d88ee5801 name=/runtime.v1.RuntimeService/Version
	Oct 30 18:46:16 ha-174833 crio[664]: time="2024-10-30 18:46:16.854412301Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2ed0e991-385f-4762-a027-1d6d1e3618a6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 18:46:16 ha-174833 crio[664]: time="2024-10-30 18:46:16.854837456Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313976854811123,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2ed0e991-385f-4762-a027-1d6d1e3618a6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 18:46:16 ha-174833 crio[664]: time="2024-10-30 18:46:16.855436394Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=be26a263-cfde-4107-ada4-a657bed2357b name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:46:16 ha-174833 crio[664]: time="2024-10-30 18:46:16.855502120Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=be26a263-cfde-4107-ada4-a657bed2357b name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:46:16 ha-174833 crio[664]: time="2024-10-30 18:46:16.855703964Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b50f8293a0eac5d428cb2cdd59140c816100bc3a83890343a8ecc1a0e0699009,PodSandboxId:4b32508187feda9fb89e9db472edcc9269380d89c7fabdce7ea42c6bd040cd72,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730313615217360825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tnj67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f869042d-e37b-414d-ab11-422e933e2952,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6694cd6bc9e3c41a78c9a5de80c4b95e2545fc061eac062221b2fe3ef4aadb3,PodSandboxId:e4daca50f6e1cba7757e2725bf95ff4bf83227ca7fa99966ead6cd77711dcace,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730313615138289233,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 8e6d1d5e-1944-4c77-8018-42bc526984c2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80919506252b4e3df3db12ac62352ffcfc65a784996bbef7adcefc782480a86f,PodSandboxId:80f0d2bac7bdbf4e7081698c44c0100f1844741b0105b2c4afc4373e50786bd2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730313615087765259,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qrkkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3470734c-6
1ab-4cd9-a026-f07d5ca6a290,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46301d1401a148dee81e2e167582dcebc8e9ce533e3150308e7eb51799e4f1ef,PodSandboxId:4a4a82673e78f30d1d9716c24cfa484d7ad4d27f53c2f74c27f2b30910b16f1f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:C
ONTAINER_RUNNING,CreatedAt:1730313603325015951,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pm48g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c2c95da-7659-43c4-aae2-3af6fbe9515c,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:634060e657ba25894ea35d4e8b8429b57de139396a52f53af88876e683de5740,PodSandboxId:5d414abeb9a8ee9776eede410c7e6863d30e3e79eead9ef5eaf48e3f823ff1eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:173031359
7433571078,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2qt2n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc90b13-926f-4a7e-aa12-3e274c0777f6,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8b9126272c4605fcadd37917b2874bb833efc9c16584c83ac4d1307f59c62a,PodSandboxId:635aa65f78ff8602bbbd26a8ca9fa00e6475baebf133de26e2e88e93e579621b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:17303135896
66156759,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aea5be4e46ad37a31e0d88b2c9c0158c,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f0fb508f1f86d4bc039adab40689367f34f5141a90043b092e43663b1f959a6,PodSandboxId:2a80897d4d6989c628408425633aea48eaf1f123b026cb7a29325fbfb4eb2bba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730313585751314744,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca112c63e30eb433bb700ecb5b5894a5,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:381be95e92ca6d096c838a11caffeef70b5bdf29633660086db3a591af3efa3c,PodSandboxId:aa574b692710d467b2e8775397cfa8db07704e00fb33c11512bf39c82e680973,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730313585689189484,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 229cc2e30ddb1f711193989fd13f4f67,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db863ebdc17e0d81937b5738a5fe433285fd91846fa54c761cf0cd4cd4987b73,PodSandboxId:bc13396acc704797f09483572e0822fe63133214ccd6fcb11477cb808dbc282f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730313585724076197,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72bc339147b21477e295945da9cb0b0e,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:661ed7108dbf576219f0d0850f1ff6e5884d78718a480882365b8f283ab57acb,PodSandboxId:a4e686c5a4e0593c382829bac8d2451984a6e41381d31fb1620855986434050a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730313585675039957,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66c6d80d0e594b993bec02307d522c1a,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=be26a263-cfde-4107-ada4-a657bed2357b name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:46:16 ha-174833 crio[664]: time="2024-10-30 18:46:16.898588408Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2a261f18-f084-412a-8bb9-e5ddd2727b22 name=/runtime.v1.RuntimeService/Version
	Oct 30 18:46:16 ha-174833 crio[664]: time="2024-10-30 18:46:16.898660325Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2a261f18-f084-412a-8bb9-e5ddd2727b22 name=/runtime.v1.RuntimeService/Version
	Oct 30 18:46:16 ha-174833 crio[664]: time="2024-10-30 18:46:16.900065030Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6567d3f2-6ce2-414b-8ccf-98cbd50504d9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 18:46:16 ha-174833 crio[664]: time="2024-10-30 18:46:16.900729283Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313976900704597,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6567d3f2-6ce2-414b-8ccf-98cbd50504d9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 18:46:16 ha-174833 crio[664]: time="2024-10-30 18:46:16.901356648Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=738aa772-1fa9-4e57-8b1b-33779b58273e name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:46:16 ha-174833 crio[664]: time="2024-10-30 18:46:16.901434930Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=738aa772-1fa9-4e57-8b1b-33779b58273e name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:46:16 ha-174833 crio[664]: time="2024-10-30 18:46:16.901746456Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b50f8293a0eac5d428cb2cdd59140c816100bc3a83890343a8ecc1a0e0699009,PodSandboxId:4b32508187feda9fb89e9db472edcc9269380d89c7fabdce7ea42c6bd040cd72,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730313615217360825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tnj67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f869042d-e37b-414d-ab11-422e933e2952,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6694cd6bc9e3c41a78c9a5de80c4b95e2545fc061eac062221b2fe3ef4aadb3,PodSandboxId:e4daca50f6e1cba7757e2725bf95ff4bf83227ca7fa99966ead6cd77711dcace,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730313615138289233,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 8e6d1d5e-1944-4c77-8018-42bc526984c2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80919506252b4e3df3db12ac62352ffcfc65a784996bbef7adcefc782480a86f,PodSandboxId:80f0d2bac7bdbf4e7081698c44c0100f1844741b0105b2c4afc4373e50786bd2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730313615087765259,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qrkkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3470734c-6
1ab-4cd9-a026-f07d5ca6a290,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46301d1401a148dee81e2e167582dcebc8e9ce533e3150308e7eb51799e4f1ef,PodSandboxId:4a4a82673e78f30d1d9716c24cfa484d7ad4d27f53c2f74c27f2b30910b16f1f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:C
ONTAINER_RUNNING,CreatedAt:1730313603325015951,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pm48g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c2c95da-7659-43c4-aae2-3af6fbe9515c,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:634060e657ba25894ea35d4e8b8429b57de139396a52f53af88876e683de5740,PodSandboxId:5d414abeb9a8ee9776eede410c7e6863d30e3e79eead9ef5eaf48e3f823ff1eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:173031359
7433571078,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2qt2n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc90b13-926f-4a7e-aa12-3e274c0777f6,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8b9126272c4605fcadd37917b2874bb833efc9c16584c83ac4d1307f59c62a,PodSandboxId:635aa65f78ff8602bbbd26a8ca9fa00e6475baebf133de26e2e88e93e579621b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:17303135896
66156759,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aea5be4e46ad37a31e0d88b2c9c0158c,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f0fb508f1f86d4bc039adab40689367f34f5141a90043b092e43663b1f959a6,PodSandboxId:2a80897d4d6989c628408425633aea48eaf1f123b026cb7a29325fbfb4eb2bba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730313585751314744,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca112c63e30eb433bb700ecb5b5894a5,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:381be95e92ca6d096c838a11caffeef70b5bdf29633660086db3a591af3efa3c,PodSandboxId:aa574b692710d467b2e8775397cfa8db07704e00fb33c11512bf39c82e680973,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730313585689189484,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 229cc2e30ddb1f711193989fd13f4f67,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db863ebdc17e0d81937b5738a5fe433285fd91846fa54c761cf0cd4cd4987b73,PodSandboxId:bc13396acc704797f09483572e0822fe63133214ccd6fcb11477cb808dbc282f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730313585724076197,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72bc339147b21477e295945da9cb0b0e,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:661ed7108dbf576219f0d0850f1ff6e5884d78718a480882365b8f283ab57acb,PodSandboxId:a4e686c5a4e0593c382829bac8d2451984a6e41381d31fb1620855986434050a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730313585675039957,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66c6d80d0e594b993bec02307d522c1a,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=738aa772-1fa9-4e57-8b1b-33779b58273e name=/runtime.v1.RuntimeService/ListContainers
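The CRI-O debug entries in this section are the server side of CRI calls (Version, ImageFsInfo, ListContainers) issued by the kubelet and by tools such as crictl over the runtime's unix socket. A rough client sketch against the CRI v1 gRPC API, assuming the default CRI-O socket path (this is an illustration, not the code that produced these requests):

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Default CRI-O socket path; adjust if the runtime is configured differently.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimev1.NewRuntimeServiceClient(conn)
	img := runtimev1.NewImageServiceClient(conn)

	// The same three calls the log shows CRI-O answering.
	ver, err := rt.Version(context.TODO(), &runtimev1.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Println("runtime:", ver.RuntimeName, ver.RuntimeVersion)

	fs, err := img.ImageFsInfo(context.TODO(), &runtimev1.ImageFsInfoRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Println("image filesystems:", len(fs.ImageFilesystems))

	// An empty filter returns every container, which is why the log notes
	// "No filters were applied, returning full container list".
	resp, err := rt.ListContainers(context.TODO(), &runtimev1.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Println(c.Metadata.Name, c.State)
	}
}
```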
	Oct 30 18:46:16 ha-174833 crio[664]: time="2024-10-30 18:46:16.939587641Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dadec65b-5fc7-467a-8953-dc2f5312982c name=/runtime.v1.RuntimeService/Version
	Oct 30 18:46:16 ha-174833 crio[664]: time="2024-10-30 18:46:16.939679905Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dadec65b-5fc7-467a-8953-dc2f5312982c name=/runtime.v1.RuntimeService/Version
	Oct 30 18:46:16 ha-174833 crio[664]: time="2024-10-30 18:46:16.940907393Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1515274b-9af3-4ad8-8cc9-a59026a49cad name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 18:46:16 ha-174833 crio[664]: time="2024-10-30 18:46:16.941606236Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313976941579488,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1515274b-9af3-4ad8-8cc9-a59026a49cad name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 18:46:16 ha-174833 crio[664]: time="2024-10-30 18:46:16.942266149Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=939206c9-5b57-42ca-8812-3a7e8de53417 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:46:16 ha-174833 crio[664]: time="2024-10-30 18:46:16.942325322Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=939206c9-5b57-42ca-8812-3a7e8de53417 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:46:16 ha-174833 crio[664]: time="2024-10-30 18:46:16.942527267Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b50f8293a0eac5d428cb2cdd59140c816100bc3a83890343a8ecc1a0e0699009,PodSandboxId:4b32508187feda9fb89e9db472edcc9269380d89c7fabdce7ea42c6bd040cd72,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730313615217360825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tnj67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f869042d-e37b-414d-ab11-422e933e2952,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6694cd6bc9e3c41a78c9a5de80c4b95e2545fc061eac062221b2fe3ef4aadb3,PodSandboxId:e4daca50f6e1cba7757e2725bf95ff4bf83227ca7fa99966ead6cd77711dcace,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730313615138289233,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 8e6d1d5e-1944-4c77-8018-42bc526984c2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80919506252b4e3df3db12ac62352ffcfc65a784996bbef7adcefc782480a86f,PodSandboxId:80f0d2bac7bdbf4e7081698c44c0100f1844741b0105b2c4afc4373e50786bd2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730313615087765259,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qrkkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3470734c-6
1ab-4cd9-a026-f07d5ca6a290,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46301d1401a148dee81e2e167582dcebc8e9ce533e3150308e7eb51799e4f1ef,PodSandboxId:4a4a82673e78f30d1d9716c24cfa484d7ad4d27f53c2f74c27f2b30910b16f1f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:C
ONTAINER_RUNNING,CreatedAt:1730313603325015951,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pm48g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c2c95da-7659-43c4-aae2-3af6fbe9515c,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:634060e657ba25894ea35d4e8b8429b57de139396a52f53af88876e683de5740,PodSandboxId:5d414abeb9a8ee9776eede410c7e6863d30e3e79eead9ef5eaf48e3f823ff1eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:173031359
7433571078,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2qt2n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc90b13-926f-4a7e-aa12-3e274c0777f6,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8b9126272c4605fcadd37917b2874bb833efc9c16584c83ac4d1307f59c62a,PodSandboxId:635aa65f78ff8602bbbd26a8ca9fa00e6475baebf133de26e2e88e93e579621b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:17303135896
66156759,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aea5be4e46ad37a31e0d88b2c9c0158c,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f0fb508f1f86d4bc039adab40689367f34f5141a90043b092e43663b1f959a6,PodSandboxId:2a80897d4d6989c628408425633aea48eaf1f123b026cb7a29325fbfb4eb2bba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730313585751314744,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca112c63e30eb433bb700ecb5b5894a5,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:381be95e92ca6d096c838a11caffeef70b5bdf29633660086db3a591af3efa3c,PodSandboxId:aa574b692710d467b2e8775397cfa8db07704e00fb33c11512bf39c82e680973,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730313585689189484,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 229cc2e30ddb1f711193989fd13f4f67,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db863ebdc17e0d81937b5738a5fe433285fd91846fa54c761cf0cd4cd4987b73,PodSandboxId:bc13396acc704797f09483572e0822fe63133214ccd6fcb11477cb808dbc282f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730313585724076197,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72bc339147b21477e295945da9cb0b0e,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:661ed7108dbf576219f0d0850f1ff6e5884d78718a480882365b8f283ab57acb,PodSandboxId:a4e686c5a4e0593c382829bac8d2451984a6e41381d31fb1620855986434050a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730313585675039957,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66c6d80d0e594b993bec02307d522c1a,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=939206c9-5b57-42ca-8812-3a7e8de53417 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b50f8293a0eac       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                     6 minutes ago       Running             coredns                   0                   4b32508187fed       coredns-7c65d6cfc9-tnj67
	b6694cd6bc9e3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                     6 minutes ago       Running             storage-provisioner       0                   e4daca50f6e1c       storage-provisioner
	80919506252b4       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                     6 minutes ago       Running             coredns                   0                   80f0d2bac7bdb       coredns-7c65d6cfc9-qrkkc
	46301d1401a14       docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16   6 minutes ago       Running             kindnet-cni               0                   4a4a82673e78f       kindnet-pm48g
	634060e657ba2       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                     6 minutes ago       Running             kube-proxy                0                   5d414abeb9a8e       kube-proxy-2qt2n
	da8b9126272c4       ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215    6 minutes ago       Running             kube-vip                  0                   635aa65f78ff8       kube-vip-ha-174833
	6f0fb508f1f86       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                     6 minutes ago       Running             kube-scheduler            0                   2a80897d4d698       kube-scheduler-ha-174833
	db863ebdc17e0       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                     6 minutes ago       Running             kube-controller-manager   0                   bc13396acc704       kube-controller-manager-ha-174833
	381be95e92ca6       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                     6 minutes ago       Running             etcd                      0                   aa574b692710d       etcd-ha-174833
	661ed7108dbf5       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                     6 minutes ago       Running             kube-apiserver            0                   a4e686c5a4e05       kube-apiserver-ha-174833
	
	
	==> coredns [80919506252b4e3df3db12ac62352ffcfc65a784996bbef7adcefc782480a86f] <==
	[INFO] 10.244.2.2:49872 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000260615s
	[INFO] 10.244.2.2:45985 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000215389s
	[INFO] 10.244.1.3:58699 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184263s
	[INFO] 10.244.1.3:36745 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000223993s
	[INFO] 10.244.1.3:52696 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000197445s
	[INFO] 10.244.1.3:51136 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.008496656s
	[INFO] 10.244.1.3:37326 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000170193s
	[INFO] 10.244.2.2:41356 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001504514s
	[INFO] 10.244.2.2:58448 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000121598s
	[INFO] 10.244.2.2:57683 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115706s
	[INFO] 10.244.1.2:44356 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001773314s
	[INFO] 10.244.1.2:53338 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000092182s
	[INFO] 10.244.1.2:36505 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000123936s
	[INFO] 10.244.1.2:50770 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000129391s
	[INFO] 10.244.1.3:45376 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119608s
	[INFO] 10.244.1.3:38056 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104793s
	[INFO] 10.244.2.2:56050 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.001014289s
	[INFO] 10.244.2.2:46354 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000094957s
	[INFO] 10.244.1.2:43247 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140652s
	[INFO] 10.244.1.3:59260 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000286102s
	[INFO] 10.244.1.3:42613 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000177355s
	[INFO] 10.244.2.2:38778 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139553s
	[INFO] 10.244.2.2:55445 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000162449s
	[INFO] 10.244.1.2:49123 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000103971s
	[INFO] 10.244.1.2:36025 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000103655s
	
	
	==> coredns [b50f8293a0eac5d428cb2cdd59140c816100bc3a83890343a8ecc1a0e0699009] <==
	[INFO] 10.244.1.3:35936 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.006730126s
	[INFO] 10.244.1.3:52049 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000164529s
	[INFO] 10.244.1.3:41429 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000145894s
	[INFO] 10.244.2.2:38865 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015631s
	[INFO] 10.244.2.2:35468 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001359248s
	[INFO] 10.244.2.2:39539 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154504s
	[INFO] 10.244.2.2:40996 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00012336s
	[INFO] 10.244.2.2:36394 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103847s
	[INFO] 10.244.1.2:36748 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157155s
	[INFO] 10.244.1.2:57168 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000183772s
	[INFO] 10.244.1.2:44765 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001208743s
	[INFO] 10.244.1.2:51648 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000094986s
	[INFO] 10.244.1.3:35468 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117052s
	[INFO] 10.244.1.3:41666 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093918s
	[INFO] 10.244.2.2:40566 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000179128s
	[INFO] 10.244.2.2:35306 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086624s
	[INFO] 10.244.1.2:54037 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000136664s
	[INFO] 10.244.1.2:39370 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000109182s
	[INFO] 10.244.1.2:41814 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123818s
	[INFO] 10.244.1.3:44728 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000170139s
	[INFO] 10.244.1.3:56805 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000142203s
	[INFO] 10.244.2.2:36863 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000187523s
	[INFO] 10.244.2.2:41661 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000120093s
	[INFO] 10.244.1.2:52634 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137066s
	[INFO] 10.244.1.2:35418 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000120994s
	
	
	==> describe nodes <==
	Name:               ha-174833
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-174833
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0
	                    minikube.k8s.io/name=ha-174833
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_30T18_39_53_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Oct 2024 18:39:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-174833
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Oct 2024 18:46:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Oct 2024 18:45:28 +0000   Wed, 30 Oct 2024 18:39:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Oct 2024 18:45:28 +0000   Wed, 30 Oct 2024 18:39:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Oct 2024 18:45:28 +0000   Wed, 30 Oct 2024 18:39:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Oct 2024 18:45:28 +0000   Wed, 30 Oct 2024 18:40:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.141
	  Hostname:    ha-174833
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7ccc5c9f42c54438b6652723644bbeef
	  System UUID:                7ccc5c9f-42c5-4438-b665-2723644bbeef
	  Boot ID:                    83dbe7e6-9d54-44c7-aa42-e17dc8d9a1a0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-qrkkc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m21s
	  kube-system                 coredns-7c65d6cfc9-tnj67             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m21s
	  kube-system                 etcd-ha-174833                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m25s
	  kube-system                 kindnet-pm48g                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m21s
	  kube-system                 kube-apiserver-ha-174833             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 kube-controller-manager-ha-174833    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 kube-proxy-2qt2n                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-scheduler-ha-174833             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 kube-vip-ha-174833                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m19s                  kube-proxy       
	  Normal  NodeHasSufficientPID     6m32s (x7 over 6m32s)  kubelet          Node ha-174833 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m32s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m32s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m32s (x8 over 6m32s)  kubelet          Node ha-174833 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m32s (x8 over 6m32s)  kubelet          Node ha-174833 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m26s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m25s                  kubelet          Node ha-174833 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m25s                  kubelet          Node ha-174833 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m25s                  kubelet          Node ha-174833 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m22s                  node-controller  Node ha-174833 event: Registered Node ha-174833 in Controller
	  Normal  NodeReady                6m3s                   kubelet          Node ha-174833 status is now: NodeReady
	  Normal  RegisteredNode           5m25s                  node-controller  Node ha-174833 event: Registered Node ha-174833 in Controller
	  Normal  RegisteredNode           4m5s                   node-controller  Node ha-174833 event: Registered Node ha-174833 in Controller
	
	
	Name:               ha-174833-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-174833-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0
	                    minikube.k8s.io/name=ha-174833
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_30T18_40_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Oct 2024 18:40:44 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-174833-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Oct 2024 18:43:48 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 30 Oct 2024 18:42:45 +0000   Wed, 30 Oct 2024 18:44:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 30 Oct 2024 18:42:45 +0000   Wed, 30 Oct 2024 18:44:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 30 Oct 2024 18:42:45 +0000   Wed, 30 Oct 2024 18:44:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 30 Oct 2024 18:42:45 +0000   Wed, 30 Oct 2024 18:44:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.67
	  Hostname:    ha-174833-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 44df5dbbd2d444bb8a426278602ee677
	  System UUID:                44df5dbb-d2d4-44bb-8a42-6278602ee677
	  Boot ID:                    360af464-681d-4348-b7f8-dd08e7d88924
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-mm586                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  default                     busybox-7dff88458-v6kn9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 etcd-ha-174833-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m31s
	  kube-system                 kindnet-rlzbn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m33s
	  kube-system                 kube-apiserver-ha-174833-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-controller-manager-ha-174833-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 kube-proxy-hg2st                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m33s
	  kube-system                 kube-scheduler-ha-174833-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-vip-ha-174833-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m29s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m33s (x8 over 5m33s)  kubelet          Node ha-174833-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m33s (x8 over 5m33s)  kubelet          Node ha-174833-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m33s (x7 over 5m33s)  kubelet          Node ha-174833-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m32s                  node-controller  Node ha-174833-m02 event: Registered Node ha-174833-m02 in Controller
	  Normal  RegisteredNode           5m25s                  node-controller  Node ha-174833-m02 event: Registered Node ha-174833-m02 in Controller
	  Normal  RegisteredNode           4m5s                   node-controller  Node ha-174833-m02 event: Registered Node ha-174833-m02 in Controller
	  Normal  NodeNotReady             107s                   node-controller  Node ha-174833-m02 status is now: NodeNotReady
	
	
	Name:               ha-174833-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-174833-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0
	                    minikube.k8s.io/name=ha-174833
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_30T18_42_06_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Oct 2024 18:42:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-174833-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Oct 2024 18:46:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Oct 2024 18:43:04 +0000   Wed, 30 Oct 2024 18:42:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Oct 2024 18:43:04 +0000   Wed, 30 Oct 2024 18:42:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Oct 2024 18:43:04 +0000   Wed, 30 Oct 2024 18:42:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Oct 2024 18:43:04 +0000   Wed, 30 Oct 2024 18:42:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.238
	  Hostname:    ha-174833-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8a25aeed7bbc4bd4a357771ce914b28b
	  System UUID:                8a25aeed-7bbc-4bd4-a357-771ce914b28b
	  Boot ID:                    3552b03e-4535-4240-8adc-99b111c48f7c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rzbbm                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 etcd-ha-174833-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m13s
	  kube-system                 kindnet-b76pd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m14s
	  kube-system                 kube-apiserver-ha-174833-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 kube-controller-manager-ha-174833-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 kube-proxy-g7l7z                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-scheduler-ha-174833-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 kube-vip-ha-174833-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m9s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m14s (x8 over 4m14s)  kubelet          Node ha-174833-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m14s (x8 over 4m14s)  kubelet          Node ha-174833-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m14s (x7 over 4m14s)  kubelet          Node ha-174833-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m12s                  node-controller  Node ha-174833-m03 event: Registered Node ha-174833-m03 in Controller
	  Normal  RegisteredNode           4m10s                  node-controller  Node ha-174833-m03 event: Registered Node ha-174833-m03 in Controller
	  Normal  RegisteredNode           4m5s                   node-controller  Node ha-174833-m03 event: Registered Node ha-174833-m03 in Controller
	
	
	Name:               ha-174833-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-174833-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0
	                    minikube.k8s.io/name=ha-174833
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_30T18_43_15_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Oct 2024 18:43:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-174833-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Oct 2024 18:46:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Oct 2024 18:43:45 +0000   Wed, 30 Oct 2024 18:43:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Oct 2024 18:43:45 +0000   Wed, 30 Oct 2024 18:43:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Oct 2024 18:43:45 +0000   Wed, 30 Oct 2024 18:43:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Oct 2024 18:43:45 +0000   Wed, 30 Oct 2024 18:43:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.123
	  Hostname:    ha-174833-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 65b27c1ce02d45b78ed3fcddd1aae236
	  System UUID:                65b27c1c-e02d-45b7-8ed3-fcddd1aae236
	  Boot ID:                    25699951-947c-4e74-aa23-b7f7f9d75023
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2dhq5       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m3s
	  kube-system                 kube-proxy-nzl42    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m58s                kube-proxy       
	  Normal  CIDRAssignmentFailed     3m3s                 cidrAllocator    Node ha-174833-m04 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  3m3s (x2 over 3m3s)  kubelet          Node ha-174833-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m3s (x2 over 3m3s)  kubelet          Node ha-174833-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m3s (x2 over 3m3s)  kubelet          Node ha-174833-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m2s                 node-controller  Node ha-174833-m04 event: Registered Node ha-174833-m04 in Controller
	  Normal  RegisteredNode           3m                   node-controller  Node ha-174833-m04 event: Registered Node ha-174833-m04 in Controller
	  Normal  RegisteredNode           3m                   node-controller  Node ha-174833-m04 event: Registered Node ha-174833-m04 in Controller
	  Normal  NodeReady                2m42s                kubelet          Node ha-174833-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct30 18:39] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050141] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040202] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.858254] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.508080] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.580074] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.619811] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.059036] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050086] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.189200] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.106863] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.256172] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +3.944359] systemd-fstab-generator[750]: Ignoring "noauto" option for root device
	[  +4.089078] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.056939] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.232740] kauditd_printk_skb: 74 callbacks suppressed
	[  +1.917340] systemd-fstab-generator[1295]: Ignoring "noauto" option for root device
	[  +5.757118] kauditd_printk_skb: 23 callbacks suppressed
	[Oct30 18:40] kauditd_printk_skb: 32 callbacks suppressed
	[ +47.325044] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [381be95e92ca6d096c838a11caffeef70b5bdf29633660086db3a591af3efa3c] <==
	{"level":"warn","ts":"2024-10-30T18:46:16.818688Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:16.872781Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:16.972082Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:17.052457Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:17.072417Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:17.172024Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:17.208833Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:17.216709Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:17.219877Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:17.256183Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:17.262095Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:17.267942Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:17.272305Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:17.274558Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:17.277416Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:17.280590Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:17.285927Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:17.292595Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:17.298862Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:17.302073Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:17.305712Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:17.310084Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:17.327493Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:17.335266Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:17.373604Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:46:17 up 7 min,  0 users,  load average: 0.23, 0.36, 0.20
	Linux ha-174833 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [46301d1401a148dee81e2e167582dcebc8e9ce533e3150308e7eb51799e4f1ef] <==
	I1030 18:45:44.313971       1 main.go:324] Node ha-174833-m04 has CIDR [10.244.3.0/24] 
	I1030 18:45:54.322170       1 main.go:297] Handling node with IPs: map[192.168.39.141:{}]
	I1030 18:45:54.322244       1 main.go:301] handling current node
	I1030 18:45:54.322259       1 main.go:297] Handling node with IPs: map[192.168.39.67:{}]
	I1030 18:45:54.322265       1 main.go:324] Node ha-174833-m02 has CIDR [10.244.1.0/24] 
	I1030 18:45:54.322528       1 main.go:297] Handling node with IPs: map[192.168.39.238:{}]
	I1030 18:45:54.322552       1 main.go:324] Node ha-174833-m03 has CIDR [10.244.2.0/24] 
	I1030 18:45:54.322662       1 main.go:297] Handling node with IPs: map[192.168.39.123:{}]
	I1030 18:45:54.322683       1 main.go:324] Node ha-174833-m04 has CIDR [10.244.3.0/24] 
	I1030 18:46:04.313396       1 main.go:297] Handling node with IPs: map[192.168.39.141:{}]
	I1030 18:46:04.313498       1 main.go:301] handling current node
	I1030 18:46:04.313526       1 main.go:297] Handling node with IPs: map[192.168.39.67:{}]
	I1030 18:46:04.313545       1 main.go:324] Node ha-174833-m02 has CIDR [10.244.1.0/24] 
	I1030 18:46:04.313781       1 main.go:297] Handling node with IPs: map[192.168.39.238:{}]
	I1030 18:46:04.313810       1 main.go:324] Node ha-174833-m03 has CIDR [10.244.2.0/24] 
	I1030 18:46:04.313989       1 main.go:297] Handling node with IPs: map[192.168.39.123:{}]
	I1030 18:46:04.314019       1 main.go:324] Node ha-174833-m04 has CIDR [10.244.3.0/24] 
	I1030 18:46:14.313413       1 main.go:297] Handling node with IPs: map[192.168.39.141:{}]
	I1030 18:46:14.313476       1 main.go:301] handling current node
	I1030 18:46:14.313504       1 main.go:297] Handling node with IPs: map[192.168.39.67:{}]
	I1030 18:46:14.313513       1 main.go:324] Node ha-174833-m02 has CIDR [10.244.1.0/24] 
	I1030 18:46:14.313806       1 main.go:297] Handling node with IPs: map[192.168.39.238:{}]
	I1030 18:46:14.313832       1 main.go:324] Node ha-174833-m03 has CIDR [10.244.2.0/24] 
	I1030 18:46:14.314013       1 main.go:297] Handling node with IPs: map[192.168.39.123:{}]
	I1030 18:46:14.314036       1 main.go:324] Node ha-174833-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [661ed7108dbf576219f0d0850f1ff6e5884d78718a480882365b8f283ab57acb] <==
	I1030 18:39:50.264612       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1030 18:39:50.401162       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1030 18:39:50.407669       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.141]
	I1030 18:39:50.408487       1 controller.go:615] quota admission added evaluator for: endpoints
	I1030 18:39:50.417171       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1030 18:39:50.434785       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1030 18:39:51.992504       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1030 18:39:52.038007       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1030 18:39:52.050097       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1030 18:39:55.887886       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1030 18:39:56.039666       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1030 18:42:42.298130       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41446: use of closed network connection
	E1030 18:42:42.500141       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41460: use of closed network connection
	E1030 18:42:42.681190       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41478: use of closed network connection
	E1030 18:42:42.876163       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41496: use of closed network connection
	E1030 18:42:43.053880       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41524: use of closed network connection
	E1030 18:42:43.422726       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41570: use of closed network connection
	E1030 18:42:43.605703       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41578: use of closed network connection
	E1030 18:42:43.785641       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41594: use of closed network connection
	E1030 18:42:44.079143       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41622: use of closed network connection
	E1030 18:42:44.278108       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41630: use of closed network connection
	E1030 18:42:44.464009       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41654: use of closed network connection
	E1030 18:42:44.647039       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41670: use of closed network connection
	E1030 18:42:44.825565       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41686: use of closed network connection
	E1030 18:42:45.007583       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41704: use of closed network connection
	
	
	==> kube-controller-manager [db863ebdc17e0d81937b5738a5fe433285fd91846fa54c761cf0cd4cd4987b73] <==
	I1030 18:43:14.768963       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:14.886660       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:15.225099       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-174833-m04"
	I1030 18:43:15.270413       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:15.350905       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:17.242429       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:17.306242       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:17.754966       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:17.845608       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:24.906507       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:35.742819       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:35.743714       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-174833-m04"
	I1030 18:43:35.758129       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:37.268796       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:45.220918       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:44:30.252088       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m02"
	I1030 18:44:30.252535       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-174833-m04"
	I1030 18:44:30.280327       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m02"
	I1030 18:44:30.294546       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="25.947854ms"
	I1030 18:44:30.294861       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="105.928µs"
	I1030 18:44:30.441730       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="22.437828ms"
	I1030 18:44:30.442971       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="183.461µs"
	I1030 18:44:32.399995       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m02"
	I1030 18:44:35.500584       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m02"
	I1030 18:45:28.632096       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833"
	
	
	==> kube-proxy [634060e657ba25894ea35d4e8b8429b57de139396a52f53af88876e683de5740] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1030 18:39:57.657528       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1030 18:39:57.672099       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.141"]
	E1030 18:39:57.672270       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1030 18:39:57.707431       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1030 18:39:57.707476       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1030 18:39:57.707498       1 server_linux.go:169] "Using iptables Proxier"
	I1030 18:39:57.710062       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1030 18:39:57.710384       1 server.go:483] "Version info" version="v1.31.2"
	I1030 18:39:57.710412       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1030 18:39:57.711719       1 config.go:199] "Starting service config controller"
	I1030 18:39:57.711756       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1030 18:39:57.711783       1 config.go:105] "Starting endpoint slice config controller"
	I1030 18:39:57.711787       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1030 18:39:57.712612       1 config.go:328] "Starting node config controller"
	I1030 18:39:57.712701       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1030 18:39:57.812186       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1030 18:39:57.812427       1 shared_informer.go:320] Caches are synced for service config
	I1030 18:39:57.813054       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6f0fb508f1f86d4bc039adab40689367f34f5141a90043b092e43663b1f959a6] <==
	W1030 18:39:49.816172       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1030 18:39:49.816268       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1030 18:39:49.949917       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1030 18:39:49.949971       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1030 18:39:49.991072       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1030 18:39:49.991150       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1030 18:39:52.691806       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1030 18:42:33.022088       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-mm586\": pod busybox-7dff88458-mm586 is already assigned to node \"ha-174833-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-mm586" node="ha-174833-m03"
	E1030 18:42:33.022366       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-mm586\": pod busybox-7dff88458-mm586 is already assigned to node \"ha-174833-m02\"" pod="default/busybox-7dff88458-mm586"
	E1030 18:43:14.801891       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-nzl42\": pod kube-proxy-nzl42 is already assigned to node \"ha-174833-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-nzl42" node="ha-174833-m04"
	E1030 18:43:14.807808       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 3291acf1-7798-4998-95fd-5094835e017f(kube-system/kube-proxy-nzl42) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-nzl42"
	E1030 18:43:14.807930       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-nzl42\": pod kube-proxy-nzl42 is already assigned to node \"ha-174833-m04\"" pod="kube-system/kube-proxy-nzl42"
	I1030 18:43:14.809848       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-nzl42" node="ha-174833-m04"
	E1030 18:43:14.810858       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-ptwbp\": pod kindnet-ptwbp is already assigned to node \"ha-174833-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-ptwbp" node="ha-174833-m04"
	E1030 18:43:14.814494       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 3144d47c-0cef-414b-b657-6a3c10ada751(kube-system/kindnet-ptwbp) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-ptwbp"
	E1030 18:43:14.814760       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-ptwbp\": pod kindnet-ptwbp is already assigned to node \"ha-174833-m04\"" pod="kube-system/kindnet-ptwbp"
	I1030 18:43:14.814869       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-ptwbp" node="ha-174833-m04"
	E1030 18:43:14.859158       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-vp4bf\": pod kube-proxy-vp4bf is already assigned to node \"ha-174833-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-vp4bf" node="ha-174833-m04"
	E1030 18:43:14.859832       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 51293c2a-e424-4d2b-a692-1d8df3e4eb88(kube-system/kube-proxy-vp4bf) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-vp4bf"
	E1030 18:43:14.860153       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-vp4bf\": pod kube-proxy-vp4bf is already assigned to node \"ha-174833-m04\"" pod="kube-system/kube-proxy-vp4bf"
	I1030 18:43:14.860458       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-vp4bf" node="ha-174833-m04"
	E1030 18:43:14.864834       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-dsxh6\": pod kindnet-dsxh6 is already assigned to node \"ha-174833-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-dsxh6" node="ha-174833-m04"
	E1030 18:43:14.866342       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 3cf9c20d-84c1-4bd6-8f34-453bee8cc673(kube-system/kindnet-dsxh6) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-dsxh6"
	E1030 18:43:14.866529       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-dsxh6\": pod kindnet-dsxh6 is already assigned to node \"ha-174833-m04\"" pod="kube-system/kindnet-dsxh6"
	I1030 18:43:14.866552       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-dsxh6" node="ha-174833-m04"
	
	
	==> kubelet <==
	Oct 30 18:44:51 ha-174833 kubelet[1302]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 30 18:44:51 ha-174833 kubelet[1302]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 30 18:44:52 ha-174833 kubelet[1302]: E1030 18:44:52.044104    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313892043714010,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:44:52 ha-174833 kubelet[1302]: E1030 18:44:52.044143    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313892043714010,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:02 ha-174833 kubelet[1302]: E1030 18:45:02.047183    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313902046655317,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:02 ha-174833 kubelet[1302]: E1030 18:45:02.047499    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313902046655317,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:12 ha-174833 kubelet[1302]: E1030 18:45:12.048946    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313912048650625,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:12 ha-174833 kubelet[1302]: E1030 18:45:12.049303    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313912048650625,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:22 ha-174833 kubelet[1302]: E1030 18:45:22.050794    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313922050546484,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:22 ha-174833 kubelet[1302]: E1030 18:45:22.050834    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313922050546484,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:32 ha-174833 kubelet[1302]: E1030 18:45:32.053552    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313932053080303,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:32 ha-174833 kubelet[1302]: E1030 18:45:32.053658    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313932053080303,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:42 ha-174833 kubelet[1302]: E1030 18:45:42.055784    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313942055446197,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:42 ha-174833 kubelet[1302]: E1030 18:45:42.056077    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313942055446197,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:51 ha-174833 kubelet[1302]: E1030 18:45:51.922951    1302 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 30 18:45:51 ha-174833 kubelet[1302]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 30 18:45:51 ha-174833 kubelet[1302]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 30 18:45:51 ha-174833 kubelet[1302]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 30 18:45:51 ha-174833 kubelet[1302]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 30 18:45:52 ha-174833 kubelet[1302]: E1030 18:45:52.058449    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313952057983888,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:52 ha-174833 kubelet[1302]: E1030 18:45:52.058518    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313952057983888,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:46:02 ha-174833 kubelet[1302]: E1030 18:46:02.060855    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313962060627661,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:46:02 ha-174833 kubelet[1302]: E1030 18:46:02.060895    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313962060627661,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:46:12 ha-174833 kubelet[1302]: E1030 18:46:12.062294    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313972061933470,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:46:12 ha-174833 kubelet[1302]: E1030 18:46:12.062632    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313972061933470,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-174833 -n ha-174833
helpers_test.go:261: (dbg) Run:  kubectl --context ha-174833 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.66s)
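A note on the etcd output in the post-mortem log above: the repeated "dropped internal Raft message since sending buffer is full (overloaded network)" warnings all name remote peer e95b9b8b1a72dec4 with "remote-peer-active":false, i.e. heartbeats queued for the control-plane member that this test just stopped. The sketch below (not part of the test suite; an assumption-laden illustration only) lists the etcd members from the surviving node so the hex peer ID in those warnings can be matched to a member name. The endpoint and certificate paths are hypothetical kubeadm/minikube-style locations and would need adjusting to the actual cluster layout.

// Minimal sketch, assuming the etcd v3 Go client and kubeadm-style client
// certificates; endpoint and paths below are assumptions, not taken from the log.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"go.etcd.io/etcd/client/pkg/v3/transport"
	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Hypothetical certificate locations (adjust to the node's real cert dir).
	tlsInfo := transport.TLSInfo{
		CertFile:      "/var/lib/minikube/certs/etcd/healthcheck-client.crt",
		KeyFile:       "/var/lib/minikube/certs/etcd/healthcheck-client.key",
		TrustedCAFile: "/var/lib/minikube/certs/etcd/ca.crt",
	}
	tlsConfig, err := tlsInfo.ClientConfig()
	if err != nil {
		log.Fatal(err)
	}

	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"https://127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
		TLS:         tlsConfig,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	resp, err := cli.MemberList(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, m := range resp.Members {
		// Member IDs print in hex, matching IDs such as 2398e045949c73cb
		// and e95b9b8b1a72dec4 in the etcd warnings above.
		fmt.Printf("%x  %s  %v\n", m.ID, m.Name, m.PeerURLs)
	}
}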

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.386904367s)
ha_test.go:415: expected profile "ha-174833" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-174833\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-174833\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-174833\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.141\",\"Port\":8443,\"Kube
rnetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.67\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.238\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.123\",\"Port\":0,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevi
rt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\
",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-174833 -n ha-174833
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-174833 logs -n 25: (1.3769s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-174833 cp ha-174833-m03:/home/docker/cp-test.txt                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1983504479/001/cp-test_ha-174833-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-174833 cp ha-174833-m03:/home/docker/cp-test.txt                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833:/home/docker/cp-test_ha-174833-m03_ha-174833.txt                       |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n ha-174833 sudo cat                                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | /home/docker/cp-test_ha-174833-m03_ha-174833.txt                                 |           |         |         |                     |                     |
	| cp      | ha-174833 cp ha-174833-m03:/home/docker/cp-test.txt                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m02:/home/docker/cp-test_ha-174833-m03_ha-174833-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n ha-174833-m02 sudo cat                                          | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | /home/docker/cp-test_ha-174833-m03_ha-174833-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-174833 cp ha-174833-m03:/home/docker/cp-test.txt                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m04:/home/docker/cp-test_ha-174833-m03_ha-174833-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n ha-174833-m04 sudo cat                                          | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | /home/docker/cp-test_ha-174833-m03_ha-174833-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-174833 cp testdata/cp-test.txt                                                | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-174833 cp ha-174833-m04:/home/docker/cp-test.txt                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1983504479/001/cp-test_ha-174833-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-174833 cp ha-174833-m04:/home/docker/cp-test.txt                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833:/home/docker/cp-test_ha-174833-m04_ha-174833.txt                       |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n ha-174833 sudo cat                                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | /home/docker/cp-test_ha-174833-m04_ha-174833.txt                                 |           |         |         |                     |                     |
	| cp      | ha-174833 cp ha-174833-m04:/home/docker/cp-test.txt                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m02:/home/docker/cp-test_ha-174833-m04_ha-174833-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n ha-174833-m02 sudo cat                                          | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | /home/docker/cp-test_ha-174833-m04_ha-174833-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-174833 cp ha-174833-m04:/home/docker/cp-test.txt                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m03:/home/docker/cp-test_ha-174833-m04_ha-174833-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n ha-174833-m03 sudo cat                                          | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | /home/docker/cp-test_ha-174833-m04_ha-174833-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-174833 node stop m02 -v=7                                                     | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/30 18:39:13
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1030 18:39:13.284465  400041 out.go:345] Setting OutFile to fd 1 ...
	I1030 18:39:13.284583  400041 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 18:39:13.284591  400041 out.go:358] Setting ErrFile to fd 2...
	I1030 18:39:13.284596  400041 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 18:39:13.284767  400041 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19883-381834/.minikube/bin
	I1030 18:39:13.285341  400041 out.go:352] Setting JSON to false
	I1030 18:39:13.286279  400041 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":8496,"bootTime":1730305057,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1030 18:39:13.286383  400041 start.go:139] virtualization: kvm guest
	I1030 18:39:13.288640  400041 out.go:177] * [ha-174833] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1030 18:39:13.290653  400041 notify.go:220] Checking for updates...
	I1030 18:39:13.290717  400041 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 18:39:13.292349  400041 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 18:39:13.293858  400041 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 18:39:13.295309  400041 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 18:39:13.296710  400041 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1030 18:39:13.298107  400041 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 18:39:13.299548  400041 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 18:39:13.333903  400041 out.go:177] * Using the kvm2 driver based on user configuration
	I1030 18:39:13.335174  400041 start.go:297] selected driver: kvm2
	I1030 18:39:13.335194  400041 start.go:901] validating driver "kvm2" against <nil>
	I1030 18:39:13.335206  400041 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 18:39:13.335896  400041 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 18:39:13.336007  400041 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19883-381834/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1030 18:39:13.350868  400041 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1030 18:39:13.350946  400041 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1030 18:39:13.351232  400041 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 18:39:13.351271  400041 cni.go:84] Creating CNI manager for ""
	I1030 18:39:13.351324  400041 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1030 18:39:13.351332  400041 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1030 18:39:13.351398  400041 start.go:340] cluster config:
	{Name:ha-174833 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-174833 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1030 18:39:13.351547  400041 iso.go:125] acquiring lock: {Name:mk4a5115b605a7a5e9069193daa664d721189792 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 18:39:13.353340  400041 out.go:177] * Starting "ha-174833" primary control-plane node in "ha-174833" cluster
	I1030 18:39:13.354531  400041 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 18:39:13.354568  400041 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1030 18:39:13.354580  400041 cache.go:56] Caching tarball of preloaded images
	I1030 18:39:13.354663  400041 preload.go:172] Found /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1030 18:39:13.354676  400041 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1030 18:39:13.355016  400041 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/config.json ...
	I1030 18:39:13.355043  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/config.json: {Name:mkc5b46cd8e85bcdd2d75c56d8807d384c7babe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:39:13.355179  400041 start.go:360] acquireMachinesLock for ha-174833: {Name:mk35e25f53fa8cfadb39ca0ecdccfc2b3fbe845b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 18:39:13.355220  400041 start.go:364] duration metric: took 25.55µs to acquireMachinesLock for "ha-174833"
	I1030 18:39:13.355242  400041 start.go:93] Provisioning new machine with config: &{Name:ha-174833 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:ha-174833 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 18:39:13.355302  400041 start.go:125] createHost starting for "" (driver="kvm2")
	I1030 18:39:13.356866  400041 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1030 18:39:13.357003  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:39:13.357058  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:39:13.371132  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37311
	I1030 18:39:13.371590  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:39:13.372159  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:39:13.372180  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:39:13.372504  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:39:13.372689  400041 main.go:141] libmachine: (ha-174833) Calling .GetMachineName
	I1030 18:39:13.372808  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:39:13.372956  400041 start.go:159] libmachine.API.Create for "ha-174833" (driver="kvm2")
	I1030 18:39:13.372989  400041 client.go:168] LocalClient.Create starting
	I1030 18:39:13.373021  400041 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem
	I1030 18:39:13.373056  400041 main.go:141] libmachine: Decoding PEM data...
	I1030 18:39:13.373078  400041 main.go:141] libmachine: Parsing certificate...
	I1030 18:39:13.373144  400041 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem
	I1030 18:39:13.373168  400041 main.go:141] libmachine: Decoding PEM data...
	I1030 18:39:13.373183  400041 main.go:141] libmachine: Parsing certificate...
	I1030 18:39:13.373207  400041 main.go:141] libmachine: Running pre-create checks...
	I1030 18:39:13.373219  400041 main.go:141] libmachine: (ha-174833) Calling .PreCreateCheck
	I1030 18:39:13.373569  400041 main.go:141] libmachine: (ha-174833) Calling .GetConfigRaw
	I1030 18:39:13.373996  400041 main.go:141] libmachine: Creating machine...
	I1030 18:39:13.374012  400041 main.go:141] libmachine: (ha-174833) Calling .Create
	I1030 18:39:13.374145  400041 main.go:141] libmachine: (ha-174833) Creating KVM machine...
	I1030 18:39:13.375320  400041 main.go:141] libmachine: (ha-174833) DBG | found existing default KVM network
	I1030 18:39:13.375998  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:13.375838  400064 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002111f0}
	I1030 18:39:13.376021  400041 main.go:141] libmachine: (ha-174833) DBG | created network xml: 
	I1030 18:39:13.376034  400041 main.go:141] libmachine: (ha-174833) DBG | <network>
	I1030 18:39:13.376048  400041 main.go:141] libmachine: (ha-174833) DBG |   <name>mk-ha-174833</name>
	I1030 18:39:13.376057  400041 main.go:141] libmachine: (ha-174833) DBG |   <dns enable='no'/>
	I1030 18:39:13.376066  400041 main.go:141] libmachine: (ha-174833) DBG |   
	I1030 18:39:13.376076  400041 main.go:141] libmachine: (ha-174833) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1030 18:39:13.376085  400041 main.go:141] libmachine: (ha-174833) DBG |     <dhcp>
	I1030 18:39:13.376097  400041 main.go:141] libmachine: (ha-174833) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1030 18:39:13.376112  400041 main.go:141] libmachine: (ha-174833) DBG |     </dhcp>
	I1030 18:39:13.376121  400041 main.go:141] libmachine: (ha-174833) DBG |   </ip>
	I1030 18:39:13.376134  400041 main.go:141] libmachine: (ha-174833) DBG |   
	I1030 18:39:13.376145  400041 main.go:141] libmachine: (ha-174833) DBG | </network>
	I1030 18:39:13.376153  400041 main.go:141] libmachine: (ha-174833) DBG | 
	I1030 18:39:13.380994  400041 main.go:141] libmachine: (ha-174833) DBG | trying to create private KVM network mk-ha-174833 192.168.39.0/24...
	I1030 18:39:13.444397  400041 main.go:141] libmachine: (ha-174833) DBG | private KVM network mk-ha-174833 192.168.39.0/24 created
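
The XML dumped above is what gets handed to libvirt to define the private mk-ha-174833 network. A small sketch of rendering that kind of definition from a template is shown below; the template text and the netParams struct are assumptions modelled on the logged output, not minikube's source.

    package main

    import (
        "log"
        "os"
        "text/template"
    )

    // netParams holds the values that vary per cluster in the logged network XML.
    type netParams struct {
        Name, Gateway, Netmask, DHCPStart, DHCPEnd string
    }

    const networkTmpl = `<network>
      <name>{{.Name}}</name>
      <dns enable='no'/>
      <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
        <dhcp>
          <range start='{{.DHCPStart}}' end='{{.DHCPEnd}}'/>
        </dhcp>
      </ip>
    </network>
    `

    func main() {
        tmpl := template.Must(template.New("net").Parse(networkTmpl))
        p := netParams{
            Name:      "mk-ha-174833",
            Gateway:   "192.168.39.1",
            Netmask:   "255.255.255.0",
            DHCPStart: "192.168.39.2",
            DHCPEnd:   "192.168.39.253",
        }
        // The rendered XML could then be handed to libvirt (e.g. via `virsh net-define`).
        if err := tmpl.Execute(os.Stdout, p); err != nil {
            log.Fatal(err)
        }
    }
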
	I1030 18:39:13.444439  400041 main.go:141] libmachine: (ha-174833) Setting up store path in /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833 ...
	I1030 18:39:13.444454  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:13.444367  400064 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 18:39:13.444474  400041 main.go:141] libmachine: (ha-174833) Building disk image from file:///home/jenkins/minikube-integration/19883-381834/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1030 18:39:13.444565  400041 main.go:141] libmachine: (ha-174833) Downloading /home/jenkins/minikube-integration/19883-381834/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19883-381834/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1030 18:39:13.725521  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:13.725350  400064 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa...
	I1030 18:39:13.832228  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:13.832066  400064 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/ha-174833.rawdisk...
	I1030 18:39:13.832262  400041 main.go:141] libmachine: (ha-174833) DBG | Writing magic tar header
	I1030 18:39:13.832279  400041 main.go:141] libmachine: (ha-174833) DBG | Writing SSH key tar header
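
The "Writing magic tar header" / "Writing SSH key tar header" lines refer to the boot2docker-style trick of placing a tiny tar stream (a marker entry plus the freshly generated public key) at the start of the raw disk, which the guest detects and expands on first boot; the rest of the file stays sparse at the requested size. A rough sketch of that idea follows; the marker name, key path, output path, and disk size are assumptions for illustration only.

    package main

    import (
        "archive/tar"
        "log"
        "os"
    )

    func main() {
        const diskPath = "ha-174833.rawdisk"   // assumed output path
        const diskSize = 20000 * 1024 * 1024   // 20000MB, matching the requested disk size

        pubKey, err := os.ReadFile("id_rsa.pub") // assumed key location
        if err != nil {
            log.Fatal(err)
        }

        f, err := os.Create(diskPath)
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        // Write a tiny tar archive at the start of the disk.
        tw := tar.NewWriter(f)
        // Marker entry first ("magic tar header" in the log); the name here is an assumption.
        if err := tw.WriteHeader(&tar.Header{Name: "magic", Mode: 0o644, Size: 0}); err != nil {
            log.Fatal(err)
        }
        // Then the public key, so the guest can install it as an authorized key.
        if err := tw.WriteHeader(&tar.Header{
            Name: ".ssh/authorized_keys",
            Mode: 0o644,
            Size: int64(len(pubKey)),
        }); err != nil {
            log.Fatal(err)
        }
        if _, err := tw.Write(pubKey); err != nil {
            log.Fatal(err)
        }
        if err := tw.Close(); err != nil {
            log.Fatal(err)
        }

        // Extend the file to the full disk size without allocating blocks (sparse file).
        if err := f.Truncate(diskSize); err != nil {
            log.Fatal(err)
        }
    }
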
	I1030 18:39:13.832291  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:13.832203  400064 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833 ...
	I1030 18:39:13.832302  400041 main.go:141] libmachine: (ha-174833) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833
	I1030 18:39:13.832373  400041 main.go:141] libmachine: (ha-174833) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833 (perms=drwx------)
	I1030 18:39:13.832401  400041 main.go:141] libmachine: (ha-174833) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube/machines (perms=drwxr-xr-x)
	I1030 18:39:13.832414  400041 main.go:141] libmachine: (ha-174833) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube/machines
	I1030 18:39:13.832431  400041 main.go:141] libmachine: (ha-174833) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 18:39:13.832442  400041 main.go:141] libmachine: (ha-174833) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834
	I1030 18:39:13.832452  400041 main.go:141] libmachine: (ha-174833) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1030 18:39:13.832462  400041 main.go:141] libmachine: (ha-174833) DBG | Checking permissions on dir: /home/jenkins
	I1030 18:39:13.832473  400041 main.go:141] libmachine: (ha-174833) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube (perms=drwxr-xr-x)
	I1030 18:39:13.832490  400041 main.go:141] libmachine: (ha-174833) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834 (perms=drwxrwxr-x)
	I1030 18:39:13.832506  400041 main.go:141] libmachine: (ha-174833) DBG | Checking permissions on dir: /home
	I1030 18:39:13.832517  400041 main.go:141] libmachine: (ha-174833) DBG | Skipping /home - not owner
	I1030 18:39:13.832528  400041 main.go:141] libmachine: (ha-174833) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1030 18:39:13.832538  400041 main.go:141] libmachine: (ha-174833) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1030 18:39:13.832550  400041 main.go:141] libmachine: (ha-174833) Creating domain...
	I1030 18:39:13.833717  400041 main.go:141] libmachine: (ha-174833) define libvirt domain using xml: 
	I1030 18:39:13.833738  400041 main.go:141] libmachine: (ha-174833) <domain type='kvm'>
	I1030 18:39:13.833744  400041 main.go:141] libmachine: (ha-174833)   <name>ha-174833</name>
	I1030 18:39:13.833752  400041 main.go:141] libmachine: (ha-174833)   <memory unit='MiB'>2200</memory>
	I1030 18:39:13.833758  400041 main.go:141] libmachine: (ha-174833)   <vcpu>2</vcpu>
	I1030 18:39:13.833762  400041 main.go:141] libmachine: (ha-174833)   <features>
	I1030 18:39:13.833766  400041 main.go:141] libmachine: (ha-174833)     <acpi/>
	I1030 18:39:13.833770  400041 main.go:141] libmachine: (ha-174833)     <apic/>
	I1030 18:39:13.833774  400041 main.go:141] libmachine: (ha-174833)     <pae/>
	I1030 18:39:13.833794  400041 main.go:141] libmachine: (ha-174833)     
	I1030 18:39:13.833807  400041 main.go:141] libmachine: (ha-174833)   </features>
	I1030 18:39:13.833814  400041 main.go:141] libmachine: (ha-174833)   <cpu mode='host-passthrough'>
	I1030 18:39:13.833838  400041 main.go:141] libmachine: (ha-174833)   
	I1030 18:39:13.833857  400041 main.go:141] libmachine: (ha-174833)   </cpu>
	I1030 18:39:13.833863  400041 main.go:141] libmachine: (ha-174833)   <os>
	I1030 18:39:13.833868  400041 main.go:141] libmachine: (ha-174833)     <type>hvm</type>
	I1030 18:39:13.833884  400041 main.go:141] libmachine: (ha-174833)     <boot dev='cdrom'/>
	I1030 18:39:13.833888  400041 main.go:141] libmachine: (ha-174833)     <boot dev='hd'/>
	I1030 18:39:13.833904  400041 main.go:141] libmachine: (ha-174833)     <bootmenu enable='no'/>
	I1030 18:39:13.833912  400041 main.go:141] libmachine: (ha-174833)   </os>
	I1030 18:39:13.833917  400041 main.go:141] libmachine: (ha-174833)   <devices>
	I1030 18:39:13.833922  400041 main.go:141] libmachine: (ha-174833)     <disk type='file' device='cdrom'>
	I1030 18:39:13.834007  400041 main.go:141] libmachine: (ha-174833)       <source file='/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/boot2docker.iso'/>
	I1030 18:39:13.834043  400041 main.go:141] libmachine: (ha-174833)       <target dev='hdc' bus='scsi'/>
	I1030 18:39:13.834066  400041 main.go:141] libmachine: (ha-174833)       <readonly/>
	I1030 18:39:13.834080  400041 main.go:141] libmachine: (ha-174833)     </disk>
	I1030 18:39:13.834092  400041 main.go:141] libmachine: (ha-174833)     <disk type='file' device='disk'>
	I1030 18:39:13.834107  400041 main.go:141] libmachine: (ha-174833)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1030 18:39:13.834134  400041 main.go:141] libmachine: (ha-174833)       <source file='/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/ha-174833.rawdisk'/>
	I1030 18:39:13.834146  400041 main.go:141] libmachine: (ha-174833)       <target dev='hda' bus='virtio'/>
	I1030 18:39:13.834163  400041 main.go:141] libmachine: (ha-174833)     </disk>
	I1030 18:39:13.834179  400041 main.go:141] libmachine: (ha-174833)     <interface type='network'>
	I1030 18:39:13.834191  400041 main.go:141] libmachine: (ha-174833)       <source network='mk-ha-174833'/>
	I1030 18:39:13.834199  400041 main.go:141] libmachine: (ha-174833)       <model type='virtio'/>
	I1030 18:39:13.834204  400041 main.go:141] libmachine: (ha-174833)     </interface>
	I1030 18:39:13.834213  400041 main.go:141] libmachine: (ha-174833)     <interface type='network'>
	I1030 18:39:13.834219  400041 main.go:141] libmachine: (ha-174833)       <source network='default'/>
	I1030 18:39:13.834228  400041 main.go:141] libmachine: (ha-174833)       <model type='virtio'/>
	I1030 18:39:13.834233  400041 main.go:141] libmachine: (ha-174833)     </interface>
	I1030 18:39:13.834244  400041 main.go:141] libmachine: (ha-174833)     <serial type='pty'>
	I1030 18:39:13.834261  400041 main.go:141] libmachine: (ha-174833)       <target port='0'/>
	I1030 18:39:13.834275  400041 main.go:141] libmachine: (ha-174833)     </serial>
	I1030 18:39:13.834287  400041 main.go:141] libmachine: (ha-174833)     <console type='pty'>
	I1030 18:39:13.834295  400041 main.go:141] libmachine: (ha-174833)       <target type='serial' port='0'/>
	I1030 18:39:13.834310  400041 main.go:141] libmachine: (ha-174833)     </console>
	I1030 18:39:13.834320  400041 main.go:141] libmachine: (ha-174833)     <rng model='virtio'>
	I1030 18:39:13.834333  400041 main.go:141] libmachine: (ha-174833)       <backend model='random'>/dev/random</backend>
	I1030 18:39:13.834342  400041 main.go:141] libmachine: (ha-174833)     </rng>
	I1030 18:39:13.834351  400041 main.go:141] libmachine: (ha-174833)     
	I1030 18:39:13.834359  400041 main.go:141] libmachine: (ha-174833)     
	I1030 18:39:13.834368  400041 main.go:141] libmachine: (ha-174833)   </devices>
	I1030 18:39:13.834377  400041 main.go:141] libmachine: (ha-174833) </domain>
	I1030 18:39:13.834388  400041 main.go:141] libmachine: (ha-174833) 
	I1030 18:39:13.838852  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:67:40:5d in network default
	I1030 18:39:13.839421  400041 main.go:141] libmachine: (ha-174833) Ensuring networks are active...
	I1030 18:39:13.839441  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:13.840041  400041 main.go:141] libmachine: (ha-174833) Ensuring network default is active
	I1030 18:39:13.840342  400041 main.go:141] libmachine: (ha-174833) Ensuring network mk-ha-174833 is active
	I1030 18:39:13.840783  400041 main.go:141] libmachine: (ha-174833) Getting domain xml...
	I1030 18:39:13.841490  400041 main.go:141] libmachine: (ha-174833) Creating domain...
	I1030 18:39:15.028258  400041 main.go:141] libmachine: (ha-174833) Waiting to get IP...
	I1030 18:39:15.029201  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:15.029564  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:15.029614  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:15.029561  400064 retry.go:31] will retry after 241.896456ms: waiting for machine to come up
	I1030 18:39:15.272995  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:15.273461  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:15.273488  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:15.273413  400064 retry.go:31] will retry after 260.838664ms: waiting for machine to come up
	I1030 18:39:15.535845  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:15.536295  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:15.536316  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:15.536255  400064 retry.go:31] will retry after 479.733534ms: waiting for machine to come up
	I1030 18:39:16.017897  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:16.018269  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:16.018294  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:16.018228  400064 retry.go:31] will retry after 392.371571ms: waiting for machine to come up
	I1030 18:39:16.412626  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:16.413050  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:16.413080  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:16.412991  400064 retry.go:31] will retry after 692.689396ms: waiting for machine to come up
	I1030 18:39:17.106954  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:17.107478  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:17.107955  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:17.107422  400064 retry.go:31] will retry after 832.987361ms: waiting for machine to come up
	I1030 18:39:17.942300  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:17.942709  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:17.942756  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:17.942670  400064 retry.go:31] will retry after 1.191938703s: waiting for machine to come up
	I1030 18:39:19.135752  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:19.136105  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:19.136132  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:19.136082  400064 retry.go:31] will retry after 978.475739ms: waiting for machine to come up
	I1030 18:39:20.116239  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:20.116734  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:20.116762  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:20.116673  400064 retry.go:31] will retry after 1.671512667s: waiting for machine to come up
	I1030 18:39:21.790628  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:21.791129  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:21.791157  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:21.791069  400064 retry.go:31] will retry after 2.145808112s: waiting for machine to come up
	I1030 18:39:23.938308  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:23.938724  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:23.938750  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:23.938677  400064 retry.go:31] will retry after 2.206607406s: waiting for machine to come up
	I1030 18:39:26.148104  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:26.148464  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:26.148498  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:26.148437  400064 retry.go:31] will retry after 3.57155807s: waiting for machine to come up
	I1030 18:39:29.721895  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:29.722283  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:29.722306  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:29.722235  400064 retry.go:31] will retry after 4.087469223s: waiting for machine to come up
	I1030 18:39:33.811039  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:33.811489  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has current primary IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:33.811515  400041 main.go:141] libmachine: (ha-174833) Found IP for machine: 192.168.39.141
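
The repeated "will retry after …: waiting for machine to come up" lines are a poll loop: the driver keeps checking the domain's DHCP lease for an IP, sleeping a little longer (with jitter) each time until one appears. A minimal sketch of that pattern is below; lookupIP is a hypothetical stand-in for however the lease is really queried (for example, parsing `virsh net-dhcp-leases`).

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP is a hypothetical placeholder for reading the domain's DHCP lease.
    func lookupIP(mac string) (string, error) {
        return "", errors.New("unable to find current IP address")
    }

    // waitForIP polls until an IP appears, backing off with jitter like the log shows.
    func waitForIP(mac string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(mac); err == nil {
                return ip, nil
            }
            // Add jitter so retries from parallel creations don't line up.
            sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            if delay < 4*time.Second {
                delay *= 2
            }
        }
        return "", fmt.Errorf("machine %s did not get an IP within %v", mac, timeout)
    }

    func main() {
        if ip, err := waitForIP("52:54:00:fd:5e:ca", 5*time.Second); err != nil {
            fmt.Println("error:", err)
        } else {
            fmt.Println("found IP:", ip)
        }
    }
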
	I1030 18:39:33.811524  400041 main.go:141] libmachine: (ha-174833) Reserving static IP address...
	I1030 18:39:33.811821  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find host DHCP lease matching {name: "ha-174833", mac: "52:54:00:fd:5e:ca", ip: "192.168.39.141"} in network mk-ha-174833
	I1030 18:39:33.884143  400041 main.go:141] libmachine: (ha-174833) Reserved static IP address: 192.168.39.141
	I1030 18:39:33.884173  400041 main.go:141] libmachine: (ha-174833) DBG | Getting to WaitForSSH function...
	I1030 18:39:33.884180  400041 main.go:141] libmachine: (ha-174833) Waiting for SSH to be available...
	I1030 18:39:33.886594  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:33.886971  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:33.886995  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:33.887140  400041 main.go:141] libmachine: (ha-174833) DBG | Using SSH client type: external
	I1030 18:39:33.887229  400041 main.go:141] libmachine: (ha-174833) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa (-rw-------)
	I1030 18:39:33.887264  400041 main.go:141] libmachine: (ha-174833) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.141 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 18:39:33.887276  400041 main.go:141] libmachine: (ha-174833) DBG | About to run SSH command:
	I1030 18:39:33.887284  400041 main.go:141] libmachine: (ha-174833) DBG | exit 0
	I1030 18:39:34.010284  400041 main.go:141] libmachine: (ha-174833) DBG | SSH cmd err, output: <nil>: 
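
WaitForSSH above shells out to the system ssh binary with the logged options and runs `exit 0` until the command succeeds. A stripped-down version of that probe using exec.Command is sketched below; the host and user come from the log, while the key path placeholder and the fixed retry policy are assumptions.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // sshReady runs `exit 0` over ssh with the same kind of options seen in the log.
    func sshReady(user, host, keyPath string) bool {
        args := []string{
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            fmt.Sprintf("%s@%s", user, host),
            "exit 0",
        }
        return exec.Command("ssh", args...).Run() == nil
    }

    func main() {
        const host = "192.168.39.141" // from the log
        key := "./id_rsa"             // hypothetical path to the machine's generated private key
        for i := 0; i < 30; i++ {
            if sshReady("docker", host, key) {
                fmt.Println("SSH is available")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for SSH")
    }
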
	I1030 18:39:34.010612  400041 main.go:141] libmachine: (ha-174833) KVM machine creation complete!
	I1030 18:39:34.010940  400041 main.go:141] libmachine: (ha-174833) Calling .GetConfigRaw
	I1030 18:39:34.011543  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:39:34.011721  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:39:34.011891  400041 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1030 18:39:34.011905  400041 main.go:141] libmachine: (ha-174833) Calling .GetState
	I1030 18:39:34.013168  400041 main.go:141] libmachine: Detecting operating system of created instance...
	I1030 18:39:34.013181  400041 main.go:141] libmachine: Waiting for SSH to be available...
	I1030 18:39:34.013186  400041 main.go:141] libmachine: Getting to WaitForSSH function...
	I1030 18:39:34.013192  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:34.015485  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.015842  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:34.015874  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.015997  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:34.016168  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:34.016323  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:34.016452  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:34.016738  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:39:34.016961  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1030 18:39:34.016974  400041 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1030 18:39:34.117708  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 18:39:34.117732  400041 main.go:141] libmachine: Detecting the provisioner...
	I1030 18:39:34.117739  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:34.120384  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.120816  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:34.120860  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.120990  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:34.121177  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:34.121322  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:34.121422  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:34.121534  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:39:34.121721  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1030 18:39:34.121734  400041 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1030 18:39:34.222936  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1030 18:39:34.223027  400041 main.go:141] libmachine: found compatible host: buildroot
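
Provisioner detection works by running `cat /etc/os-release` on the guest and matching fields such as ID and NAME ("buildroot" here). A small sketch of parsing that key=value format is below, using the output captured above as its input.

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // parseOSRelease turns /etc/os-release content into a key/value map,
    // stripping the optional quotes around values.
    func parseOSRelease(content string) map[string]string {
        out := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(content))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || strings.HasPrefix(line, "#") {
                continue
            }
            k, v, ok := strings.Cut(line, "=")
            if !ok {
                continue
            }
            out[k] = strings.Trim(v, `"`)
        }
        return out
    }

    func main() {
        // The exact output captured in the log above.
        osRelease := `NAME=Buildroot
    VERSION=2023.02.9-dirty
    ID=buildroot
    VERSION_ID=2023.02.9
    PRETTY_NAME="Buildroot 2023.02.9"`

        info := parseOSRelease(osRelease)
        fmt.Printf("found compatible host: %s (%s)\n", info["ID"], info["PRETTY_NAME"])
    }
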
	I1030 18:39:34.223040  400041 main.go:141] libmachine: Provisioning with buildroot...
	I1030 18:39:34.223052  400041 main.go:141] libmachine: (ha-174833) Calling .GetMachineName
	I1030 18:39:34.223321  400041 buildroot.go:166] provisioning hostname "ha-174833"
	I1030 18:39:34.223356  400041 main.go:141] libmachine: (ha-174833) Calling .GetMachineName
	I1030 18:39:34.223546  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:34.225998  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.226300  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:34.226323  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.226503  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:34.226662  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:34.226803  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:34.226914  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:34.227040  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:39:34.227266  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1030 18:39:34.227279  400041 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-174833 && echo "ha-174833" | sudo tee /etc/hostname
	I1030 18:39:34.340995  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-174833
	
	I1030 18:39:34.341029  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:34.343841  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.344138  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:34.344167  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.344368  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:34.344558  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:34.344679  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:34.344790  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:34.344900  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:39:34.345070  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1030 18:39:34.345090  400041 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-174833' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-174833/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-174833' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 18:39:34.455073  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
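
The shell snippet above makes sure /etc/hosts carries a 127.0.1.1 entry for the new hostname: it replaces an existing 127.0.1.1 line if there is one, otherwise appends one, and does nothing when the name is already present. Below is a rough Go equivalent of that text transformation, operating on an in-memory string rather than the real file.

    package main

    import (
        "fmt"
        "strings"
    )

    // ensureHostname mirrors the logged shell: if no line already ends with the
    // hostname, rewrite an existing "127.0.1.1 ..." line or append a new one.
    func ensureHostname(hosts, name string) string {
        lines := strings.Split(hosts, "\n")
        for _, l := range lines {
            f := strings.Fields(l)
            if len(f) > 0 && f[len(f)-1] == name {
                return hosts // already present
            }
        }
        for i, l := range lines {
            if strings.HasPrefix(strings.TrimSpace(l), "127.0.1.1") {
                lines[i] = "127.0.1.1 " + name
                return strings.Join(lines, "\n")
            }
        }
        return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
    }

    func main() {
        hosts := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
        fmt.Print(ensureHostname(hosts, "ha-174833"))
    }
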
	I1030 18:39:34.455103  400041 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 18:39:34.455126  400041 buildroot.go:174] setting up certificates
	I1030 18:39:34.455146  400041 provision.go:84] configureAuth start
	I1030 18:39:34.455156  400041 main.go:141] libmachine: (ha-174833) Calling .GetMachineName
	I1030 18:39:34.455453  400041 main.go:141] libmachine: (ha-174833) Calling .GetIP
	I1030 18:39:34.458160  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.458507  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:34.458546  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.458737  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:34.461111  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.461454  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:34.461482  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.461548  400041 provision.go:143] copyHostCerts
	I1030 18:39:34.461581  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 18:39:34.461633  400041 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem, removing ...
	I1030 18:39:34.461648  400041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 18:39:34.461713  400041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 18:39:34.461793  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 18:39:34.461811  400041 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem, removing ...
	I1030 18:39:34.461816  400041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 18:39:34.461840  400041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 18:39:34.461880  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 18:39:34.461896  400041 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem, removing ...
	I1030 18:39:34.461902  400041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 18:39:34.461922  400041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 18:39:34.461968  400041 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.ha-174833 san=[127.0.0.1 192.168.39.141 ha-174833 localhost minikube]
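
The server certificate is generated on the host and signed with the minikube CA, with the machine's addresses and names from the san=[...] list above as subject alternative names. A compact sketch of issuing such a certificate with crypto/x509 follows; it creates a throwaway CA in process instead of loading ca.pem/ca-key.pem from disk, and the 26280h validity simply mirrors the CertExpiration value in the config dump earlier.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "log"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Throwaway CA (the real flow loads ca.pem / ca-key.pem from the minikube dir).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert carrying the SANs seen in the log.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "ha-174833", Organization: []string{"jenkins.ha-174833"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-174833", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.141")},
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("issued server cert, %d bytes DER\n", len(srvDER))
    }
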
	I1030 18:39:34.715502  400041 provision.go:177] copyRemoteCerts
	I1030 18:39:34.715567  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 18:39:34.715593  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:34.718337  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.718724  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:34.718750  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.718905  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:34.719124  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:34.719316  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:34.719438  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:39:34.802134  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1030 18:39:34.802247  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1030 18:39:34.830405  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1030 18:39:34.830495  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 18:39:34.853312  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1030 18:39:34.853400  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1030 18:39:34.876622  400041 provision.go:87] duration metric: took 421.460858ms to configureAuth
	I1030 18:39:34.876654  400041 buildroot.go:189] setting minikube options for container-runtime
	I1030 18:39:34.876860  400041 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:39:34.876973  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:34.879465  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.879875  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:34.879918  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.880033  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:34.880249  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:34.880401  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:34.880547  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:34.880711  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:39:34.880901  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1030 18:39:34.880922  400041 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 18:39:35.107739  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 18:39:35.107767  400041 main.go:141] libmachine: Checking connection to Docker...
	I1030 18:39:35.107789  400041 main.go:141] libmachine: (ha-174833) Calling .GetURL
	I1030 18:39:35.109044  400041 main.go:141] libmachine: (ha-174833) DBG | Using libvirt version 6000000
	I1030 18:39:35.111179  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.111531  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:35.111555  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.111678  400041 main.go:141] libmachine: Docker is up and running!
	I1030 18:39:35.111690  400041 main.go:141] libmachine: Reticulating splines...
	I1030 18:39:35.111698  400041 client.go:171] duration metric: took 21.738698891s to LocalClient.Create
	I1030 18:39:35.111719  400041 start.go:167] duration metric: took 21.738765345s to libmachine.API.Create "ha-174833"
	I1030 18:39:35.111730  400041 start.go:293] postStartSetup for "ha-174833" (driver="kvm2")
	I1030 18:39:35.111740  400041 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 18:39:35.111756  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:39:35.111994  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 18:39:35.112024  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:35.114247  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.114535  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:35.114564  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.114645  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:35.114802  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:35.114905  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:35.115037  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:39:35.197105  400041 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 18:39:35.201419  400041 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 18:39:35.201446  400041 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 18:39:35.201521  400041 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 18:39:35.201638  400041 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 18:39:35.201653  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> /etc/ssl/certs/3891442.pem
	I1030 18:39:35.201776  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 18:39:35.211530  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 18:39:35.234121  400041 start.go:296] duration metric: took 122.377861ms for postStartSetup
	I1030 18:39:35.234182  400041 main.go:141] libmachine: (ha-174833) Calling .GetConfigRaw
	I1030 18:39:35.234814  400041 main.go:141] libmachine: (ha-174833) Calling .GetIP
	I1030 18:39:35.237333  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.237649  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:35.237675  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.237930  400041 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/config.json ...
	I1030 18:39:35.238105  400041 start.go:128] duration metric: took 21.882791468s to createHost
	I1030 18:39:35.238129  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:35.240449  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.240793  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:35.240819  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.240925  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:35.241105  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:35.241241  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:35.241360  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:35.241504  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:39:35.241675  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1030 18:39:35.241684  400041 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 18:39:35.343143  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730313575.316321849
	
	I1030 18:39:35.343172  400041 fix.go:216] guest clock: 1730313575.316321849
	I1030 18:39:35.343179  400041 fix.go:229] Guest: 2024-10-30 18:39:35.316321849 +0000 UTC Remote: 2024-10-30 18:39:35.238116722 +0000 UTC m=+21.992904276 (delta=78.205127ms)
	I1030 18:39:35.343224  400041 fix.go:200] guest clock delta is within tolerance: 78.205127ms
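
The clock check runs `date +%s.%N` on the guest, parses the seconds.nanoseconds result, and compares it against the host's wall clock; here the ~78ms delta is within tolerance, so nothing is adjusted. A small sketch of that comparison is below, reusing the timestamps captured in the log; the one-second tolerance is an assumption.

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "time"
    )

    // parseGuestClock converts `date +%s.%N` output (seconds.nanoseconds) into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
        secs, err := strconv.ParseFloat(out, 64)
        if err != nil {
            return time.Time{}, err
        }
        sec := int64(secs)
        nsec := int64((secs - float64(sec)) * 1e9)
        return time.Unix(sec, nsec), nil
    }

    func main() {
        const tolerance = time.Second // assumed tolerance

        guest, err := parseGuestClock("1730313575.316321849") // value captured in the log
        if err != nil {
            panic(err)
        }
        host := time.Unix(1730313575, 238116722) // stand-in for time.Now() on the host

        delta := guest.Sub(host)
        if math.Abs(float64(delta)) <= float64(tolerance) {
            fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
        } else {
            fmt.Printf("adjusting guest clock, delta %v exceeds %v\n", delta, tolerance)
        }
    }
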
	I1030 18:39:35.343236  400041 start.go:83] releasing machines lock for "ha-174833", held for 21.988006549s
	I1030 18:39:35.343264  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:39:35.343537  400041 main.go:141] libmachine: (ha-174833) Calling .GetIP
	I1030 18:39:35.345918  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.346202  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:35.346227  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.346384  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:39:35.346845  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:39:35.347029  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:39:35.347110  400041 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 18:39:35.347154  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:35.347263  400041 ssh_runner.go:195] Run: cat /version.json
	I1030 18:39:35.347290  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:35.349953  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.350154  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.350349  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:35.350372  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.350476  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:35.350518  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.350532  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:35.350712  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:35.350796  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:35.350983  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:35.351005  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:35.351121  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:39:35.351158  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:35.351287  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:39:35.446752  400041 ssh_runner.go:195] Run: systemctl --version
	I1030 18:39:35.452799  400041 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 18:39:35.607404  400041 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 18:39:35.613689  400041 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 18:39:35.613765  400041 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 18:39:35.629322  400041 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 18:39:35.629356  400041 start.go:495] detecting cgroup driver to use...
	I1030 18:39:35.629426  400041 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 18:39:35.645369  400041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 18:39:35.659484  400041 docker.go:217] disabling cri-docker service (if available) ...
	I1030 18:39:35.659560  400041 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 18:39:35.673617  400041 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 18:39:35.686829  400041 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 18:39:35.798982  400041 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 18:39:35.961093  400041 docker.go:233] disabling docker service ...
	I1030 18:39:35.961203  400041 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 18:39:35.975451  400041 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 18:39:35.987814  400041 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 18:39:36.096019  400041 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 18:39:36.200364  400041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 18:39:36.213767  400041 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 18:39:36.231649  400041 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1030 18:39:36.231720  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:39:36.241504  400041 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 18:39:36.241612  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:39:36.251200  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:39:36.260995  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:39:36.270677  400041 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 18:39:36.280585  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:39:36.290337  400041 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:39:36.306289  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
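
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: they pin pause_image to registry.k8s.io/pause:3.10, force cgroup_manager to cgroupfs, set conmon_cgroup to pod, and inject net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. Below is a sketch of the same kind of line-oriented rewrite done with Go's regexp package on an in-memory string; the sample config content is made up for illustration.

    package main

    import (
        "fmt"
        "regexp"
    )

    // setConfValue replaces any existing `key = ...` line with `key = "value"`,
    // mirroring the `sed -i 's|^.*key = .*$|...|'` calls in the log.
    func setConfValue(conf, key, value string) string {
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        return re.ReplaceAllString(conf, fmt.Sprintf(`%s = "%s"`, key, value))
    }

    func main() {
        conf := `[crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "system.slice"
    `
        conf = setConfValue(conf, "pause_image", "registry.k8s.io/pause:3.10")
        conf = setConfValue(conf, "cgroup_manager", "cgroupfs")
        conf = setConfValue(conf, "conmon_cgroup", "pod")
        fmt.Print(conf)
    }
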
	I1030 18:39:36.316095  400041 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 18:39:36.325059  400041 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 18:39:36.325116  400041 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 18:39:36.338276  400041 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
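
The three commands above deal with bridge netfilter: the sysctl probe fails because br_netfilter is not loaded yet, so the module is loaded with modprobe and IPv4 forwarding is switched on by writing to /proc. A small sketch of that check-then-load-then-enable sequence is below; it assumes a Linux host and root privileges.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const knob = "/proc/sys/net/bridge/bridge-nf-call-iptables"

        // The sysctl knob only exists once br_netfilter is loaded.
        if _, err := os.Stat(knob); err != nil {
            fmt.Println("bridge netfilter not available, loading br_netfilter:", err)
            if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
                fmt.Printf("modprobe failed: %v: %s\n", err, out)
                return
            }
        }

        // Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
            fmt.Println("could not enable ip_forward:", err)
            return
        }
        fmt.Println("bridge netfilter and IPv4 forwarding are configured")
    }
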
	I1030 18:39:36.347428  400041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 18:39:36.458431  400041 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1030 18:39:36.549399  400041 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 18:39:36.549481  400041 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 18:39:36.554177  400041 start.go:563] Will wait 60s for crictl version
	I1030 18:39:36.554235  400041 ssh_runner.go:195] Run: which crictl
	I1030 18:39:36.557819  400041 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 18:39:36.597751  400041 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1030 18:39:36.597863  400041 ssh_runner.go:195] Run: crio --version
	I1030 18:39:36.625326  400041 ssh_runner.go:195] Run: crio --version
	I1030 18:39:36.656926  400041 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1030 18:39:36.658453  400041 main.go:141] libmachine: (ha-174833) Calling .GetIP
	I1030 18:39:36.661076  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:36.661520  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:36.661551  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:36.661753  400041 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1030 18:39:36.665623  400041 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
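Note: the bash one-liner above rewrites /etc/hosts in place so the guest can reach the host side of the mk-ha-174833 network by a stable name; afterwards the file should contain the injected entry:
    # Confirm the host record written by the command above.
    grep host.minikube.internal /etc/hosts
    # 192.168.39.1	host.minikube.internal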
	I1030 18:39:36.678283  400041 kubeadm.go:883] updating cluster {Name:ha-174833 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-174833 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1030 18:39:36.678415  400041 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 18:39:36.678476  400041 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 18:39:36.710390  400041 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1030 18:39:36.710476  400041 ssh_runner.go:195] Run: which lz4
	I1030 18:39:36.714335  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1030 18:39:36.714421  400041 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1030 18:39:36.718401  400041 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1030 18:39:36.718426  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1030 18:39:37.991420  400041 crio.go:462] duration metric: took 1.277020496s to copy over tarball
	I1030 18:39:37.991500  400041 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1030 18:39:40.058678  400041 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.067148582s)
	I1030 18:39:40.058707  400041 crio.go:469] duration metric: took 2.067258506s to extract the tarball
	I1030 18:39:40.058717  400041 ssh_runner.go:146] rm: /preloaded.tar.lz4
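Note: after the preload tarball is extracted into /var, the `sudo crictl images` call on the next line confirms the images are present; the same check can be run by hand, for example:
    # Confirm the preloaded control-plane images landed in CRI-O's image store.
    sudo crictl images | grep registry.k8s.io/kube-apiserver
    # Expected for this run: registry.k8s.io/kube-apiserver ... v1.31.2 ...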
	I1030 18:39:40.095680  400041 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 18:39:40.139024  400041 crio.go:514] all images are preloaded for cri-o runtime.
	I1030 18:39:40.139051  400041 cache_images.go:84] Images are preloaded, skipping loading
	I1030 18:39:40.139060  400041 kubeadm.go:934] updating node { 192.168.39.141 8443 v1.31.2 crio true true} ...
	I1030 18:39:40.139194  400041 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-174833 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.141
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-174833 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1030 18:39:40.139268  400041 ssh_runner.go:195] Run: crio config
	I1030 18:39:40.182736  400041 cni.go:84] Creating CNI manager for ""
	I1030 18:39:40.182762  400041 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1030 18:39:40.182776  400041 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1030 18:39:40.182809  400041 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.141 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-174833 NodeName:ha-174833 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.141"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.141 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1030 18:39:40.182965  400041 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.141
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-174833"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.141"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.141"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1030 18:39:40.182991  400041 kube-vip.go:115] generating kube-vip config ...
	I1030 18:39:40.183041  400041 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1030 18:39:40.198922  400041 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1030 18:39:40.199067  400041 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
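Note: the manifest above is written as a static pod (see the kube-vip.yaml scp below); its job is to hold the HA virtual IP 192.168.39.254 on eth0 of whichever control-plane node wins leader election. A quick way to confirm the VIP has been claimed, run on the node, might be:
    # Check that kube-vip has attached the control-plane VIP to eth0.
    ip addr show dev eth0 | grep 192.168.39.254
    # And that the kube-vip static pod container is running:
    sudo crictl ps --name kube-vip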
	I1030 18:39:40.199141  400041 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1030 18:39:40.208739  400041 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 18:39:40.208814  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1030 18:39:40.217747  400041 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1030 18:39:40.233431  400041 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 18:39:40.249487  400041 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
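Note: the rendered kubeadm config is staged on the node as /var/tmp/minikube/kubeadm.yaml.new and later promoted to kubeadm.yaml by the cp further down; assuming the profile name from this run, it can be read back from the host with something like:
    # Read back the kubeadm config that was just copied onto the control-plane VM.
    minikube ssh -p ha-174833 "sudo cat /var/tmp/minikube/kubeadm.yaml.new"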
	I1030 18:39:40.265703  400041 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1030 18:39:40.282041  400041 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1030 18:39:40.285892  400041 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 18:39:40.297652  400041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 18:39:40.407338  400041 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 18:39:40.424747  400041 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833 for IP: 192.168.39.141
	I1030 18:39:40.424777  400041 certs.go:194] generating shared ca certs ...
	I1030 18:39:40.424817  400041 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:39:40.425024  400041 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 18:39:40.425082  400041 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 18:39:40.425095  400041 certs.go:256] generating profile certs ...
	I1030 18:39:40.425172  400041 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.key
	I1030 18:39:40.425193  400041 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.crt with IP's: []
	I1030 18:39:40.472361  400041 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.crt ...
	I1030 18:39:40.472390  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.crt: {Name:mkc5230ad33247edd4a8c72c6c48a87fa9cedd3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:39:40.472564  400041 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.key ...
	I1030 18:39:40.472575  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.key: {Name:mk2476b29598bb2a9232a00c23240eb0f41fcc47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:39:40.472659  400041 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.8a55aae0
	I1030 18:39:40.472675  400041 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.8a55aae0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.141 192.168.39.254]
	I1030 18:39:40.623668  400041 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.8a55aae0 ...
	I1030 18:39:40.623703  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.8a55aae0: {Name:mk527af1a36a41edb105de0ac73afcba6a07951e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:39:40.623865  400041 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.8a55aae0 ...
	I1030 18:39:40.623878  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.8a55aae0: {Name:mk9d3db1edca5a6647774a57300dfc12ee759cac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:39:40.623943  400041 certs.go:381] copying /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.8a55aae0 -> /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt
	I1030 18:39:40.624014  400041 certs.go:385] copying /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.8a55aae0 -> /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key
	I1030 18:39:40.624064  400041 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key
	I1030 18:39:40.624080  400041 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.crt with IP's: []
	I1030 18:39:40.681800  400041 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.crt ...
	I1030 18:39:40.681833  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.crt: {Name:mke6c9a4a487817027f382c9db962d8a5023b692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:39:40.681991  400041 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key ...
	I1030 18:39:40.682001  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key: {Name:mkcef517ac3b25f9738ab0dc212031ff215f0337 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:39:40.682069  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1030 18:39:40.682086  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1030 18:39:40.682097  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1030 18:39:40.682118  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1030 18:39:40.682131  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1030 18:39:40.682142  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1030 18:39:40.682154  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1030 18:39:40.682166  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
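Note: the apiserver certificate generated above is signed for the service VIP (10.96.0.1), localhost, the node IP and the HA VIP; its SANs can be double-checked with openssl (path as used in this run):
    # List the Subject Alternative Names baked into the generated apiserver cert.
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt \
      | grep -A1 'Subject Alternative Name'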
	I1030 18:39:40.682213  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem (1338 bytes)
	W1030 18:39:40.682246  400041 certs.go:480] ignoring /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144_empty.pem, impossibly tiny 0 bytes
	I1030 18:39:40.682256  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 18:39:40.682279  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 18:39:40.682301  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 18:39:40.682325  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 18:39:40.682365  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 18:39:40.682398  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> /usr/share/ca-certificates/3891442.pem
	I1030 18:39:40.682412  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:39:40.682432  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem -> /usr/share/ca-certificates/389144.pem
	I1030 18:39:40.683028  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 18:39:40.708651  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 18:39:40.731313  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 18:39:40.753734  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 18:39:40.776131  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1030 18:39:40.799436  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1030 18:39:40.822746  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 18:39:40.845786  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1030 18:39:40.869789  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /usr/share/ca-certificates/3891442.pem (1708 bytes)
	I1030 18:39:40.893594  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 18:39:40.916381  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem --> /usr/share/ca-certificates/389144.pem (1338 bytes)
	I1030 18:39:40.939683  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1030 18:39:40.956310  400041 ssh_runner.go:195] Run: openssl version
	I1030 18:39:40.962024  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3891442.pem && ln -fs /usr/share/ca-certificates/3891442.pem /etc/ssl/certs/3891442.pem"
	I1030 18:39:40.972261  400041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3891442.pem
	I1030 18:39:40.976598  400041 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 18:39:40.976650  400041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3891442.pem
	I1030 18:39:40.982403  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3891442.pem /etc/ssl/certs/3ec20f2e.0"
	I1030 18:39:40.992755  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 18:39:41.003221  400041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:39:41.007653  400041 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:39:41.007709  400041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:39:41.013218  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 18:39:41.023594  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/389144.pem && ln -fs /usr/share/ca-certificates/389144.pem /etc/ssl/certs/389144.pem"
	I1030 18:39:41.033911  400041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/389144.pem
	I1030 18:39:41.038607  400041 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 18:39:41.038673  400041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/389144.pem
	I1030 18:39:41.044095  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/389144.pem /etc/ssl/certs/51391683.0"
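Note: the pattern above is the standard OpenSSL CA directory layout — each certificate is placed in /usr/share/ca-certificates and symlinked into /etc/ssl/certs under its subject hash with a ".0" suffix, which is where the names 3ec20f2e.0, b5213941.0 and 51391683.0 come from. For one certificate the equivalent manual steps are roughly:
    # How the <hash>.0 symlinks above are derived (sketch for the minikube CA).
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"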
	I1030 18:39:41.054143  400041 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 18:39:41.058096  400041 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1030 18:39:41.058161  400041 kubeadm.go:392] StartCluster: {Name:ha-174833 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-174833 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 18:39:41.058251  400041 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1030 18:39:41.058301  400041 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 18:39:41.095584  400041 cri.go:89] found id: ""
	I1030 18:39:41.095649  400041 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1030 18:39:41.105071  400041 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 18:39:41.114164  400041 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 18:39:41.122895  400041 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 18:39:41.122908  400041 kubeadm.go:157] found existing configuration files:
	
	I1030 18:39:41.122941  400041 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 18:39:41.131529  400041 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 18:39:41.131566  400041 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 18:39:41.140275  400041 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 18:39:41.148757  400041 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 18:39:41.148813  400041 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 18:39:41.160794  400041 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 18:39:41.184302  400041 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 18:39:41.184383  400041 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 18:39:41.207263  400041 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 18:39:41.228026  400041 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 18:39:41.228102  400041 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1030 18:39:41.237111  400041 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1030 18:39:41.445375  400041 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1030 18:39:52.585541  400041 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1030 18:39:52.585616  400041 kubeadm.go:310] [preflight] Running pre-flight checks
	I1030 18:39:52.585710  400041 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1030 18:39:52.585832  400041 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1030 18:39:52.585956  400041 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1030 18:39:52.586025  400041 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1030 18:39:52.587620  400041 out.go:235]   - Generating certificates and keys ...
	I1030 18:39:52.587688  400041 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1030 18:39:52.587761  400041 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1030 18:39:52.587836  400041 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1030 18:39:52.587896  400041 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1030 18:39:52.587987  400041 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1030 18:39:52.588061  400041 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1030 18:39:52.588139  400041 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1030 18:39:52.588270  400041 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-174833 localhost] and IPs [192.168.39.141 127.0.0.1 ::1]
	I1030 18:39:52.588347  400041 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1030 18:39:52.588511  400041 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-174833 localhost] and IPs [192.168.39.141 127.0.0.1 ::1]
	I1030 18:39:52.588616  400041 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1030 18:39:52.588707  400041 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1030 18:39:52.588773  400041 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1030 18:39:52.588839  400041 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1030 18:39:52.588887  400041 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1030 18:39:52.588932  400041 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1030 18:39:52.589004  400041 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1030 18:39:52.589094  400041 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1030 18:39:52.589146  400041 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1030 18:39:52.589229  400041 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1030 18:39:52.589332  400041 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1030 18:39:52.590758  400041 out.go:235]   - Booting up control plane ...
	I1030 18:39:52.590844  400041 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1030 18:39:52.590916  400041 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1030 18:39:52.590968  400041 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1030 18:39:52.591065  400041 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1030 18:39:52.591191  400041 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1030 18:39:52.591253  400041 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1030 18:39:52.591410  400041 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1030 18:39:52.591536  400041 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1030 18:39:52.591616  400041 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.003124871s
	I1030 18:39:52.591709  400041 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1030 18:39:52.591794  400041 kubeadm.go:310] [api-check] The API server is healthy after 5.662047877s
	I1030 18:39:52.591944  400041 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1030 18:39:52.592125  400041 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1030 18:39:52.592192  400041 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1030 18:39:52.592401  400041 kubeadm.go:310] [mark-control-plane] Marking the node ha-174833 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1030 18:39:52.592456  400041 kubeadm.go:310] [bootstrap-token] Using token: g2rz2p.8nzvncljb4xmvqws
	I1030 18:39:52.593760  400041 out.go:235]   - Configuring RBAC rules ...
	I1030 18:39:52.593856  400041 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1030 18:39:52.593940  400041 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1030 18:39:52.594118  400041 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1030 18:39:52.594304  400041 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1030 18:39:52.594473  400041 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1030 18:39:52.594624  400041 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1030 18:39:52.594785  400041 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1030 18:39:52.594849  400041 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1030 18:39:52.594921  400041 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1030 18:39:52.594940  400041 kubeadm.go:310] 
	I1030 18:39:52.594996  400041 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1030 18:39:52.595002  400041 kubeadm.go:310] 
	I1030 18:39:52.595066  400041 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1030 18:39:52.595072  400041 kubeadm.go:310] 
	I1030 18:39:52.595106  400041 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1030 18:39:52.595167  400041 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1030 18:39:52.595211  400041 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1030 18:39:52.595217  400041 kubeadm.go:310] 
	I1030 18:39:52.595262  400041 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1030 18:39:52.595268  400041 kubeadm.go:310] 
	I1030 18:39:52.595323  400041 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1030 18:39:52.595331  400041 kubeadm.go:310] 
	I1030 18:39:52.595374  400041 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1030 18:39:52.595436  400041 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1030 18:39:52.595501  400041 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1030 18:39:52.595508  400041 kubeadm.go:310] 
	I1030 18:39:52.595599  400041 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1030 18:39:52.595699  400041 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1030 18:39:52.595708  400041 kubeadm.go:310] 
	I1030 18:39:52.595831  400041 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token g2rz2p.8nzvncljb4xmvqws \
	I1030 18:39:52.595945  400041 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b2143b9018fc79be6f35b8981f64b262448bd09f7227405c8dafa29481e81491 \
	I1030 18:39:52.595970  400041 kubeadm.go:310] 	--control-plane 
	I1030 18:39:52.595975  400041 kubeadm.go:310] 
	I1030 18:39:52.596043  400041 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1030 18:39:52.596049  400041 kubeadm.go:310] 
	I1030 18:39:52.596119  400041 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token g2rz2p.8nzvncljb4xmvqws \
	I1030 18:39:52.596231  400041 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b2143b9018fc79be6f35b8981f64b262448bd09f7227405c8dafa29481e81491 
	I1030 18:39:52.596243  400041 cni.go:84] Creating CNI manager for ""
	I1030 18:39:52.596250  400041 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1030 18:39:52.597696  400041 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1030 18:39:52.598955  400041 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1030 18:39:52.605469  400041 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1030 18:39:52.605483  400041 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1030 18:39:52.624363  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
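Note: the manifest applied above is minikube's bundled kindnet CNI; once the apiserver is reachable its rollout can be checked with kubectl. This is a sketch that assumes the DaemonSet/label names from minikube's bundled manifest and that KUBECONFIG points at the kubeconfig written by this run:
    # Verify the kindnet CNI DaemonSet created by the manifest applied above.
    kubectl --context ha-174833 -n kube-system get daemonset kindnet
    kubectl --context ha-174833 -n kube-system get pods -l app=kindnet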
	I1030 18:39:53.005173  400041 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1030 18:39:53.005262  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:39:53.005262  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-174833 minikube.k8s.io/updated_at=2024_10_30T18_39_53_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0 minikube.k8s.io/name=ha-174833 minikube.k8s.io/primary=true
	I1030 18:39:53.173403  400041 ops.go:34] apiserver oom_adj: -16
	I1030 18:39:53.173409  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:39:53.674475  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:39:54.173792  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:39:54.673541  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:39:55.174225  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:39:55.674171  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:39:55.765485  400041 kubeadm.go:1113] duration metric: took 2.760286908s to wait for elevateKubeSystemPrivileges
	I1030 18:39:55.765536  400041 kubeadm.go:394] duration metric: took 14.707379512s to StartCluster
	I1030 18:39:55.765560  400041 settings.go:142] acquiring lock: {Name:mk3778f95b8c1775512fd8244c173f81fde2f8a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:39:55.765652  400041 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 18:39:55.766341  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/kubeconfig: {Name:mkad9ac9beb9742fc1a420e6d4412db8b65962e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:39:55.766618  400041 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1030 18:39:55.766613  400041 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 18:39:55.766643  400041 start.go:241] waiting for startup goroutines ...
	I1030 18:39:55.766652  400041 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1030 18:39:55.766742  400041 addons.go:69] Setting storage-provisioner=true in profile "ha-174833"
	I1030 18:39:55.766762  400041 addons.go:234] Setting addon storage-provisioner=true in "ha-174833"
	I1030 18:39:55.766765  400041 addons.go:69] Setting default-storageclass=true in profile "ha-174833"
	I1030 18:39:55.766787  400041 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-174833"
	I1030 18:39:55.766793  400041 host.go:66] Checking if "ha-174833" exists ...
	I1030 18:39:55.766837  400041 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:39:55.767201  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:39:55.767204  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:39:55.767229  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:39:55.767233  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:39:55.782451  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33151
	I1030 18:39:55.783028  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:39:55.783605  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:39:55.783632  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:39:55.783733  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40119
	I1030 18:39:55.784013  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:39:55.784063  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:39:55.784233  400041 main.go:141] libmachine: (ha-174833) Calling .GetState
	I1030 18:39:55.784551  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:39:55.784576  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:39:55.784948  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:39:55.785512  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:39:55.785543  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:39:55.786284  400041 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 18:39:55.786639  400041 kapi.go:59] client config for ha-174833: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.crt", KeyFile:"/home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.key", CAFile:"/home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439fe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1030 18:39:55.787187  400041 cert_rotation.go:140] Starting client certificate rotation controller
	I1030 18:39:55.787507  400041 addons.go:234] Setting addon default-storageclass=true in "ha-174833"
	I1030 18:39:55.787549  400041 host.go:66] Checking if "ha-174833" exists ...
	I1030 18:39:55.787801  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:39:55.787828  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:39:55.801215  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37805
	I1030 18:39:55.801753  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:39:55.802347  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:39:55.802374  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:39:55.802582  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34105
	I1030 18:39:55.802754  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:39:55.802945  400041 main.go:141] libmachine: (ha-174833) Calling .GetState
	I1030 18:39:55.802995  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:39:55.803462  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:39:55.803485  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:39:55.803870  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:39:55.804468  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:39:55.804521  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:39:55.804806  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:39:55.807396  400041 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 18:39:55.808701  400041 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 18:39:55.808721  400041 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1030 18:39:55.808736  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:55.812067  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:55.812493  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:55.812517  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:55.812683  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:55.812860  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:55.813040  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:55.813181  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:39:55.820594  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34503
	I1030 18:39:55.821053  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:39:55.821596  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:39:55.821614  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:39:55.821907  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:39:55.822100  400041 main.go:141] libmachine: (ha-174833) Calling .GetState
	I1030 18:39:55.823784  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:39:55.824021  400041 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1030 18:39:55.824035  400041 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1030 18:39:55.824050  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:55.826783  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:55.827199  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:55.827215  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:55.827366  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:55.827540  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:55.827698  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:55.827825  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:39:55.887739  400041 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1030 18:39:55.976821  400041 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1030 18:39:55.987770  400041 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 18:39:56.358196  400041 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
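Note: the sed pipeline a few lines above rewrites the coredns ConfigMap so that host.minikube.internal resolves to 192.168.39.1 inside the cluster; the patched Corefile can be inspected as below (assumes KUBECONFIG points at the kubeconfig written by this run, context named after the profile):
    # Inspect the patched Corefile; the injected hosts block should read:
    #     hosts {
    #        192.168.39.1 host.minikube.internal
    #        fallthrough
    #     }
    kubectl --context ha-174833 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'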
	I1030 18:39:56.358229  400041 main.go:141] libmachine: Making call to close driver server
	I1030 18:39:56.358248  400041 main.go:141] libmachine: (ha-174833) Calling .Close
	I1030 18:39:56.358534  400041 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:39:56.358554  400041 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:39:56.358563  400041 main.go:141] libmachine: Making call to close driver server
	I1030 18:39:56.358570  400041 main.go:141] libmachine: (ha-174833) Calling .Close
	I1030 18:39:56.358835  400041 main.go:141] libmachine: (ha-174833) DBG | Closing plugin on server side
	I1030 18:39:56.358837  400041 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:39:56.358856  400041 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:39:56.358917  400041 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1030 18:39:56.358934  400041 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1030 18:39:56.359097  400041 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1030 18:39:56.359111  400041 round_trippers.go:469] Request Headers:
	I1030 18:39:56.359120  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:39:56.359128  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:39:56.431588  400041 round_trippers.go:574] Response Status: 200 OK in 72 milliseconds
	I1030 18:39:56.432175  400041 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1030 18:39:56.432191  400041 round_trippers.go:469] Request Headers:
	I1030 18:39:56.432198  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:39:56.432202  400041 round_trippers.go:473]     Content-Type: application/json
	I1030 18:39:56.432205  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:39:56.436115  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
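The round_trippers lines above record the API traffic used while enabling the default-storageclass addon: a GET of all storageclasses followed by a PUT of "standard" against the HA VIP on port 8443. A hedged sketch of issuing the same two requests with plain net/http follows; the bearer token, request body, and TLS handling are placeholders, since the real client uses client-go with the kubeconfig credentials.

package main

import (
	"bytes"
	"crypto/tls"
	"fmt"
	"net/http"
)

// Illustrative only: reproduce the GET/PUT pair from the round_trippers log
// lines, including the Accept and Content-Type headers they show.
func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
	}}
	base := "https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses"

	get, _ := http.NewRequest("GET", base, nil)
	get.Header.Set("Accept", "application/json, */*")
	get.Header.Set("Authorization", "Bearer <token>") // placeholder credential
	if resp, err := client.Do(get); err == nil {
		fmt.Println("GET status:", resp.Status)
		resp.Body.Close()
	}

	put, _ := http.NewRequest("PUT", base+"/standard", bytes.NewBufferString(`{}`)) // body elided
	put.Header.Set("Accept", "application/json, */*")
	put.Header.Set("Content-Type", "application/json")
	if resp, err := client.Do(put); err == nil {
		fmt.Println("PUT status:", resp.Status)
		resp.Body.Close()
	}
}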
	I1030 18:39:56.436287  400041 main.go:141] libmachine: Making call to close driver server
	I1030 18:39:56.436303  400041 main.go:141] libmachine: (ha-174833) Calling .Close
	I1030 18:39:56.436618  400041 main.go:141] libmachine: (ha-174833) DBG | Closing plugin on server side
	I1030 18:39:56.436664  400041 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:39:56.436672  400041 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:39:56.590846  400041 main.go:141] libmachine: Making call to close driver server
	I1030 18:39:56.590868  400041 main.go:141] libmachine: (ha-174833) Calling .Close
	I1030 18:39:56.591203  400041 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:39:56.591227  400041 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:39:56.591236  400041 main.go:141] libmachine: Making call to close driver server
	I1030 18:39:56.591244  400041 main.go:141] libmachine: (ha-174833) Calling .Close
	I1030 18:39:56.591478  400041 main.go:141] libmachine: (ha-174833) DBG | Closing plugin on server side
	I1030 18:39:56.591507  400041 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:39:56.591514  400041 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:39:56.593000  400041 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1030 18:39:56.594031  400041 addons.go:510] duration metric: took 827.372801ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1030 18:39:56.594084  400041 start.go:246] waiting for cluster config update ...
	I1030 18:39:56.594100  400041 start.go:255] writing updated cluster config ...
	I1030 18:39:56.595822  400041 out.go:201] 
	I1030 18:39:56.597023  400041 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:39:56.597115  400041 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/config.json ...
	I1030 18:39:56.598537  400041 out.go:177] * Starting "ha-174833-m02" control-plane node in "ha-174833" cluster
	I1030 18:39:56.599471  400041 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 18:39:56.599502  400041 cache.go:56] Caching tarball of preloaded images
	I1030 18:39:56.599603  400041 preload.go:172] Found /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1030 18:39:56.599621  400041 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1030 18:39:56.599722  400041 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/config.json ...
	I1030 18:39:56.599927  400041 start.go:360] acquireMachinesLock for ha-174833-m02: {Name:mk35e25f53fa8cfadb39ca0ecdccfc2b3fbe845b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 18:39:56.599988  400041 start.go:364] duration metric: took 32.769µs to acquireMachinesLock for "ha-174833-m02"
	I1030 18:39:56.600025  400041 start.go:93] Provisioning new machine with config: &{Name:ha-174833 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-174833 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 18:39:56.600106  400041 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1030 18:39:56.601604  400041 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1030 18:39:56.601698  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:39:56.601732  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:39:56.616291  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39929
	I1030 18:39:56.616777  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:39:56.617304  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:39:56.617323  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:39:56.617636  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:39:56.617791  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetMachineName
	I1030 18:39:56.617923  400041 main.go:141] libmachine: (ha-174833-m02) Calling .DriverName
	I1030 18:39:56.618073  400041 start.go:159] libmachine.API.Create for "ha-174833" (driver="kvm2")
	I1030 18:39:56.618098  400041 client.go:168] LocalClient.Create starting
	I1030 18:39:56.618131  400041 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem
	I1030 18:39:56.618179  400041 main.go:141] libmachine: Decoding PEM data...
	I1030 18:39:56.618201  400041 main.go:141] libmachine: Parsing certificate...
	I1030 18:39:56.618275  400041 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem
	I1030 18:39:56.618304  400041 main.go:141] libmachine: Decoding PEM data...
	I1030 18:39:56.618320  400041 main.go:141] libmachine: Parsing certificate...
	I1030 18:39:56.618344  400041 main.go:141] libmachine: Running pre-create checks...
	I1030 18:39:56.618355  400041 main.go:141] libmachine: (ha-174833-m02) Calling .PreCreateCheck
	I1030 18:39:56.618511  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetConfigRaw
	I1030 18:39:56.618831  400041 main.go:141] libmachine: Creating machine...
	I1030 18:39:56.618844  400041 main.go:141] libmachine: (ha-174833-m02) Calling .Create
	I1030 18:39:56.618962  400041 main.go:141] libmachine: (ha-174833-m02) Creating KVM machine...
	I1030 18:39:56.620046  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found existing default KVM network
	I1030 18:39:56.620129  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found existing private KVM network mk-ha-174833
	I1030 18:39:56.620269  400041 main.go:141] libmachine: (ha-174833-m02) Setting up store path in /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02 ...
	I1030 18:39:56.620295  400041 main.go:141] libmachine: (ha-174833-m02) Building disk image from file:///home/jenkins/minikube-integration/19883-381834/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1030 18:39:56.620361  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:39:56.620250  400406 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 18:39:56.620446  400041 main.go:141] libmachine: (ha-174833-m02) Downloading /home/jenkins/minikube-integration/19883-381834/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19883-381834/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1030 18:39:56.895932  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:39:56.895765  400406 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02/id_rsa...
	I1030 18:39:57.037260  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:39:57.037116  400406 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02/ha-174833-m02.rawdisk...
	I1030 18:39:57.037293  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Writing magic tar header
	I1030 18:39:57.037303  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Writing SSH key tar header
	I1030 18:39:57.037311  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:39:57.037233  400406 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02 ...
	I1030 18:39:57.037321  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02
	I1030 18:39:57.037404  400041 main.go:141] libmachine: (ha-174833-m02) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02 (perms=drwx------)
	I1030 18:39:57.037429  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube/machines
	I1030 18:39:57.037440  400041 main.go:141] libmachine: (ha-174833-m02) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube/machines (perms=drwxr-xr-x)
	I1030 18:39:57.037453  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 18:39:57.037469  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834
	I1030 18:39:57.037479  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1030 18:39:57.037487  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Checking permissions on dir: /home/jenkins
	I1030 18:39:57.037494  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Checking permissions on dir: /home
	I1030 18:39:57.037515  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Skipping /home - not owner
	I1030 18:39:57.037531  400041 main.go:141] libmachine: (ha-174833-m02) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube (perms=drwxr-xr-x)
	I1030 18:39:57.037546  400041 main.go:141] libmachine: (ha-174833-m02) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834 (perms=drwxrwxr-x)
	I1030 18:39:57.037559  400041 main.go:141] libmachine: (ha-174833-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1030 18:39:57.037569  400041 main.go:141] libmachine: (ha-174833-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1030 18:39:57.037577  400041 main.go:141] libmachine: (ha-174833-m02) Creating domain...
	I1030 18:39:57.038511  400041 main.go:141] libmachine: (ha-174833-m02) define libvirt domain using xml: 
	I1030 18:39:57.038531  400041 main.go:141] libmachine: (ha-174833-m02) <domain type='kvm'>
	I1030 18:39:57.038538  400041 main.go:141] libmachine: (ha-174833-m02)   <name>ha-174833-m02</name>
	I1030 18:39:57.038542  400041 main.go:141] libmachine: (ha-174833-m02)   <memory unit='MiB'>2200</memory>
	I1030 18:39:57.038549  400041 main.go:141] libmachine: (ha-174833-m02)   <vcpu>2</vcpu>
	I1030 18:39:57.038556  400041 main.go:141] libmachine: (ha-174833-m02)   <features>
	I1030 18:39:57.038563  400041 main.go:141] libmachine: (ha-174833-m02)     <acpi/>
	I1030 18:39:57.038569  400041 main.go:141] libmachine: (ha-174833-m02)     <apic/>
	I1030 18:39:57.038577  400041 main.go:141] libmachine: (ha-174833-m02)     <pae/>
	I1030 18:39:57.038587  400041 main.go:141] libmachine: (ha-174833-m02)     
	I1030 18:39:57.038594  400041 main.go:141] libmachine: (ha-174833-m02)   </features>
	I1030 18:39:57.038601  400041 main.go:141] libmachine: (ha-174833-m02)   <cpu mode='host-passthrough'>
	I1030 18:39:57.038605  400041 main.go:141] libmachine: (ha-174833-m02)   
	I1030 18:39:57.038610  400041 main.go:141] libmachine: (ha-174833-m02)   </cpu>
	I1030 18:39:57.038636  400041 main.go:141] libmachine: (ha-174833-m02)   <os>
	I1030 18:39:57.038660  400041 main.go:141] libmachine: (ha-174833-m02)     <type>hvm</type>
	I1030 18:39:57.038683  400041 main.go:141] libmachine: (ha-174833-m02)     <boot dev='cdrom'/>
	I1030 18:39:57.038700  400041 main.go:141] libmachine: (ha-174833-m02)     <boot dev='hd'/>
	I1030 18:39:57.038708  400041 main.go:141] libmachine: (ha-174833-m02)     <bootmenu enable='no'/>
	I1030 18:39:57.038712  400041 main.go:141] libmachine: (ha-174833-m02)   </os>
	I1030 18:39:57.038717  400041 main.go:141] libmachine: (ha-174833-m02)   <devices>
	I1030 18:39:57.038725  400041 main.go:141] libmachine: (ha-174833-m02)     <disk type='file' device='cdrom'>
	I1030 18:39:57.038734  400041 main.go:141] libmachine: (ha-174833-m02)       <source file='/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02/boot2docker.iso'/>
	I1030 18:39:57.038744  400041 main.go:141] libmachine: (ha-174833-m02)       <target dev='hdc' bus='scsi'/>
	I1030 18:39:57.038752  400041 main.go:141] libmachine: (ha-174833-m02)       <readonly/>
	I1030 18:39:57.038764  400041 main.go:141] libmachine: (ha-174833-m02)     </disk>
	I1030 18:39:57.038780  400041 main.go:141] libmachine: (ha-174833-m02)     <disk type='file' device='disk'>
	I1030 18:39:57.038790  400041 main.go:141] libmachine: (ha-174833-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1030 18:39:57.038805  400041 main.go:141] libmachine: (ha-174833-m02)       <source file='/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02/ha-174833-m02.rawdisk'/>
	I1030 18:39:57.038815  400041 main.go:141] libmachine: (ha-174833-m02)       <target dev='hda' bus='virtio'/>
	I1030 18:39:57.038825  400041 main.go:141] libmachine: (ha-174833-m02)     </disk>
	I1030 18:39:57.038832  400041 main.go:141] libmachine: (ha-174833-m02)     <interface type='network'>
	I1030 18:39:57.038844  400041 main.go:141] libmachine: (ha-174833-m02)       <source network='mk-ha-174833'/>
	I1030 18:39:57.038858  400041 main.go:141] libmachine: (ha-174833-m02)       <model type='virtio'/>
	I1030 18:39:57.038874  400041 main.go:141] libmachine: (ha-174833-m02)     </interface>
	I1030 18:39:57.038892  400041 main.go:141] libmachine: (ha-174833-m02)     <interface type='network'>
	I1030 18:39:57.038901  400041 main.go:141] libmachine: (ha-174833-m02)       <source network='default'/>
	I1030 18:39:57.038911  400041 main.go:141] libmachine: (ha-174833-m02)       <model type='virtio'/>
	I1030 18:39:57.038922  400041 main.go:141] libmachine: (ha-174833-m02)     </interface>
	I1030 18:39:57.038931  400041 main.go:141] libmachine: (ha-174833-m02)     <serial type='pty'>
	I1030 18:39:57.038937  400041 main.go:141] libmachine: (ha-174833-m02)       <target port='0'/>
	I1030 18:39:57.038943  400041 main.go:141] libmachine: (ha-174833-m02)     </serial>
	I1030 18:39:57.038948  400041 main.go:141] libmachine: (ha-174833-m02)     <console type='pty'>
	I1030 18:39:57.038955  400041 main.go:141] libmachine: (ha-174833-m02)       <target type='serial' port='0'/>
	I1030 18:39:57.038981  400041 main.go:141] libmachine: (ha-174833-m02)     </console>
	I1030 18:39:57.039004  400041 main.go:141] libmachine: (ha-174833-m02)     <rng model='virtio'>
	I1030 18:39:57.039017  400041 main.go:141] libmachine: (ha-174833-m02)       <backend model='random'>/dev/random</backend>
	I1030 18:39:57.039026  400041 main.go:141] libmachine: (ha-174833-m02)     </rng>
	I1030 18:39:57.039033  400041 main.go:141] libmachine: (ha-174833-m02)     
	I1030 18:39:57.039042  400041 main.go:141] libmachine: (ha-174833-m02)     
	I1030 18:39:57.039050  400041 main.go:141] libmachine: (ha-174833-m02)   </devices>
	I1030 18:39:57.039059  400041 main.go:141] libmachine: (ha-174833-m02) </domain>
	I1030 18:39:57.039073  400041 main.go:141] libmachine: (ha-174833-m02) 
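The XML logged above is the full libvirt domain definition for the new node: boot from the boot2docker ISO, a raw virtio disk, and virtio NICs on the private mk-ha-174833 network plus the default network. A minimal Go sketch of rendering a similar definition with text/template follows; the field names and structure are taken from the XML above, but this is not the kvm2 driver's actual template.

package main

import (
	"os"
	"text/template"
)

// Illustrative only: render a libvirt domain definition like the one logged
// above. Only a subset of fields and devices is shown.
type domain struct {
	Name, ISO, Disk, Network string
	MemoryMiB, VCPU          int
}

const domainXML = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.VCPU}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='cdrom'><source file='{{.ISO}}'/><target dev='hdc' bus='scsi'/><readonly/></disk>
    <disk type='file' device='disk'><driver name='qemu' type='raw'/><source file='{{.Disk}}'/><target dev='hda' bus='virtio'/></disk>
    <interface type='network'><source network='{{.Network}}'/><model type='virtio'/></interface>
  </devices>
</domain>
`

func main() {
	t := template.Must(template.New("domain").Parse(domainXML))
	_ = t.Execute(os.Stdout, domain{
		Name: "ha-174833-m02", MemoryMiB: 2200, VCPU: 2,
		ISO:     "/path/to/boot2docker.iso",
		Disk:    "/path/to/ha-174833-m02.rawdisk",
		Network: "mk-ha-174833",
	})
}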
	I1030 18:39:57.045751  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:a3:4c:dc in network default
	I1030 18:39:57.046326  400041 main.go:141] libmachine: (ha-174833-m02) Ensuring networks are active...
	I1030 18:39:57.046349  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:39:57.047038  400041 main.go:141] libmachine: (ha-174833-m02) Ensuring network default is active
	I1030 18:39:57.047398  400041 main.go:141] libmachine: (ha-174833-m02) Ensuring network mk-ha-174833 is active
	I1030 18:39:57.047750  400041 main.go:141] libmachine: (ha-174833-m02) Getting domain xml...
	I1030 18:39:57.048296  400041 main.go:141] libmachine: (ha-174833-m02) Creating domain...
	I1030 18:39:58.272260  400041 main.go:141] libmachine: (ha-174833-m02) Waiting to get IP...
	I1030 18:39:58.273021  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:39:58.273425  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:39:58.273496  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:39:58.273425  400406 retry.go:31] will retry after 283.659874ms: waiting for machine to come up
	I1030 18:39:58.559077  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:39:58.559572  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:39:58.559595  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:39:58.559530  400406 retry.go:31] will retry after 285.421922ms: waiting for machine to come up
	I1030 18:39:58.847321  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:39:58.847766  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:39:58.847795  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:39:58.847719  400406 retry.go:31] will retry after 459.416019ms: waiting for machine to come up
	I1030 18:39:59.308465  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:39:59.308944  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:39:59.309003  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:39:59.308931  400406 retry.go:31] will retry after 572.494843ms: waiting for machine to come up
	I1030 18:39:59.882664  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:39:59.883063  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:39:59.883097  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:39:59.883044  400406 retry.go:31] will retry after 513.18543ms: waiting for machine to come up
	I1030 18:40:00.397389  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:00.397747  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:40:00.397783  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:40:00.397729  400406 retry.go:31] will retry after 755.433082ms: waiting for machine to come up
	I1030 18:40:01.155395  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:01.155948  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:40:01.155979  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:40:01.155903  400406 retry.go:31] will retry after 1.038364995s: waiting for machine to come up
	I1030 18:40:02.195482  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:02.195950  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:40:02.195980  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:40:02.195911  400406 retry.go:31] will retry after 1.004508468s: waiting for machine to come up
	I1030 18:40:03.201825  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:03.202261  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:40:03.202291  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:40:03.202205  400406 retry.go:31] will retry after 1.786384374s: waiting for machine to come up
	I1030 18:40:04.989943  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:04.990350  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:40:04.990371  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:40:04.990297  400406 retry.go:31] will retry after 1.895963981s: waiting for machine to come up
	I1030 18:40:06.888049  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:06.888464  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:40:06.888488  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:40:06.888417  400406 retry.go:31] will retry after 1.948037202s: waiting for machine to come up
	I1030 18:40:08.839488  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:08.839847  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:40:08.839869  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:40:08.839824  400406 retry.go:31] will retry after 3.202281785s: waiting for machine to come up
	I1030 18:40:12.043324  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:12.043675  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:40:12.043695  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:40:12.043618  400406 retry.go:31] will retry after 3.877667252s: waiting for machine to come up
	I1030 18:40:15.924012  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:15.924431  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:40:15.924456  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:40:15.924364  400406 retry.go:31] will retry after 3.471906375s: waiting for machine to come up
	I1030 18:40:19.399252  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.399675  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has current primary IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.399693  400041 main.go:141] libmachine: (ha-174833-m02) Found IP for machine: 192.168.39.67
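The string of "will retry after …: waiting for machine to come up" lines above comes from a retry helper that re-checks the DHCP leases with a growing, jittered delay until the new domain reports an IP. A minimal sketch of that pattern, assuming a caller-supplied lookup function (this is not minikube's retry package):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup() with a jittered, roughly doubling delay until it
// returns an address or the deadline passes. Illustrative only.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay *= 2
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 {
			return "", errors.New("no lease yet") // simulate a VM that is still booting
		}
		return "192.168.39.67", nil
	}, 30*time.Second)
	fmt.Println(ip, err)
}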
	I1030 18:40:19.399744  400041 main.go:141] libmachine: (ha-174833-m02) Reserving static IP address...
	I1030 18:40:19.400103  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find host DHCP lease matching {name: "ha-174833-m02", mac: "52:54:00:87:fa:1a", ip: "192.168.39.67"} in network mk-ha-174833
	I1030 18:40:19.473268  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Getting to WaitForSSH function...
	I1030 18:40:19.473299  400041 main.go:141] libmachine: (ha-174833-m02) Reserved static IP address: 192.168.39.67
	I1030 18:40:19.473352  400041 main.go:141] libmachine: (ha-174833-m02) Waiting for SSH to be available...
	I1030 18:40:19.476054  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.476545  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:minikube Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:19.476573  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.476733  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Using SSH client type: external
	I1030 18:40:19.476781  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02/id_rsa (-rw-------)
	I1030 18:40:19.476820  400041 main.go:141] libmachine: (ha-174833-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.67 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 18:40:19.476836  400041 main.go:141] libmachine: (ha-174833-m02) DBG | About to run SSH command:
	I1030 18:40:19.476843  400041 main.go:141] libmachine: (ha-174833-m02) DBG | exit 0
	I1030 18:40:19.602200  400041 main.go:141] libmachine: (ha-174833-m02) DBG | SSH cmd err, output: <nil>: 
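WaitForSSH above shells out to the system ssh binary with the option list printed at 18:40:19.476820 and runs "exit 0" until the command succeeds. A hedged sketch of assembling and running that probe with os/exec follows; the option list mirrors the log and error handling is minimal, so treat it as an illustration rather than the driver's code.

package main

import (
	"fmt"
	"os/exec"
)

// Illustrative only: probe a new VM over SSH the way the external SSH client
// in the log does, by running "exit 0" with host-key checking disabled.
func sshReady(ip, keyPath string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3", "-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no", "-o", "IdentitiesOnly=yes",
		"-i", keyPath, "-p", "22",
		"docker@" + ip,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	fmt.Println(sshReady("192.168.39.67", "/path/to/id_rsa"))
}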
	I1030 18:40:19.602526  400041 main.go:141] libmachine: (ha-174833-m02) KVM machine creation complete!
	I1030 18:40:19.602867  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetConfigRaw
	I1030 18:40:19.603528  400041 main.go:141] libmachine: (ha-174833-m02) Calling .DriverName
	I1030 18:40:19.603721  400041 main.go:141] libmachine: (ha-174833-m02) Calling .DriverName
	I1030 18:40:19.603921  400041 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1030 18:40:19.603937  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetState
	I1030 18:40:19.605043  400041 main.go:141] libmachine: Detecting operating system of created instance...
	I1030 18:40:19.605054  400041 main.go:141] libmachine: Waiting for SSH to be available...
	I1030 18:40:19.605059  400041 main.go:141] libmachine: Getting to WaitForSSH function...
	I1030 18:40:19.605064  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHHostname
	I1030 18:40:19.607164  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.607533  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:19.607561  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.607643  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHPort
	I1030 18:40:19.607921  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:19.608107  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:19.608292  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHUsername
	I1030 18:40:19.608458  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:40:19.608704  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1030 18:40:19.608730  400041 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1030 18:40:19.709697  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 18:40:19.709726  400041 main.go:141] libmachine: Detecting the provisioner...
	I1030 18:40:19.709734  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHHostname
	I1030 18:40:19.712480  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.712863  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:19.712908  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.713089  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHPort
	I1030 18:40:19.713318  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:19.713487  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:19.713620  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHUsername
	I1030 18:40:19.713800  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:40:19.714020  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1030 18:40:19.714034  400041 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1030 18:40:19.823287  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1030 18:40:19.823400  400041 main.go:141] libmachine: found compatible host: buildroot
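Provisioner detection is just "cat /etc/os-release" parsed into key/value pairs and matched against known distributions; here the ID is buildroot, so the buildroot provisioner is used. A minimal sketch of that parse, assuming the output shown above:

package main

import (
	"fmt"
	"strings"
)

// parseOSRelease turns /etc/os-release style output into a map of keys to
// unquoted values. Illustrative only.
func parseOSRelease(s string) map[string]string {
	out := map[string]string{}
	for _, line := range strings.Split(s, "\n") {
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		out[k] = strings.Trim(v, `"`)
	}
	return out
}

func main() {
	osr := parseOSRelease("NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"")
	if osr["ID"] == "buildroot" {
		fmt.Println("found compatible host:", osr["NAME"])
	}
}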
	I1030 18:40:19.823413  400041 main.go:141] libmachine: Provisioning with buildroot...
	I1030 18:40:19.823424  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetMachineName
	I1030 18:40:19.823703  400041 buildroot.go:166] provisioning hostname "ha-174833-m02"
	I1030 18:40:19.823731  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetMachineName
	I1030 18:40:19.823950  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHHostname
	I1030 18:40:19.826635  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.827060  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:19.827086  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.827137  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHPort
	I1030 18:40:19.827303  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:19.827450  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:19.827602  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHUsername
	I1030 18:40:19.827740  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:40:19.827922  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1030 18:40:19.827936  400041 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-174833-m02 && echo "ha-174833-m02" | sudo tee /etc/hostname
	I1030 18:40:19.945348  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-174833-m02
	
	I1030 18:40:19.945376  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHHostname
	I1030 18:40:19.948392  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.948756  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:19.948806  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.948936  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHPort
	I1030 18:40:19.949124  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:19.949286  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:19.949399  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHUsername
	I1030 18:40:19.949565  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:40:19.949742  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1030 18:40:19.949759  400041 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-174833-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-174833-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-174833-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 18:40:20.059828  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
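The shell script above keeps the 127.0.1.1 entry idempotent: rewrite the line if one exists, append it otherwise. The same logic in Go, operating on the hosts file content as a string (a sketch, not the code that produced the log):

package main

import (
	"fmt"
	"strings"
)

// ensureHostEntry returns hosts-file content that maps 127.0.1.1 to name,
// replacing an existing 127.0.1.1 line or appending one. Illustrative only.
func ensureHostEntry(hosts, name string) string {
	lines := strings.Split(hosts, "\n")
	for i, line := range lines {
		if strings.HasPrefix(line, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name
			return strings.Join(lines, "\n")
		}
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Print(ensureHostEntry("127.0.0.1 localhost\n127.0.1.1 minikube\n", "ha-174833-m02"))
}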
	I1030 18:40:20.059870  400041 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 18:40:20.059905  400041 buildroot.go:174] setting up certificates
	I1030 18:40:20.059915  400041 provision.go:84] configureAuth start
	I1030 18:40:20.059930  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetMachineName
	I1030 18:40:20.060203  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetIP
	I1030 18:40:20.062825  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.063237  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:20.063262  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.063417  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHHostname
	I1030 18:40:20.065380  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.065695  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:20.065725  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.065881  400041 provision.go:143] copyHostCerts
	I1030 18:40:20.065925  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 18:40:20.066007  400041 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem, removing ...
	I1030 18:40:20.066020  400041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 18:40:20.066101  400041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 18:40:20.066211  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 18:40:20.066236  400041 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem, removing ...
	I1030 18:40:20.066244  400041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 18:40:20.066288  400041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 18:40:20.066357  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 18:40:20.066380  400041 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem, removing ...
	I1030 18:40:20.066386  400041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 18:40:20.066420  400041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 18:40:20.066508  400041 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.ha-174833-m02 san=[127.0.0.1 192.168.39.67 ha-174833-m02 localhost minikube]
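The provision step above mints a per-node server certificate signed by the shared minikube CA, with the SANs listed in the log line (127.0.0.1, the node IP, the hostname, localhost, minikube). A hedged sketch with crypto/x509 follows; it assumes a PKCS#1 RSA CA key on disk, uses placeholder paths, and is not the helper minikube actually calls.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// Illustrative only: create a server certificate for the new node, signed by
// an existing CA key pair, covering the SANs shown in the log line above.
func main() {
	caPEM, _ := os.ReadFile("ca.pem")
	caKeyPEM, _ := os.ReadFile("ca-key.pem")
	caBlock, _ := pem.Decode(caPEM)
	caCert, _ := x509.ParseCertificate(caBlock.Bytes)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes PKCS#1 RSA key

	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-174833-m02"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-174833-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.67")},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	_ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
	_ = os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0o600)
}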
	I1030 18:40:20.314819  400041 provision.go:177] copyRemoteCerts
	I1030 18:40:20.314902  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 18:40:20.314940  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHHostname
	I1030 18:40:20.317541  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.317873  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:20.317916  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.318094  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHPort
	I1030 18:40:20.318304  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:20.318547  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHUsername
	I1030 18:40:20.318726  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02/id_rsa Username:docker}
	I1030 18:40:20.405714  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1030 18:40:20.405820  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 18:40:20.431726  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1030 18:40:20.431798  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1030 18:40:20.455138  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1030 18:40:20.455222  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1030 18:40:20.477773  400041 provision.go:87] duration metric: took 417.842724ms to configureAuth
	I1030 18:40:20.477806  400041 buildroot.go:189] setting minikube options for container-runtime
	I1030 18:40:20.478018  400041 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:40:20.478120  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHHostname
	I1030 18:40:20.480885  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.481224  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:20.481250  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.481424  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHPort
	I1030 18:40:20.481637  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:20.481775  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:20.481966  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHUsername
	I1030 18:40:20.482148  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:40:20.482322  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1030 18:40:20.482338  400041 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 18:40:20.706339  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
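The container-runtime step just above drops a sysconfig fragment with the extra CRI-O flags and restarts the service over SSH. A small sketch of composing that remote command from an option list (an assumed helper for illustration, not minikube's function):

package main

import (
	"fmt"
	"strings"
)

// crioMinikubeCmd builds the remote shell line that writes
// /etc/sysconfig/crio.minikube and restarts CRI-O. Illustrative only.
func crioMinikubeCmd(opts []string) string {
	content := fmt.Sprintf("\nCRIO_MINIKUBE_OPTIONS='%s '\n", strings.Join(opts, " "))
	return fmt.Sprintf(`sudo mkdir -p /etc/sysconfig && printf %%s "%s" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`, content)
}

func main() {
	fmt.Println(crioMinikubeCmd([]string{"--insecure-registry 10.96.0.0/12"}))
}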
	
	I1030 18:40:20.706375  400041 main.go:141] libmachine: Checking connection to Docker...
	I1030 18:40:20.706387  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetURL
	I1030 18:40:20.707589  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Using libvirt version 6000000
	I1030 18:40:20.709597  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.709934  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:20.709964  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.710106  400041 main.go:141] libmachine: Docker is up and running!
	I1030 18:40:20.710135  400041 main.go:141] libmachine: Reticulating splines...
	I1030 18:40:20.710147  400041 client.go:171] duration metric: took 24.092036555s to LocalClient.Create
	I1030 18:40:20.710176  400041 start.go:167] duration metric: took 24.092106335s to libmachine.API.Create "ha-174833"
	I1030 18:40:20.710186  400041 start.go:293] postStartSetup for "ha-174833-m02" (driver="kvm2")
	I1030 18:40:20.710195  400041 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 18:40:20.710231  400041 main.go:141] libmachine: (ha-174833-m02) Calling .DriverName
	I1030 18:40:20.710468  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 18:40:20.710503  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHHostname
	I1030 18:40:20.712432  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.712689  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:20.712717  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.712824  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHPort
	I1030 18:40:20.713017  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:20.713185  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHUsername
	I1030 18:40:20.713308  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02/id_rsa Username:docker}
	I1030 18:40:20.793164  400041 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 18:40:20.797557  400041 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 18:40:20.797583  400041 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 18:40:20.797648  400041 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 18:40:20.797720  400041 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 18:40:20.797730  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> /etc/ssl/certs/3891442.pem
	I1030 18:40:20.797807  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 18:40:20.807375  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 18:40:20.830866  400041 start.go:296] duration metric: took 120.664021ms for postStartSetup
	I1030 18:40:20.830929  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetConfigRaw
	I1030 18:40:20.831701  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetIP
	I1030 18:40:20.834714  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.835086  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:20.835116  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.835438  400041 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/config.json ...
	I1030 18:40:20.835668  400041 start.go:128] duration metric: took 24.235548343s to createHost
	I1030 18:40:20.835700  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHHostname
	I1030 18:40:20.837613  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.837888  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:20.837916  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.838041  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHPort
	I1030 18:40:20.838176  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:20.838317  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:20.838450  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHUsername
	I1030 18:40:20.838592  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:40:20.838755  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1030 18:40:20.838765  400041 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 18:40:20.939393  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730313620.914818123
	
	I1030 18:40:20.939419  400041 fix.go:216] guest clock: 1730313620.914818123
	I1030 18:40:20.939430  400041 fix.go:229] Guest: 2024-10-30 18:40:20.914818123 +0000 UTC Remote: 2024-10-30 18:40:20.835684734 +0000 UTC m=+67.590472244 (delta=79.133389ms)
	I1030 18:40:20.939453  400041 fix.go:200] guest clock delta is within tolerance: 79.133389ms
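The clock check above parses the guest's "date +%s.%N" output, compares it with the host time recorded when the command was issued, and only adjusts the guest clock if the delta exceeds a tolerance. A minimal sketch of that delta computation; the tolerance value here is an assumption for illustration, not minikube's threshold.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses "seconds.nanoseconds" from `date +%s.%N` and returns the
// guest time plus its offset from the local reference. Illustrative only.
func clockDelta(output string, local time.Time) (time.Time, time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(output), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, 0, err
	}
	nsec := int64(0)
	if len(parts) == 2 {
		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
	}
	guest := time.Unix(sec, nsec)
	return guest, local.Sub(guest), nil
}

func main() {
	guest, delta, _ := clockDelta("1730313620.914818123\n", time.Now())
	const tolerance = time.Second // assumed threshold for the sketch
	fmt.Printf("guest clock: %v delta: %v within tolerance: %v\n", guest, delta, delta.Abs() < tolerance)
}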
	I1030 18:40:20.939460  400041 start.go:83] releasing machines lock for "ha-174833-m02", held for 24.339459492s
	I1030 18:40:20.939487  400041 main.go:141] libmachine: (ha-174833-m02) Calling .DriverName
	I1030 18:40:20.939802  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetIP
	I1030 18:40:20.942445  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.942801  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:20.942827  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.945268  400041 out.go:177] * Found network options:
	I1030 18:40:20.946721  400041 out.go:177]   - NO_PROXY=192.168.39.141
	W1030 18:40:20.947877  400041 proxy.go:119] fail to check proxy env: Error ip not in block
	I1030 18:40:20.947925  400041 main.go:141] libmachine: (ha-174833-m02) Calling .DriverName
	I1030 18:40:20.948482  400041 main.go:141] libmachine: (ha-174833-m02) Calling .DriverName
	I1030 18:40:20.948657  400041 main.go:141] libmachine: (ha-174833-m02) Calling .DriverName
	I1030 18:40:20.948763  400041 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 18:40:20.948808  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHHostname
	W1030 18:40:20.948877  400041 proxy.go:119] fail to check proxy env: Error ip not in block
	I1030 18:40:20.948974  400041 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 18:40:20.948998  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHHostname
	I1030 18:40:20.951510  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.951591  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.951860  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:20.951890  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.951915  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:20.951926  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.952047  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHPort
	I1030 18:40:20.952193  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHPort
	I1030 18:40:20.952262  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:20.952409  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:20.952476  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHUsername
	I1030 18:40:20.952535  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHUsername
	I1030 18:40:20.952595  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02/id_rsa Username:docker}
	I1030 18:40:20.952723  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02/id_rsa Username:docker}
	I1030 18:40:21.182304  400041 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 18:40:21.188738  400041 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 18:40:21.188808  400041 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 18:40:21.205984  400041 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 18:40:21.206007  400041 start.go:495] detecting cgroup driver to use...
	I1030 18:40:21.206074  400041 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 18:40:21.221839  400041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 18:40:21.235753  400041 docker.go:217] disabling cri-docker service (if available) ...
	I1030 18:40:21.235807  400041 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 18:40:21.249998  400041 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 18:40:21.263401  400041 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 18:40:21.372667  400041 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 18:40:21.535477  400041 docker.go:233] disabling docker service ...
	I1030 18:40:21.535567  400041 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 18:40:21.549384  400041 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 18:40:21.561708  400041 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 18:40:21.680746  400041 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 18:40:21.800498  400041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 18:40:21.815096  400041 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 18:40:21.833550  400041 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1030 18:40:21.833622  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:40:21.843823  400041 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 18:40:21.843902  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:40:21.854106  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:40:21.864400  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:40:21.874387  400041 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 18:40:21.884560  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:40:21.895371  400041 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:40:21.913811  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:40:21.924236  400041 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 18:40:21.933153  400041 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 18:40:21.933202  400041 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 18:40:21.946248  400041 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 18:40:21.955404  400041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 18:40:22.069005  400041 ssh_runner.go:195] Run: sudo systemctl restart crio
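The runtime setup above is a fixed sequence of in-place edits to /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon cgroup) followed by a daemon-reload and a crio restart. A minimal Go sketch of that edit-and-restart pattern, assuming a hypothetical Runner interface standing in for the ssh_runner seen in the log (an illustration, not minikube's crio.go):

// Sketch of the CRI-O config edits and restart shown above; Runner is a
// hypothetical stand-in for minikube's ssh_runner.
package crioconfig

import "fmt"

// Runner runs a shell command on the guest VM (hypothetical interface).
type Runner interface {
	Run(cmd string) error
}

// configureCRIO applies the same edits the log performs, then restarts crio.
func configureCRIO(r Runner, pauseImage string) error {
	cmds := []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`, pauseImage),
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart crio`,
	}
	for _, c := range cmds {
		if err := r.Run(c); err != nil {
			return fmt.Errorf("%q failed: %w", c, err)
		}
	}
	return nil
}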
	I1030 18:40:22.157442  400041 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 18:40:22.157509  400041 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 18:40:22.162047  400041 start.go:563] Will wait 60s for crictl version
	I1030 18:40:22.162100  400041 ssh_runner.go:195] Run: which crictl
	I1030 18:40:22.165636  400041 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 18:40:22.205156  400041 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1030 18:40:22.205267  400041 ssh_runner.go:195] Run: crio --version
	I1030 18:40:22.231913  400041 ssh_runner.go:195] Run: crio --version
	I1030 18:40:22.261339  400041 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1030 18:40:22.262679  400041 out.go:177]   - env NO_PROXY=192.168.39.141
	I1030 18:40:22.263832  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetIP
	I1030 18:40:22.266556  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:22.266888  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:22.266915  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:22.267123  400041 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1030 18:40:22.271259  400041 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 18:40:22.283359  400041 mustload.go:65] Loading cluster: ha-174833
	I1030 18:40:22.283542  400041 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:40:22.283792  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:40:22.283835  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:40:22.298878  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43999
	I1030 18:40:22.299305  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:40:22.299796  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:40:22.299822  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:40:22.300116  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:40:22.300325  400041 main.go:141] libmachine: (ha-174833) Calling .GetState
	I1030 18:40:22.301824  400041 host.go:66] Checking if "ha-174833" exists ...
	I1030 18:40:22.302129  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:40:22.302167  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:40:22.316968  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45427
	I1030 18:40:22.317445  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:40:22.317883  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:40:22.317906  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:40:22.318227  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:40:22.318396  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:40:22.318552  400041 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833 for IP: 192.168.39.67
	I1030 18:40:22.318566  400041 certs.go:194] generating shared ca certs ...
	I1030 18:40:22.318581  400041 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:40:22.318722  400041 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 18:40:22.318763  400041 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 18:40:22.318772  400041 certs.go:256] generating profile certs ...
	I1030 18:40:22.318861  400041 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.key
	I1030 18:40:22.318884  400041 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.21314801
	I1030 18:40:22.318898  400041 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.21314801 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.141 192.168.39.67 192.168.39.254]
	I1030 18:40:22.389619  400041 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.21314801 ...
	I1030 18:40:22.389649  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.21314801: {Name:mk69c03eb6b5f0b4d0acc4a4891d260deacb4aa2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:40:22.389835  400041 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.21314801 ...
	I1030 18:40:22.389853  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.21314801: {Name:mkc4587720139321b37dc723905edfa912a066e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:40:22.389954  400041 certs.go:381] copying /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.21314801 -> /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt
	I1030 18:40:22.390078  400041 certs.go:385] copying /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.21314801 -> /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key
	I1030 18:40:22.390209  400041 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key
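The profile certificate generated above (apiserver.crt.21314801) is signed for every address a client may use to reach the API server: the in-cluster service IP 10.96.0.1, loopback, 10.0.0.1, both control-plane node IPs, and the HA VIP 192.168.39.254. A minimal sketch of issuing a serving certificate with those IP SANs using Go's crypto/x509, assuming an already-loaded CA certificate and key (hypothetical names, not minikube's certs.go):

// Sketch: issue an API server serving cert whose IP SANs match the list in the
// log above. caCert/caKey are assumed to be loaded elsewhere; all names here
// are hypothetical.
package certs

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

func newAPIServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) (derCert []byte, key *rsa.PrivateKey, err error) {
	key, err = rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs from the log: service IP, loopback, node IPs, and the HA VIP.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.141"),
			net.ParseIP("192.168.39.67"),
			net.ParseIP("192.168.39.254"),
		},
	}
	derCert, err = x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return derCert, key, nil
}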
	I1030 18:40:22.390226  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1030 18:40:22.390240  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1030 18:40:22.390253  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1030 18:40:22.390265  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1030 18:40:22.390276  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1030 18:40:22.390291  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1030 18:40:22.390303  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1030 18:40:22.390314  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1030 18:40:22.390363  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem (1338 bytes)
	W1030 18:40:22.390392  400041 certs.go:480] ignoring /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144_empty.pem, impossibly tiny 0 bytes
	I1030 18:40:22.390401  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 18:40:22.390423  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 18:40:22.390447  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 18:40:22.390467  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 18:40:22.390526  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 18:40:22.390551  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:40:22.390567  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem -> /usr/share/ca-certificates/389144.pem
	I1030 18:40:22.390579  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> /usr/share/ca-certificates/3891442.pem
	I1030 18:40:22.390609  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:40:22.393533  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:40:22.393916  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:40:22.393937  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:40:22.394139  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:40:22.394328  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:40:22.394468  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:40:22.394599  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:40:22.466820  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1030 18:40:22.472172  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1030 18:40:22.483413  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1030 18:40:22.487802  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1030 18:40:22.498142  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1030 18:40:22.502005  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1030 18:40:22.511789  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1030 18:40:22.516194  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1030 18:40:22.526092  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1030 18:40:22.530300  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1030 18:40:22.539761  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1030 18:40:22.543659  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1030 18:40:22.554032  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 18:40:22.579429  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 18:40:22.603366  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 18:40:22.627011  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 18:40:22.649824  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1030 18:40:22.675859  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1030 18:40:22.702878  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 18:40:22.729191  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1030 18:40:22.755783  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 18:40:22.781937  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem --> /usr/share/ca-certificates/389144.pem (1338 bytes)
	I1030 18:40:22.806557  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /usr/share/ca-certificates/3891442.pem (1708 bytes)
	I1030 18:40:22.829559  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1030 18:40:22.845492  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1030 18:40:22.861140  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1030 18:40:22.877798  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1030 18:40:22.894364  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1030 18:40:22.910766  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1030 18:40:22.926975  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1030 18:40:22.944058  400041 ssh_runner.go:195] Run: openssl version
	I1030 18:40:22.949888  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/389144.pem && ln -fs /usr/share/ca-certificates/389144.pem /etc/ssl/certs/389144.pem"
	I1030 18:40:22.960383  400041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/389144.pem
	I1030 18:40:22.964756  400041 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 18:40:22.964810  400041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/389144.pem
	I1030 18:40:22.970419  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/389144.pem /etc/ssl/certs/51391683.0"
	I1030 18:40:22.980880  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3891442.pem && ln -fs /usr/share/ca-certificates/3891442.pem /etc/ssl/certs/3891442.pem"
	I1030 18:40:22.991033  400041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3891442.pem
	I1030 18:40:22.995374  400041 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 18:40:22.995440  400041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3891442.pem
	I1030 18:40:23.000879  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3891442.pem /etc/ssl/certs/3ec20f2e.0"
	I1030 18:40:23.011335  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 18:40:23.021800  400041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:40:23.026327  400041 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:40:23.026385  400041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:40:23.032188  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 18:40:23.042278  400041 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 18:40:23.046274  400041 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1030 18:40:23.046324  400041 kubeadm.go:934] updating node {m02 192.168.39.67 8443 v1.31.2 crio true true} ...
	I1030 18:40:23.046424  400041 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-174833-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.67
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-174833 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1030 18:40:23.046460  400041 kube-vip.go:115] generating kube-vip config ...
	I1030 18:40:23.046517  400041 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1030 18:40:23.063163  400041 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1030 18:40:23.063236  400041 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
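The manifest above is rendered from the profile's HA VIP (192.168.39.254) and API server port, and is later written to /etc/kubernetes/manifests/kube-vip.yaml so the kubelet runs kube-vip as a static pod. A minimal sketch of templating a manifest of this shape with text/template, using hypothetical parameter names rather than minikube's kube-vip.go:

// Sketch: render a kube-vip static pod manifest of the shape shown above from
// two parameters. Field and function names are hypothetical.
package kubevip

import (
	"bytes"
	"text/template"
)

type Params struct {
	VIP  string // HA virtual IP, e.g. 192.168.39.254
	Port string // API server port, e.g. "8443"
}

var manifest = template.Must(template.New("kube-vip").Parse(`apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.4
    args: ["manager"]
    env:
    - name: address
      value: {{.VIP}}
    - name: port
      value: "{{.Port}}"
  hostNetwork: true
`))

// Render produces the bytes that would be written to
// /etc/kubernetes/manifests/kube-vip.yaml.
func Render(p Params) ([]byte, error) {
	var buf bytes.Buffer
	if err := manifest.Execute(&buf, p); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}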
	I1030 18:40:23.063297  400041 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1030 18:40:23.072465  400041 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1030 18:40:23.072510  400041 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1030 18:40:23.081550  400041 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1030 18:40:23.081576  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1030 18:40:23.081589  400041 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1030 18:40:23.081602  400041 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1030 18:40:23.081619  400041 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1030 18:40:23.085961  400041 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1030 18:40:23.085992  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1030 18:40:24.328288  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1030 18:40:24.328373  400041 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1030 18:40:24.333326  400041 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1030 18:40:24.333359  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1030 18:40:24.830276  400041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 18:40:24.845774  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1030 18:40:24.845893  400041 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1030 18:40:24.850314  400041 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1030 18:40:24.850355  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
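Each binary above is transferred with the same check: stat the target path under /var/lib/minikube/binaries, and only scp the cached copy when the stat fails with "No such file or directory". A minimal sketch of that existence check, again assuming a hypothetical Runner and a copyFile stand-in for the scp step:

// Sketch: transfer a cached binary only when it is missing on the guest.
// Runner and copyFile are hypothetical stand-ins for ssh_runner and scp.
package transfer

import "fmt"

type Runner interface {
	Run(cmd string) error
}

func ensureBinary(r Runner, copyFile func(local, remote string) error, local, remote string) error {
	// stat exits non-zero when the target does not exist; treat that as "copy needed".
	if err := r.Run(fmt.Sprintf(`stat -c "%%s %%y" %s`, remote)); err == nil {
		return nil // already present on the guest
	}
	return copyFile(local, remote)
}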
	I1030 18:40:25.162230  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1030 18:40:25.172064  400041 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1030 18:40:25.188645  400041 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 18:40:25.204815  400041 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1030 18:40:25.221977  400041 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1030 18:40:25.225934  400041 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 18:40:25.237891  400041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 18:40:25.349561  400041 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 18:40:25.366698  400041 host.go:66] Checking if "ha-174833" exists ...
	I1030 18:40:25.367180  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:40:25.367246  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:40:25.384828  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42387
	I1030 18:40:25.385432  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:40:25.386031  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:40:25.386061  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:40:25.386434  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:40:25.386621  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:40:25.386806  400041 start.go:317] joinCluster: &{Name:ha-174833 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-174833 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 18:40:25.386959  400041 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1030 18:40:25.386986  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:40:25.389976  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:40:25.390481  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:40:25.390522  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:40:25.390674  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:40:25.390889  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:40:25.391033  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:40:25.391170  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:40:25.547459  400041 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 18:40:25.547519  400041 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token t38xof.e3m90xf7qkzka9te --discovery-token-ca-cert-hash sha256:b2143b9018fc79be6f35b8981f64b262448bd09f7227405c8dafa29481e81491 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-174833-m02 --control-plane --apiserver-advertise-address=192.168.39.67 --apiserver-bind-port=8443"
	I1030 18:40:46.568187  400041 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token t38xof.e3m90xf7qkzka9te --discovery-token-ca-cert-hash sha256:b2143b9018fc79be6f35b8981f64b262448bd09f7227405c8dafa29481e81491 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-174833-m02 --control-plane --apiserver-advertise-address=192.168.39.67 --apiserver-bind-port=8443": (21.020635274s)
	I1030 18:40:46.568229  400041 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1030 18:40:47.028345  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-174833-m02 minikube.k8s.io/updated_at=2024_10_30T18_40_47_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0 minikube.k8s.io/name=ha-174833 minikube.k8s.io/primary=false
	I1030 18:40:47.150726  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-174833-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1030 18:40:47.264922  400041 start.go:319] duration metric: took 21.878113098s to joinCluster
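The join performed above is assembled from two commands: "kubeadm token create --print-join-command --ttl=0" on the existing control plane, and that printed command re-run on the new machine with the control-plane flags appended (--control-plane, --cri-socket, --node-name, --apiserver-advertise-address, --apiserver-bind-port). A minimal sketch of composing the second command from the first, with hypothetical function and parameter names:

// Sketch: append the node-specific control-plane flags to the output of
// `kubeadm token create --print-join-command`. Names are hypothetical.
package join

import (
	"fmt"
	"strings"
)

func controlPlaneJoinCmd(printJoinOutput, nodeName, advertiseIP string, bindPort int) string {
	base := strings.TrimSpace(printJoinOutput) // "kubeadm join <endpoint> --token ... --discovery-token-ca-cert-hash ..."
	return fmt.Sprintf(
		"%s --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=%s --control-plane --apiserver-advertise-address=%s --apiserver-bind-port=%d",
		base, nodeName, advertiseIP, bindPort,
	)
}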
	I1030 18:40:47.265016  400041 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 18:40:47.265346  400041 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:40:47.267451  400041 out.go:177] * Verifying Kubernetes components...
	I1030 18:40:47.268676  400041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 18:40:47.482830  400041 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 18:40:47.498911  400041 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 18:40:47.499271  400041 kapi.go:59] client config for ha-174833: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.crt", KeyFile:"/home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.key", CAFile:"/home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439fe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1030 18:40:47.499361  400041 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.141:8443
	I1030 18:40:47.499634  400041 node_ready.go:35] waiting up to 6m0s for node "ha-174833-m02" to be "Ready" ...
	I1030 18:40:47.499754  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:47.499765  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:47.499776  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:47.499780  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:47.513589  400041 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1030 18:40:48.000627  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:48.000717  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:48.000732  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:48.000739  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:48.005027  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:40:48.500527  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:48.500553  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:48.500562  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:48.500566  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:48.507486  400041 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1030 18:40:48.999957  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:48.999981  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:48.999992  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:48.999998  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:49.004072  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:40:49.500009  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:49.500034  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:49.500044  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:49.500049  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:49.503688  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:49.504299  400041 node_ready.go:53] node "ha-174833-m02" has status "Ready":"False"
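From this point the log repeats the same GET against /api/v1/nodes/ha-174833-m02 roughly every 500ms until the node reports Ready, bounded by the 6m0s budget set earlier. A minimal client-go sketch of an equivalent wait loop (an illustration under those assumptions, not minikube's node_ready.go):

// Sketch: poll a node until it reports Ready, mirroring the ~500ms GET loop in
// the log. This is an illustration using client-go, not minikube's code.
package nodewait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}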
	I1030 18:40:50.000762  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:50.000787  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:50.000798  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:50.000804  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:50.004710  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:50.500222  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:50.500249  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:50.500261  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:50.500268  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:50.503800  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:50.999915  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:50.999941  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:50.999949  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:50.999953  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:51.003089  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:51.500241  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:51.500270  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:51.500282  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:51.500288  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:51.503181  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:40:52.000665  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:52.000687  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:52.000696  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:52.000701  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:52.004020  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:52.004537  400041 node_ready.go:53] node "ha-174833-m02" has status "Ready":"False"
	I1030 18:40:52.500784  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:52.500807  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:52.500815  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:52.500820  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:52.503534  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:40:53.000339  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:53.000361  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:53.000372  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:53.000377  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:53.003704  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:53.500343  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:53.500365  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:53.500373  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:53.500378  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:53.503510  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:54.000354  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:54.000381  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:54.000395  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:54.000403  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:54.004115  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:54.004763  400041 node_ready.go:53] node "ha-174833-m02" has status "Ready":"False"
	I1030 18:40:54.500127  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:54.500152  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:54.500161  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:54.500166  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:54.503778  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:55.000747  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:55.000778  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:55.000791  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:55.000797  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:55.004570  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:55.500357  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:55.500405  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:55.500415  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:55.500420  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:55.504113  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:56.000848  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:56.000872  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:56.000890  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:56.000895  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:56.005204  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:40:56.006300  400041 node_ready.go:53] node "ha-174833-m02" has status "Ready":"False"
	I1030 18:40:56.500116  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:56.500139  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:56.500149  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:56.500156  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:56.503736  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:57.000020  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:57.000047  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:57.000059  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:57.000064  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:57.003517  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:57.500475  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:57.500507  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:57.500519  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:57.500528  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:57.504454  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:57.999844  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:57.999871  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:57.999880  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:57.999884  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:58.003233  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:58.500239  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:58.500265  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:58.500275  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:58.500280  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:58.503241  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:40:58.504056  400041 node_ready.go:53] node "ha-174833-m02" has status "Ready":"False"
	I1030 18:40:59.000302  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:59.000325  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:59.000335  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:59.000338  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:59.003378  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:59.500257  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:59.500293  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:59.500305  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:59.500311  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:59.503678  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:59.999943  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:59.999974  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:59.999984  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:59.999988  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:00.003694  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:00.499870  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:00.499894  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:00.499903  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:00.499906  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:00.503912  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:00.504852  400041 node_ready.go:53] node "ha-174833-m02" has status "Ready":"False"
	I1030 18:41:01.000256  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:01.000287  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:01.000303  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:01.000310  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:01.004687  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:41:01.500249  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:01.500275  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:01.500286  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:01.500292  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:01.503725  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:02.000125  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:02.000149  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:02.000159  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:02.000163  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:02.003110  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:41:02.500738  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:02.500764  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:02.500774  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:02.500779  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:02.504318  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:02.504919  400041 node_ready.go:53] node "ha-174833-m02" has status "Ready":"False"
	I1030 18:41:03.000323  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:03.000348  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:03.000361  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:03.000369  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:03.003869  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:03.500542  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:03.500568  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:03.500579  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:03.500585  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:03.503602  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:41:04.000594  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:04.000622  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:04.000633  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:04.000639  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:04.003714  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:04.500712  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:04.500736  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:04.500746  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:04.500752  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:04.503791  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:04.999910  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:04.999934  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:04.999943  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:04.999948  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:05.003533  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:05.004088  400041 node_ready.go:53] node "ha-174833-m02" has status "Ready":"False"
	I1030 18:41:05.500597  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:05.500622  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:05.500630  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:05.500639  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:05.503501  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:41:06.000616  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:06.000647  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:06.000659  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:06.000667  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:06.004719  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:41:06.500833  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:06.500855  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:06.500864  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:06.500868  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:06.504070  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:07.000429  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:07.000469  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:07.000481  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:07.000487  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:07.003689  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:07.004389  400041 node_ready.go:53] node "ha-174833-m02" has status "Ready":"False"
	I1030 18:41:07.500634  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:07.500659  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:07.500670  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:07.500676  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:07.503714  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:08.000797  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:08.000823  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.000835  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.000839  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.004162  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:08.500552  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:08.500576  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.500584  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.500588  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.503781  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:08.504368  400041 node_ready.go:49] node "ha-174833-m02" has status "Ready":"True"
	I1030 18:41:08.504387  400041 node_ready.go:38] duration metric: took 21.004733688s for node "ha-174833-m02" to be "Ready" ...
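
The loop above simply re-issues GET /api/v1/nodes/ha-174833-m02 roughly every 500ms until the node reports a Ready condition of "True". A minimal client-go sketch of that check is shown below; the node name, interval and timeout are taken from this log, the kubeconfig path and helper name are assumptions, and this is not minikube's actual node_ready implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady polls the Node object until its Ready condition is True
// or the timeout expires, mirroring the GET loop in the log above.
func waitForNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				return nil
			}
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("node %q not Ready after %v", name, timeout)
}

func main() {
	// Hypothetical kubeconfig path, used only for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForNodeReady(context.Background(), cs, "ha-174833-m02", 500*time.Millisecond, 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println(`node "ha-174833-m02" has status "Ready":"True"`)
}
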
	I1030 18:41:08.504399  400041 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 18:41:08.504510  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods
	I1030 18:41:08.504522  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.504533  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.504540  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.508519  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:08.514243  400041 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qrkkc" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:08.514348  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-qrkkc
	I1030 18:41:08.514359  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.514370  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.514375  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.517179  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:41:08.518000  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:41:08.518014  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.518021  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.518026  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.520277  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:41:08.520732  400041 pod_ready.go:93] pod "coredns-7c65d6cfc9-qrkkc" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:08.520749  400041 pod_ready.go:82] duration metric: took 6.484522ms for pod "coredns-7c65d6cfc9-qrkkc" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:08.520758  400041 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tnj67" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:08.520818  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-tnj67
	I1030 18:41:08.520826  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.520832  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.520837  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.523187  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:41:08.523748  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:41:08.523763  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.523770  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.523773  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.525598  400041 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 18:41:08.526045  400041 pod_ready.go:93] pod "coredns-7c65d6cfc9-tnj67" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:08.526061  400041 pod_ready.go:82] duration metric: took 5.296844ms for pod "coredns-7c65d6cfc9-tnj67" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:08.526073  400041 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:08.526128  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174833
	I1030 18:41:08.526137  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.526147  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.526155  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.528137  400041 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 18:41:08.528632  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:41:08.528646  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.528653  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.528656  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.530536  400041 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 18:41:08.530970  400041 pod_ready.go:93] pod "etcd-ha-174833" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:08.530985  400041 pod_ready.go:82] duration metric: took 4.904104ms for pod "etcd-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:08.530995  400041 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:08.531044  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174833-m02
	I1030 18:41:08.531054  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.531063  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.531071  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.532895  400041 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 18:41:08.533572  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:08.533585  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.533592  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.533598  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.535476  400041 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 18:41:08.535920  400041 pod_ready.go:93] pod "etcd-ha-174833-m02" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:08.535936  400041 pod_ready.go:82] duration metric: took 4.934707ms for pod "etcd-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:08.535947  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:08.701353  400041 request.go:632] Waited for 165.322436ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174833
	I1030 18:41:08.701427  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174833
	I1030 18:41:08.701434  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.701445  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.701455  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.704722  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:08.900709  400041 request.go:632] Waited for 195.283762ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:41:08.900771  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:41:08.900777  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.900787  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.900793  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.903675  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:41:08.904204  400041 pod_ready.go:93] pod "kube-apiserver-ha-174833" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:08.904224  400041 pod_ready.go:82] duration metric: took 368.270404ms for pod "kube-apiserver-ha-174833" in "kube-system" namespace to be "Ready" ...
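
The "Waited ... due to client-side throttling, not priority and fairness" messages above come from client-go's own request rate limiter (by default about 5 QPS with a burst of 10), not from the API server. A hedged sketch of how a client could raise those limits on its rest.Config follows; the values are illustrative and not what minikube configures.

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	// client-go throttles requests client-side once QPS/Burst are exceeded,
	// which is what produces the "Waited ... due to client-side throttling" lines.
	cfg.QPS = 50    // illustrative value; the library default is low
	cfg.Burst = 100 // illustrative value
	_ = kubernetes.NewForConfigOrDie(cfg)
}
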
	I1030 18:41:08.904235  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:09.101325  400041 request.go:632] Waited for 196.99596ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174833-m02
	I1030 18:41:09.101392  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174833-m02
	I1030 18:41:09.101397  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:09.101406  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:09.101414  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:09.104943  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:09.301209  400041 request.go:632] Waited for 195.378832ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:09.301280  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:09.301286  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:09.301294  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:09.301299  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:09.304703  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:09.305150  400041 pod_ready.go:93] pod "kube-apiserver-ha-174833-m02" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:09.305171  400041 pod_ready.go:82] duration metric: took 400.929601ms for pod "kube-apiserver-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:09.305183  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:09.501368  400041 request.go:632] Waited for 196.079315ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174833
	I1030 18:41:09.501455  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174833
	I1030 18:41:09.501468  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:09.501478  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:09.501486  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:09.505228  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:09.701240  400041 request.go:632] Waited for 195.369784ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:41:09.701317  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:41:09.701322  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:09.701331  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:09.701334  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:09.703994  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:41:09.704752  400041 pod_ready.go:93] pod "kube-controller-manager-ha-174833" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:09.704770  400041 pod_ready.go:82] duration metric: took 399.581191ms for pod "kube-controller-manager-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:09.704781  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:09.900901  400041 request.go:632] Waited for 196.026591ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174833-m02
	I1030 18:41:09.900964  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174833-m02
	I1030 18:41:09.900969  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:09.900978  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:09.900983  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:09.904074  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:10.101112  400041 request.go:632] Waited for 196.368613ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:10.101194  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:10.101205  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:10.101214  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:10.101226  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:10.104324  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:10.104744  400041 pod_ready.go:93] pod "kube-controller-manager-ha-174833-m02" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:10.104763  400041 pod_ready.go:82] duration metric: took 399.976925ms for pod "kube-controller-manager-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:10.104774  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2qt2n" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:10.300860  400041 request.go:632] Waited for 196.007769ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2qt2n
	I1030 18:41:10.300943  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2qt2n
	I1030 18:41:10.300949  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:10.300957  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:10.300968  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:10.304042  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:10.501291  400041 request.go:632] Waited for 196.406771ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:41:10.501358  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:41:10.501363  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:10.501372  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:10.501378  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:10.504471  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:10.504946  400041 pod_ready.go:93] pod "kube-proxy-2qt2n" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:10.504966  400041 pod_ready.go:82] duration metric: took 400.186291ms for pod "kube-proxy-2qt2n" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:10.504985  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hg2st" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:10.701128  400041 request.go:632] Waited for 196.042962ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hg2st
	I1030 18:41:10.701198  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hg2st
	I1030 18:41:10.701203  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:10.701211  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:10.701218  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:10.704595  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:10.900756  400041 request.go:632] Waited for 195.290492ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:10.900855  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:10.900861  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:10.900869  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:10.900878  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:10.904332  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:10.904829  400041 pod_ready.go:93] pod "kube-proxy-hg2st" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:10.904849  400041 pod_ready.go:82] duration metric: took 399.858433ms for pod "kube-proxy-hg2st" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:10.904860  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:11.101047  400041 request.go:632] Waited for 196.091867ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174833
	I1030 18:41:11.101112  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174833
	I1030 18:41:11.101117  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:11.101125  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:11.101130  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:11.104800  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:11.300654  400041 request.go:632] Waited for 195.298322ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:41:11.300720  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:41:11.300731  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:11.300740  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:11.300743  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:11.304342  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:11.304796  400041 pod_ready.go:93] pod "kube-scheduler-ha-174833" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:11.304815  400041 pod_ready.go:82] duration metric: took 399.947891ms for pod "kube-scheduler-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:11.304826  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:11.500975  400041 request.go:632] Waited for 196.04993ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174833-m02
	I1030 18:41:11.501040  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174833-m02
	I1030 18:41:11.501045  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:11.501052  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:11.501057  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:11.504438  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:11.701379  400041 request.go:632] Waited for 196.340488ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:11.701443  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:11.701449  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:11.701457  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:11.701462  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:11.704386  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:41:11.704831  400041 pod_ready.go:93] pod "kube-scheduler-ha-174833-m02" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:11.704850  400041 pod_ready.go:82] duration metric: took 400.015715ms for pod "kube-scheduler-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:11.704863  400041 pod_ready.go:39] duration metric: took 3.200450336s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
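
The section above waits for each system-critical pod group (kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) to report a Ready condition. A minimal sketch of checking one such label group with client-go is shown below; the namespace and label selectors are taken from the log, the kubeconfig path and helper name are assumptions, and this is not minikube's pod_ready code.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// allPodsReady reports whether every pod matching the selector has a
// Ready condition of True, similar to the per-pod checks in the log.
func allPodsReady(ctx context.Context, cs *kubernetes.Clientset, namespace, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, err
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
				break
			}
		}
		if !ready {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	// Hypothetical kubeconfig path, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	selectors := []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler"}
	for _, sel := range selectors {
		ok, err := allPodsReady(context.Background(), cs, "kube-system", sel)
		fmt.Printf("%s ready=%v err=%v\n", sel, ok, err)
	}
}
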
	I1030 18:41:11.704882  400041 api_server.go:52] waiting for apiserver process to appear ...
	I1030 18:41:11.704944  400041 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 18:41:11.723542  400041 api_server.go:72] duration metric: took 24.458488953s to wait for apiserver process to appear ...
	I1030 18:41:11.723564  400041 api_server.go:88] waiting for apiserver healthz status ...
	I1030 18:41:11.723583  400041 api_server.go:253] Checking apiserver healthz at https://192.168.39.141:8443/healthz ...
	I1030 18:41:11.729129  400041 api_server.go:279] https://192.168.39.141:8443/healthz returned 200:
	ok
	I1030 18:41:11.729191  400041 round_trippers.go:463] GET https://192.168.39.141:8443/version
	I1030 18:41:11.729199  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:11.729206  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:11.729213  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:11.729902  400041 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1030 18:41:11.729987  400041 api_server.go:141] control plane version: v1.31.2
	I1030 18:41:11.730004  400041 api_server.go:131] duration metric: took 6.434971ms to wait for apiserver health ...
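
After the pod checks, the log probes /healthz (which answers with the plain string "ok") and /version (v1.31.2 in this run). A short sketch of the same two probes through client-go's discovery/REST client, assuming the same hypothetical kubeconfig path as above:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// GET /healthz - the apiserver returns the plain string "ok" when healthy.
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Printf("/healthz returned: %s\n", body)

	// GET /version - reports the control plane version.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion)
}
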
	I1030 18:41:11.730015  400041 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 18:41:11.901454  400041 request.go:632] Waited for 171.341792ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods
	I1030 18:41:11.901536  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods
	I1030 18:41:11.901542  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:11.901550  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:11.901554  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:11.906457  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:41:11.911360  400041 system_pods.go:59] 17 kube-system pods found
	I1030 18:41:11.911389  400041 system_pods.go:61] "coredns-7c65d6cfc9-qrkkc" [3470734c-61ab-4cd9-a026-f07d5ca6a290] Running
	I1030 18:41:11.911396  400041 system_pods.go:61] "coredns-7c65d6cfc9-tnj67" [f869042d-e37b-414d-ab11-422e933e2952] Running
	I1030 18:41:11.911402  400041 system_pods.go:61] "etcd-ha-174833" [af22e3f8-298d-4ab8-ac7d-cf6b915858ce] Running
	I1030 18:41:11.911408  400041 system_pods.go:61] "etcd-ha-174833-m02" [d5fdc5b9-47c3-4308-8b03-e01254a915cd] Running
	I1030 18:41:11.911413  400041 system_pods.go:61] "kindnet-pm48g" [7c2c95da-7659-43c4-aae2-3af6fbe9515c] Running
	I1030 18:41:11.911418  400041 system_pods.go:61] "kindnet-rlzbn" [74a207e7-cdd5-4c43-a668-9a6445d0bacf] Running
	I1030 18:41:11.911424  400041 system_pods.go:61] "kube-apiserver-ha-174833" [e1f4e473-d722-4cbc-a8ae-e80612823895] Running
	I1030 18:41:11.911432  400041 system_pods.go:61] "kube-apiserver-ha-174833-m02" [d2f5f227-a891-4678-b8e8-a51051bc80ac] Running
	I1030 18:41:11.911437  400041 system_pods.go:61] "kube-controller-manager-ha-174833" [b95baef4-2d97-4714-903f-8a63c8f1f647] Running
	I1030 18:41:11.911440  400041 system_pods.go:61] "kube-controller-manager-ha-174833-m02" [5293b489-8a8c-4475-b549-d10e1a631aa6] Running
	I1030 18:41:11.911444  400041 system_pods.go:61] "kube-proxy-2qt2n" [fcc90b13-926f-4a7e-aa12-3e274c0777f6] Running
	I1030 18:41:11.911447  400041 system_pods.go:61] "kube-proxy-hg2st" [99ac908a-6d13-4471-a54b-bede7bde77b7] Running
	I1030 18:41:11.911452  400041 system_pods.go:61] "kube-scheduler-ha-174833" [d0d4afaa-39ee-4b4d-afdb-3a41effecd4c] Running
	I1030 18:41:11.911458  400041 system_pods.go:61] "kube-scheduler-ha-174833-m02" [f0cd9106-8c0d-4c16-a83f-3d5e84d7d813] Running
	I1030 18:41:11.911461  400041 system_pods.go:61] "kube-vip-ha-174833" [c4903552-8a15-4dc2-ba98-c3d10b8f3288] Running
	I1030 18:41:11.911464  400041 system_pods.go:61] "kube-vip-ha-174833-m02" [9d43181d-55d3-482c-bd30-df3059f68edd] Running
	I1030 18:41:11.911467  400041 system_pods.go:61] "storage-provisioner" [8e6d1d5e-1944-4c77-8018-42bc526984c2] Running
	I1030 18:41:11.911474  400041 system_pods.go:74] duration metric: took 181.449525ms to wait for pod list to return data ...
	I1030 18:41:11.911484  400041 default_sa.go:34] waiting for default service account to be created ...
	I1030 18:41:12.100968  400041 request.go:632] Waited for 189.365167ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/default/serviceaccounts
	I1030 18:41:12.101033  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/default/serviceaccounts
	I1030 18:41:12.101038  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:12.101046  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:12.101054  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:12.104878  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:12.105115  400041 default_sa.go:45] found service account: "default"
	I1030 18:41:12.105131  400041 default_sa.go:55] duration metric: took 193.641266ms for default service account to be created ...
	I1030 18:41:12.105141  400041 system_pods.go:116] waiting for k8s-apps to be running ...
	I1030 18:41:12.301355  400041 request.go:632] Waited for 196.109942ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods
	I1030 18:41:12.301420  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods
	I1030 18:41:12.301425  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:12.301433  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:12.301438  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:12.306382  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:41:12.311406  400041 system_pods.go:86] 17 kube-system pods found
	I1030 18:41:12.311437  400041 system_pods.go:89] "coredns-7c65d6cfc9-qrkkc" [3470734c-61ab-4cd9-a026-f07d5ca6a290] Running
	I1030 18:41:12.311446  400041 system_pods.go:89] "coredns-7c65d6cfc9-tnj67" [f869042d-e37b-414d-ab11-422e933e2952] Running
	I1030 18:41:12.311454  400041 system_pods.go:89] "etcd-ha-174833" [af22e3f8-298d-4ab8-ac7d-cf6b915858ce] Running
	I1030 18:41:12.311460  400041 system_pods.go:89] "etcd-ha-174833-m02" [d5fdc5b9-47c3-4308-8b03-e01254a915cd] Running
	I1030 18:41:12.311465  400041 system_pods.go:89] "kindnet-pm48g" [7c2c95da-7659-43c4-aae2-3af6fbe9515c] Running
	I1030 18:41:12.311471  400041 system_pods.go:89] "kindnet-rlzbn" [74a207e7-cdd5-4c43-a668-9a6445d0bacf] Running
	I1030 18:41:12.311477  400041 system_pods.go:89] "kube-apiserver-ha-174833" [e1f4e473-d722-4cbc-a8ae-e80612823895] Running
	I1030 18:41:12.311486  400041 system_pods.go:89] "kube-apiserver-ha-174833-m02" [d2f5f227-a891-4678-b8e8-a51051bc80ac] Running
	I1030 18:41:12.311492  400041 system_pods.go:89] "kube-controller-manager-ha-174833" [b95baef4-2d97-4714-903f-8a63c8f1f647] Running
	I1030 18:41:12.311502  400041 system_pods.go:89] "kube-controller-manager-ha-174833-m02" [5293b489-8a8c-4475-b549-d10e1a631aa6] Running
	I1030 18:41:12.311509  400041 system_pods.go:89] "kube-proxy-2qt2n" [fcc90b13-926f-4a7e-aa12-3e274c0777f6] Running
	I1030 18:41:12.311517  400041 system_pods.go:89] "kube-proxy-hg2st" [99ac908a-6d13-4471-a54b-bede7bde77b7] Running
	I1030 18:41:12.311525  400041 system_pods.go:89] "kube-scheduler-ha-174833" [d0d4afaa-39ee-4b4d-afdb-3a41effecd4c] Running
	I1030 18:41:12.311531  400041 system_pods.go:89] "kube-scheduler-ha-174833-m02" [f0cd9106-8c0d-4c16-a83f-3d5e84d7d813] Running
	I1030 18:41:12.311540  400041 system_pods.go:89] "kube-vip-ha-174833" [c4903552-8a15-4dc2-ba98-c3d10b8f3288] Running
	I1030 18:41:12.311546  400041 system_pods.go:89] "kube-vip-ha-174833-m02" [9d43181d-55d3-482c-bd30-df3059f68edd] Running
	I1030 18:41:12.311554  400041 system_pods.go:89] "storage-provisioner" [8e6d1d5e-1944-4c77-8018-42bc526984c2] Running
	I1030 18:41:12.311563  400041 system_pods.go:126] duration metric: took 206.414957ms to wait for k8s-apps to be running ...
	I1030 18:41:12.311574  400041 system_svc.go:44] waiting for kubelet service to be running ....
	I1030 18:41:12.311636  400041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 18:41:12.327021  400041 system_svc.go:56] duration metric: took 15.42192ms WaitForService to wait for kubelet
	I1030 18:41:12.327057  400041 kubeadm.go:582] duration metric: took 25.062007913s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
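
The kubelet check above runs "sudo systemctl is-active --quiet service kubelet" through minikube's ssh_runner inside the VM. Purely for illustration, the same command sketched as a local exec call (which is not how minikube runs it):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the command from the log; exit status 0 means the unit is active.
	cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet service is not active:", err)
		return
	}
	fmt.Println("kubelet service is active")
}
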
	I1030 18:41:12.327076  400041 node_conditions.go:102] verifying NodePressure condition ...
	I1030 18:41:12.501567  400041 request.go:632] Waited for 174.380598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes
	I1030 18:41:12.501632  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes
	I1030 18:41:12.501638  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:12.501647  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:12.501651  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:12.505969  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:41:12.506702  400041 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 18:41:12.506731  400041 node_conditions.go:123] node cpu capacity is 2
	I1030 18:41:12.506744  400041 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 18:41:12.506747  400041 node_conditions.go:123] node cpu capacity is 2
	I1030 18:41:12.506751  400041 node_conditions.go:105] duration metric: took 179.67107ms to run NodePressure ...
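
The NodePressure step reads each node's ephemeral-storage and CPU capacity (17734596Ki and 2 CPUs per node in this run). A sketch of the same read with client-go, again with a hypothetical kubeconfig path:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral-storage capacity %s, cpu capacity %s\n",
			n.Name, storage.String(), cpu.String())
	}
}
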
	I1030 18:41:12.506763  400041 start.go:241] waiting for startup goroutines ...
	I1030 18:41:12.506788  400041 start.go:255] writing updated cluster config ...
	I1030 18:41:12.509015  400041 out.go:201] 
	I1030 18:41:12.510595  400041 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:41:12.510702  400041 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/config.json ...
	I1030 18:41:12.512413  400041 out.go:177] * Starting "ha-174833-m03" control-plane node in "ha-174833" cluster
	I1030 18:41:12.513538  400041 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 18:41:12.513560  400041 cache.go:56] Caching tarball of preloaded images
	I1030 18:41:12.513661  400041 preload.go:172] Found /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1030 18:41:12.513676  400041 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1030 18:41:12.513774  400041 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/config.json ...
	I1030 18:41:12.513991  400041 start.go:360] acquireMachinesLock for ha-174833-m03: {Name:mk35e25f53fa8cfadb39ca0ecdccfc2b3fbe845b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 18:41:12.514046  400041 start.go:364] duration metric: took 32.901µs to acquireMachinesLock for "ha-174833-m03"
	I1030 18:41:12.514072  400041 start.go:93] Provisioning new machine with config: &{Name:ha-174833 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-174833 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 18:41:12.514208  400041 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1030 18:41:12.515720  400041 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1030 18:41:12.515810  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:41:12.515845  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:41:12.531298  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38659
	I1030 18:41:12.531779  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:41:12.532302  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:41:12.532328  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:41:12.532695  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:41:12.532932  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetMachineName
	I1030 18:41:12.533094  400041 main.go:141] libmachine: (ha-174833-m03) Calling .DriverName
	I1030 18:41:12.533248  400041 start.go:159] libmachine.API.Create for "ha-174833" (driver="kvm2")
	I1030 18:41:12.533281  400041 client.go:168] LocalClient.Create starting
	I1030 18:41:12.533344  400041 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem
	I1030 18:41:12.533389  400041 main.go:141] libmachine: Decoding PEM data...
	I1030 18:41:12.533410  400041 main.go:141] libmachine: Parsing certificate...
	I1030 18:41:12.533483  400041 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem
	I1030 18:41:12.533512  400041 main.go:141] libmachine: Decoding PEM data...
	I1030 18:41:12.533529  400041 main.go:141] libmachine: Parsing certificate...
	I1030 18:41:12.533556  400041 main.go:141] libmachine: Running pre-create checks...
	I1030 18:41:12.533582  400041 main.go:141] libmachine: (ha-174833-m03) Calling .PreCreateCheck
	I1030 18:41:12.533754  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetConfigRaw
	I1030 18:41:12.534141  400041 main.go:141] libmachine: Creating machine...
	I1030 18:41:12.534155  400041 main.go:141] libmachine: (ha-174833-m03) Calling .Create
	I1030 18:41:12.534316  400041 main.go:141] libmachine: (ha-174833-m03) Creating KVM machine...
	I1030 18:41:12.535469  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found existing default KVM network
	I1030 18:41:12.535689  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found existing private KVM network mk-ha-174833
	I1030 18:41:12.535839  400041 main.go:141] libmachine: (ha-174833-m03) Setting up store path in /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03 ...
	I1030 18:41:12.535890  400041 main.go:141] libmachine: (ha-174833-m03) Building disk image from file:///home/jenkins/minikube-integration/19883-381834/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1030 18:41:12.535946  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:12.535806  400817 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 18:41:12.536022  400041 main.go:141] libmachine: (ha-174833-m03) Downloading /home/jenkins/minikube-integration/19883-381834/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19883-381834/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1030 18:41:12.821754  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:12.821614  400817 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/id_rsa...
	I1030 18:41:12.940970  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:12.940841  400817 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/ha-174833-m03.rawdisk...
	I1030 18:41:12.941002  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Writing magic tar header
	I1030 18:41:12.941016  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Writing SSH key tar header
	I1030 18:41:12.941027  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:12.940965  400817 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03 ...
	I1030 18:41:12.941045  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03
	I1030 18:41:12.941128  400041 main.go:141] libmachine: (ha-174833-m03) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03 (perms=drwx------)
	I1030 18:41:12.941149  400041 main.go:141] libmachine: (ha-174833-m03) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube/machines (perms=drwxr-xr-x)
	I1030 18:41:12.941160  400041 main.go:141] libmachine: (ha-174833-m03) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube (perms=drwxr-xr-x)
	I1030 18:41:12.941183  400041 main.go:141] libmachine: (ha-174833-m03) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834 (perms=drwxrwxr-x)
	I1030 18:41:12.941197  400041 main.go:141] libmachine: (ha-174833-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1030 18:41:12.941212  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube/machines
	I1030 18:41:12.941227  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 18:41:12.941239  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834
	I1030 18:41:12.941248  400041 main.go:141] libmachine: (ha-174833-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1030 18:41:12.941259  400041 main.go:141] libmachine: (ha-174833-m03) Creating domain...
	I1030 18:41:12.941276  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1030 18:41:12.941291  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Checking permissions on dir: /home/jenkins
	I1030 18:41:12.941301  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Checking permissions on dir: /home
	I1030 18:41:12.941315  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Skipping /home - not owner
	I1030 18:41:12.942234  400041 main.go:141] libmachine: (ha-174833-m03) define libvirt domain using xml: 
	I1030 18:41:12.942260  400041 main.go:141] libmachine: (ha-174833-m03) <domain type='kvm'>
	I1030 18:41:12.942270  400041 main.go:141] libmachine: (ha-174833-m03)   <name>ha-174833-m03</name>
	I1030 18:41:12.942277  400041 main.go:141] libmachine: (ha-174833-m03)   <memory unit='MiB'>2200</memory>
	I1030 18:41:12.942286  400041 main.go:141] libmachine: (ha-174833-m03)   <vcpu>2</vcpu>
	I1030 18:41:12.942296  400041 main.go:141] libmachine: (ha-174833-m03)   <features>
	I1030 18:41:12.942305  400041 main.go:141] libmachine: (ha-174833-m03)     <acpi/>
	I1030 18:41:12.942315  400041 main.go:141] libmachine: (ha-174833-m03)     <apic/>
	I1030 18:41:12.942326  400041 main.go:141] libmachine: (ha-174833-m03)     <pae/>
	I1030 18:41:12.942335  400041 main.go:141] libmachine: (ha-174833-m03)     
	I1030 18:41:12.942346  400041 main.go:141] libmachine: (ha-174833-m03)   </features>
	I1030 18:41:12.942353  400041 main.go:141] libmachine: (ha-174833-m03)   <cpu mode='host-passthrough'>
	I1030 18:41:12.942387  400041 main.go:141] libmachine: (ha-174833-m03)   
	I1030 18:41:12.942411  400041 main.go:141] libmachine: (ha-174833-m03)   </cpu>
	I1030 18:41:12.942424  400041 main.go:141] libmachine: (ha-174833-m03)   <os>
	I1030 18:41:12.942433  400041 main.go:141] libmachine: (ha-174833-m03)     <type>hvm</type>
	I1030 18:41:12.942446  400041 main.go:141] libmachine: (ha-174833-m03)     <boot dev='cdrom'/>
	I1030 18:41:12.942456  400041 main.go:141] libmachine: (ha-174833-m03)     <boot dev='hd'/>
	I1030 18:41:12.942469  400041 main.go:141] libmachine: (ha-174833-m03)     <bootmenu enable='no'/>
	I1030 18:41:12.942502  400041 main.go:141] libmachine: (ha-174833-m03)   </os>
	I1030 18:41:12.942521  400041 main.go:141] libmachine: (ha-174833-m03)   <devices>
	I1030 18:41:12.942532  400041 main.go:141] libmachine: (ha-174833-m03)     <disk type='file' device='cdrom'>
	I1030 18:41:12.942543  400041 main.go:141] libmachine: (ha-174833-m03)       <source file='/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/boot2docker.iso'/>
	I1030 18:41:12.942552  400041 main.go:141] libmachine: (ha-174833-m03)       <target dev='hdc' bus='scsi'/>
	I1030 18:41:12.942561  400041 main.go:141] libmachine: (ha-174833-m03)       <readonly/>
	I1030 18:41:12.942566  400041 main.go:141] libmachine: (ha-174833-m03)     </disk>
	I1030 18:41:12.942574  400041 main.go:141] libmachine: (ha-174833-m03)     <disk type='file' device='disk'>
	I1030 18:41:12.942581  400041 main.go:141] libmachine: (ha-174833-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1030 18:41:12.942587  400041 main.go:141] libmachine: (ha-174833-m03)       <source file='/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/ha-174833-m03.rawdisk'/>
	I1030 18:41:12.942606  400041 main.go:141] libmachine: (ha-174833-m03)       <target dev='hda' bus='virtio'/>
	I1030 18:41:12.942619  400041 main.go:141] libmachine: (ha-174833-m03)     </disk>
	I1030 18:41:12.942627  400041 main.go:141] libmachine: (ha-174833-m03)     <interface type='network'>
	I1030 18:41:12.942635  400041 main.go:141] libmachine: (ha-174833-m03)       <source network='mk-ha-174833'/>
	I1030 18:41:12.942648  400041 main.go:141] libmachine: (ha-174833-m03)       <model type='virtio'/>
	I1030 18:41:12.942658  400041 main.go:141] libmachine: (ha-174833-m03)     </interface>
	I1030 18:41:12.942670  400041 main.go:141] libmachine: (ha-174833-m03)     <interface type='network'>
	I1030 18:41:12.942697  400041 main.go:141] libmachine: (ha-174833-m03)       <source network='default'/>
	I1030 18:41:12.942736  400041 main.go:141] libmachine: (ha-174833-m03)       <model type='virtio'/>
	I1030 18:41:12.942764  400041 main.go:141] libmachine: (ha-174833-m03)     </interface>
	I1030 18:41:12.942779  400041 main.go:141] libmachine: (ha-174833-m03)     <serial type='pty'>
	I1030 18:41:12.942790  400041 main.go:141] libmachine: (ha-174833-m03)       <target port='0'/>
	I1030 18:41:12.942802  400041 main.go:141] libmachine: (ha-174833-m03)     </serial>
	I1030 18:41:12.942812  400041 main.go:141] libmachine: (ha-174833-m03)     <console type='pty'>
	I1030 18:41:12.942823  400041 main.go:141] libmachine: (ha-174833-m03)       <target type='serial' port='0'/>
	I1030 18:41:12.942832  400041 main.go:141] libmachine: (ha-174833-m03)     </console>
	I1030 18:41:12.942841  400041 main.go:141] libmachine: (ha-174833-m03)     <rng model='virtio'>
	I1030 18:41:12.942852  400041 main.go:141] libmachine: (ha-174833-m03)       <backend model='random'>/dev/random</backend>
	I1030 18:41:12.942885  400041 main.go:141] libmachine: (ha-174833-m03)     </rng>
	I1030 18:41:12.942907  400041 main.go:141] libmachine: (ha-174833-m03)     
	I1030 18:41:12.942929  400041 main.go:141] libmachine: (ha-174833-m03)     
	I1030 18:41:12.942938  400041 main.go:141] libmachine: (ha-174833-m03)   </devices>
	I1030 18:41:12.942946  400041 main.go:141] libmachine: (ha-174833-m03) </domain>
	I1030 18:41:12.942957  400041 main.go:141] libmachine: (ha-174833-m03) 
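The block above is the libvirt domain definition the kvm2 driver generates before creating the node VM: 2200 MiB of memory, 2 vCPUs, the boot2docker ISO attached as a bootable CD-ROM, the raw disk image, and two virtio NICs (the private mk-ha-174833 network plus libvirt's default network). As a rough, assumed illustration of how such a definition can be rendered (this is not minikube's actual code; struct fields and the template are placeholders), a Go text/template is enough:

package main

import (
	"os"
	"text/template"
)

// Illustrative only: renders a stripped-down <domain> like the one logged above.
type domain struct {
	Name     string
	MemoryMB int
	VCPU     int
	ISO      string
	Disk     string
	Network  string
}

const domainXML = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMB}}</memory>
  <vcpu>{{.VCPU}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='cdrom'><source file='{{.ISO}}'/><target dev='hdc' bus='scsi'/><readonly/></disk>
    <disk type='file' device='disk'><source file='{{.Disk}}'/><target dev='hda' bus='virtio'/></disk>
    <interface type='network'><source network='{{.Network}}'/><model type='virtio'/></interface>
  </devices>
</domain>
`

func main() {
	t := template.Must(template.New("domain").Parse(domainXML))
	_ = t.Execute(os.Stdout, domain{
		Name:     "ha-174833-m03",
		MemoryMB: 2200,
		VCPU:     2,
		ISO:      "/path/to/boot2docker.iso",
		Disk:     "/path/to/ha-174833-m03.rawdisk",
		Network:  "mk-ha-174833",
	})
}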
	I1030 18:41:12.949898  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:1a:b3:c5 in network default
	I1030 18:41:12.950445  400041 main.go:141] libmachine: (ha-174833-m03) Ensuring networks are active...
	I1030 18:41:12.950469  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:12.951138  400041 main.go:141] libmachine: (ha-174833-m03) Ensuring network default is active
	I1030 18:41:12.951462  400041 main.go:141] libmachine: (ha-174833-m03) Ensuring network mk-ha-174833 is active
	I1030 18:41:12.951841  400041 main.go:141] libmachine: (ha-174833-m03) Getting domain xml...
	I1030 18:41:12.952538  400041 main.go:141] libmachine: (ha-174833-m03) Creating domain...
	I1030 18:41:14.179359  400041 main.go:141] libmachine: (ha-174833-m03) Waiting to get IP...
	I1030 18:41:14.180307  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:14.180744  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:14.180812  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:14.180741  400817 retry.go:31] will retry after 293.822494ms: waiting for machine to come up
	I1030 18:41:14.476270  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:14.476758  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:14.476784  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:14.476703  400817 retry.go:31] will retry after 283.345671ms: waiting for machine to come up
	I1030 18:41:14.761301  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:14.761803  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:14.761833  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:14.761750  400817 retry.go:31] will retry after 299.766753ms: waiting for machine to come up
	I1030 18:41:15.063146  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:15.063613  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:15.063642  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:15.063557  400817 retry.go:31] will retry after 490.461635ms: waiting for machine to come up
	I1030 18:41:15.557014  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:15.557549  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:15.557577  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:15.557492  400817 retry.go:31] will retry after 739.117277ms: waiting for machine to come up
	I1030 18:41:16.298461  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:16.298926  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:16.298956  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:16.298870  400817 retry.go:31] will retry after 666.546188ms: waiting for machine to come up
	I1030 18:41:16.966687  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:16.967172  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:16.967200  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:16.967117  400817 retry.go:31] will retry after 846.088379ms: waiting for machine to come up
	I1030 18:41:17.814898  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:17.815410  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:17.815440  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:17.815362  400817 retry.go:31] will retry after 1.085711576s: waiting for machine to come up
	I1030 18:41:18.902574  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:18.902922  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:18.902952  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:18.902876  400817 retry.go:31] will retry after 1.834126575s: waiting for machine to come up
	I1030 18:41:20.739528  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:20.739890  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:20.739919  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:20.739850  400817 retry.go:31] will retry after 2.105862328s: waiting for machine to come up
	I1030 18:41:22.847426  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:22.847835  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:22.847867  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:22.847766  400817 retry.go:31] will retry after 2.441796021s: waiting for machine to come up
	I1030 18:41:25.291422  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:25.291864  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:25.291888  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:25.291812  400817 retry.go:31] will retry after 2.18908754s: waiting for machine to come up
	I1030 18:41:27.484272  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:27.484720  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:27.484740  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:27.484674  400817 retry.go:31] will retry after 3.249594938s: waiting for machine to come up
	I1030 18:41:30.735386  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:30.735687  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:30.735711  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:30.735669  400817 retry.go:31] will retry after 5.542117345s: waiting for machine to come up
	I1030 18:41:36.279557  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:36.279987  400041 main.go:141] libmachine: (ha-174833-m03) Found IP for machine: 192.168.39.238
	I1030 18:41:36.280005  400041 main.go:141] libmachine: (ha-174833-m03) Reserving static IP address...
	I1030 18:41:36.280019  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has current primary IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:36.280379  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find host DHCP lease matching {name: "ha-174833-m03", mac: "52:54:00:76:9d:ad", ip: "192.168.39.238"} in network mk-ha-174833
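The repeated "will retry after …ms: waiting for machine to come up" lines above are a poll-with-growing-backoff loop: the driver asks libvirt for a DHCP lease matching the VM's MAC address and sleeps for a progressively longer, jittered interval until an IP appears (here after roughly 22 seconds). A minimal, assumed sketch of that pattern, not the actual retry.go implementation:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address or the timeout elapses,
// sleeping for a growing, jittered interval between attempts.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		sleep := backoff/2 + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		backoff = backoff * 3 / 2 // grow roughly geometrically, as in the log
	}
	return "", errors.New("timed out waiting for an IP address")
}

func main() {
	ip, err := waitForIP(func() (string, error) {
		return "", errors.New("unable to find current IP address") // stub lookup
	}, 2*time.Second)
	fmt.Println(ip, err)
}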
	I1030 18:41:36.353555  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Getting to WaitForSSH function...
	I1030 18:41:36.353581  400041 main.go:141] libmachine: (ha-174833-m03) Reserved static IP address: 192.168.39.238
	I1030 18:41:36.353628  400041 main.go:141] libmachine: (ha-174833-m03) Waiting for SSH to be available...
	I1030 18:41:36.356187  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:36.356543  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833
	I1030 18:41:36.356569  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find defined IP address of network mk-ha-174833 interface with MAC address 52:54:00:76:9d:ad
	I1030 18:41:36.356719  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Using SSH client type: external
	I1030 18:41:36.356745  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/id_rsa (-rw-------)
	I1030 18:41:36.356795  400041 main.go:141] libmachine: (ha-174833-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 18:41:36.356814  400041 main.go:141] libmachine: (ha-174833-m03) DBG | About to run SSH command:
	I1030 18:41:36.356847  400041 main.go:141] libmachine: (ha-174833-m03) DBG | exit 0
	I1030 18:41:36.360778  400041 main.go:141] libmachine: (ha-174833-m03) DBG | SSH cmd err, output: exit status 255: 
	I1030 18:41:36.360804  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1030 18:41:36.360814  400041 main.go:141] libmachine: (ha-174833-m03) DBG | command : exit 0
	I1030 18:41:36.360821  400041 main.go:141] libmachine: (ha-174833-m03) DBG | err     : exit status 255
	I1030 18:41:36.360832  400041 main.go:141] libmachine: (ha-174833-m03) DBG | output  : 
	I1030 18:41:39.361300  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Getting to WaitForSSH function...
	I1030 18:41:39.363671  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.364021  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:39.364051  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.364131  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Using SSH client type: external
	I1030 18:41:39.364170  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/id_rsa (-rw-------)
	I1030 18:41:39.364209  400041 main.go:141] libmachine: (ha-174833-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 18:41:39.364227  400041 main.go:141] libmachine: (ha-174833-m03) DBG | About to run SSH command:
	I1030 18:41:39.364236  400041 main.go:141] libmachine: (ha-174833-m03) DBG | exit 0
	I1030 18:41:39.498991  400041 main.go:141] libmachine: (ha-174833-m03) DBG | SSH cmd err, output: <nil>: 
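WaitForSSH above shells out to the system ssh client with host-key checking disabled and the machine's generated id_rsa, and simply runs `exit 0` on the guest; the first attempt fails with exit status 255 because sshd inside the VM is not up yet, and the retry a few seconds later succeeds. An assumed sketch of that probe:

package main

import (
	"fmt"
	"os/exec"
)

// sshReady runs `exit 0` on the guest and treats a zero exit status as
// "SSH is available". Options mirror the command logged above; paths are
// placeholders.
func sshReady(ip, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@"+ip,
		"exit 0")
	return cmd.Run() == nil // a non-nil error covers the exit status 255 case seen above
}

func main() {
	fmt.Println(sshReady("192.168.39.238", "/path/to/id_rsa"))
}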
	I1030 18:41:39.499302  400041 main.go:141] libmachine: (ha-174833-m03) KVM machine creation complete!
	I1030 18:41:39.499653  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetConfigRaw
	I1030 18:41:39.500359  400041 main.go:141] libmachine: (ha-174833-m03) Calling .DriverName
	I1030 18:41:39.500567  400041 main.go:141] libmachine: (ha-174833-m03) Calling .DriverName
	I1030 18:41:39.500834  400041 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1030 18:41:39.500852  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetState
	I1030 18:41:39.502063  400041 main.go:141] libmachine: Detecting operating system of created instance...
	I1030 18:41:39.502076  400041 main.go:141] libmachine: Waiting for SSH to be available...
	I1030 18:41:39.502081  400041 main.go:141] libmachine: Getting to WaitForSSH function...
	I1030 18:41:39.502086  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHHostname
	I1030 18:41:39.504584  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.504838  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:39.504860  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.505021  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHPort
	I1030 18:41:39.505207  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:39.505352  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:39.505493  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHUsername
	I1030 18:41:39.505642  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:41:39.505855  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1030 18:41:39.505867  400041 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1030 18:41:39.613705  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 18:41:39.613730  400041 main.go:141] libmachine: Detecting the provisioner...
	I1030 18:41:39.613737  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHHostname
	I1030 18:41:39.616442  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.616787  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:39.616812  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.616966  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHPort
	I1030 18:41:39.617171  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:39.617381  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:39.617494  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHUsername
	I1030 18:41:39.617635  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:41:39.617821  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1030 18:41:39.617831  400041 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1030 18:41:39.731009  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1030 18:41:39.731096  400041 main.go:141] libmachine: found compatible host: buildroot
	I1030 18:41:39.731110  400041 main.go:141] libmachine: Provisioning with buildroot...
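The provisioner is chosen by running `cat /etc/os-release` on the new machine and matching the ID field; here ID=buildroot, so the buildroot provisioner is used for the rest of the setup. A small assumed sketch of that detection step:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns KEY=value lines from /etc/os-release into a map,
// stripping surrounding quotes from the values.
func parseOSRelease(out string) map[string]string {
	kv := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || !strings.Contains(line, "=") {
			continue
		}
		parts := strings.SplitN(line, "=", 2)
		kv[parts[0]] = strings.Trim(parts[1], `"`)
	}
	return kv
}

func main() {
	info := parseOSRelease("NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\n")
	fmt.Println(info["ID"] == "buildroot") // true -> use the buildroot provisioner
}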
	I1030 18:41:39.731120  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetMachineName
	I1030 18:41:39.731355  400041 buildroot.go:166] provisioning hostname "ha-174833-m03"
	I1030 18:41:39.731385  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetMachineName
	I1030 18:41:39.731563  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHHostname
	I1030 18:41:39.734727  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.735195  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:39.735225  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.735395  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHPort
	I1030 18:41:39.735599  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:39.735773  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:39.735975  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHUsername
	I1030 18:41:39.736185  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:41:39.736419  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1030 18:41:39.736443  400041 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-174833-m03 && echo "ha-174833-m03" | sudo tee /etc/hostname
	I1030 18:41:39.865251  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-174833-m03
	
	I1030 18:41:39.865295  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHHostname
	I1030 18:41:39.868277  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.868776  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:39.868811  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.868979  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHPort
	I1030 18:41:39.869210  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:39.869426  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:39.869574  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHUsername
	I1030 18:41:39.869780  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:41:39.870007  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1030 18:41:39.870023  400041 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-174833-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-174833-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-174833-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 18:41:39.993047  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
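The shell snippet above makes the new hostname resolvable on the guest itself: if no /etc/hosts line already ends in ha-174833-m03, it either rewrites the existing 127.0.1.1 entry or appends one. The same logic, as an assumed Go sketch over the file's contents:

package main

import (
	"fmt"
	"strings"
)

// ensureHostname mirrors the /etc/hosts edit above: leave the file untouched
// if the name is already mapped, otherwise rewrite 127.0.1.1 or append it.
func ensureHostname(hosts, name string) string {
	if !strings.HasSuffix(hosts, "\n") {
		hosts += "\n"
	}
	if strings.Contains(hosts, " "+name+"\n") || strings.Contains(hosts, "\t"+name+"\n") {
		return hosts
	}
	lines := strings.Split(strings.TrimRight(hosts, "\n"), "\n")
	for i, line := range lines {
		if strings.HasPrefix(line, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name
			return strings.Join(lines, "\n") + "\n"
		}
	}
	return strings.Join(append(lines, "127.0.1.1 "+name), "\n") + "\n"
}

func main() {
	fmt.Print(ensureHostname("127.0.0.1 localhost\n", "ha-174833-m03"))
}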
	I1030 18:41:39.993077  400041 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 18:41:39.993099  400041 buildroot.go:174] setting up certificates
	I1030 18:41:39.993114  400041 provision.go:84] configureAuth start
	I1030 18:41:39.993127  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetMachineName
	I1030 18:41:39.993439  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetIP
	I1030 18:41:39.996433  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.996840  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:39.996869  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.997060  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHHostname
	I1030 18:41:40.000005  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.000422  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:40.000450  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.000565  400041 provision.go:143] copyHostCerts
	I1030 18:41:40.000594  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 18:41:40.000629  400041 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem, removing ...
	I1030 18:41:40.000638  400041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 18:41:40.000698  400041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 18:41:40.000806  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 18:41:40.000825  400041 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem, removing ...
	I1030 18:41:40.000831  400041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 18:41:40.000854  400041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 18:41:40.000910  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 18:41:40.000926  400041 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem, removing ...
	I1030 18:41:40.000932  400041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 18:41:40.000953  400041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 18:41:40.001003  400041 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.ha-174833-m03 san=[127.0.0.1 192.168.39.238 ha-174833-m03 localhost minikube]
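configureAuth generates a server certificate for the new machine whose SANs cover every name and address it may be reached by (127.0.0.1, 192.168.39.238, ha-174833-m03, localhost, minikube), signed by the CA under .minikube/certs. A rough sketch of such a certificate template with Go's crypto/x509; it is self-signed here for brevity, whereas the real flow signs with ca.pem/ca-key.pem, and everything beyond the SAN list is illustrative:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// SANs taken from the log line above; other fields are placeholders.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-174833-m03"}},
		DNSNames:     []string{"ha-174833-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.238")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("generated %d-byte DER certificate\n", len(der))
}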
	I1030 18:41:40.389110  400041 provision.go:177] copyRemoteCerts
	I1030 18:41:40.389174  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 18:41:40.389201  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHHostname
	I1030 18:41:40.391720  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.392157  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:40.392191  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.392466  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHPort
	I1030 18:41:40.392672  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:40.392854  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHUsername
	I1030 18:41:40.393003  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/id_rsa Username:docker}
	I1030 18:41:40.485464  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1030 18:41:40.485543  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 18:41:40.513241  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1030 18:41:40.513314  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1030 18:41:40.537145  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1030 18:41:40.537239  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1030 18:41:40.562099  400041 provision.go:87] duration metric: took 568.966283ms to configureAuth
	I1030 18:41:40.562136  400041 buildroot.go:189] setting minikube options for container-runtime
	I1030 18:41:40.562357  400041 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:41:40.562450  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHHostname
	I1030 18:41:40.565158  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.565531  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:40.565563  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.565700  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHPort
	I1030 18:41:40.565906  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:40.566083  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:40.566192  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHUsername
	I1030 18:41:40.566349  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:41:40.566539  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1030 18:41:40.566554  400041 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 18:41:40.803791  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 18:41:40.803826  400041 main.go:141] libmachine: Checking connection to Docker...
	I1030 18:41:40.803835  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetURL
	I1030 18:41:40.805073  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Using libvirt version 6000000
	I1030 18:41:40.807111  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.807563  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:40.807592  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.807738  400041 main.go:141] libmachine: Docker is up and running!
	I1030 18:41:40.807756  400041 main.go:141] libmachine: Reticulating splines...
	I1030 18:41:40.807765  400041 client.go:171] duration metric: took 28.27447273s to LocalClient.Create
	I1030 18:41:40.807794  400041 start.go:167] duration metric: took 28.274545509s to libmachine.API.Create "ha-174833"
	I1030 18:41:40.807813  400041 start.go:293] postStartSetup for "ha-174833-m03" (driver="kvm2")
	I1030 18:41:40.807829  400041 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 18:41:40.807854  400041 main.go:141] libmachine: (ha-174833-m03) Calling .DriverName
	I1030 18:41:40.808083  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 18:41:40.808112  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHHostname
	I1030 18:41:40.810446  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.810781  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:40.810810  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.810951  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHPort
	I1030 18:41:40.811117  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:40.811251  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHUsername
	I1030 18:41:40.811374  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/id_rsa Username:docker}
	I1030 18:41:40.898250  400041 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 18:41:40.902639  400041 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 18:41:40.902670  400041 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 18:41:40.902762  400041 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 18:41:40.902838  400041 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 18:41:40.902848  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> /etc/ssl/certs/3891442.pem
	I1030 18:41:40.902930  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 18:41:40.911988  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 18:41:40.936666  400041 start.go:296] duration metric: took 128.83333ms for postStartSetup
	I1030 18:41:40.936732  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetConfigRaw
	I1030 18:41:40.937356  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetIP
	I1030 18:41:40.939940  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.940379  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:40.940406  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.940740  400041 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/config.json ...
	I1030 18:41:40.940959  400041 start.go:128] duration metric: took 28.426739922s to createHost
	I1030 18:41:40.940996  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHHostname
	I1030 18:41:40.943340  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.943659  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:40.943683  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.943787  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHPort
	I1030 18:41:40.943992  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:40.944157  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:40.944299  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHUsername
	I1030 18:41:40.944469  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:41:40.944647  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1030 18:41:40.944657  400041 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 18:41:41.054995  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730313701.035748365
	
	I1030 18:41:41.055025  400041 fix.go:216] guest clock: 1730313701.035748365
	I1030 18:41:41.055036  400041 fix.go:229] Guest: 2024-10-30 18:41:41.035748365 +0000 UTC Remote: 2024-10-30 18:41:40.940974319 +0000 UTC m=+147.695761890 (delta=94.774046ms)
	I1030 18:41:41.055058  400041 fix.go:200] guest clock delta is within tolerance: 94.774046ms
	I1030 18:41:41.055065  400041 start.go:83] releasing machines lock for "ha-174833-m03", held for 28.541005951s
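fix.go reads the guest clock with `date +%s.%N` and compares it to the host's view of the time; here the delta is about 95ms, which is within tolerance, so no clock adjustment is needed. A sketch of that comparison (the tolerance value is an assumption):

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// clockDeltaOK parses the guest's `date +%s.%N` output and reports whether
// the guest/host clock difference is within the given tolerance.
func clockDeltaOK(guestOut string, hostTime time.Time, tolerance time.Duration) (time.Duration, bool) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, false
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := guest.Sub(hostTime)
	return delta, math.Abs(float64(delta)) <= float64(tolerance)
}

func main() {
	// Values taken from the log above; prints roughly 94.8ms and true.
	delta, ok := clockDeltaOK("1730313701.035748365", time.Unix(1730313700, 940974319), 2*time.Second)
	fmt.Println(delta, ok)
}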
	I1030 18:41:41.055090  400041 main.go:141] libmachine: (ha-174833-m03) Calling .DriverName
	I1030 18:41:41.055377  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetIP
	I1030 18:41:41.057920  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:41.058257  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:41.058278  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:41.060653  400041 out.go:177] * Found network options:
	I1030 18:41:41.062139  400041 out.go:177]   - NO_PROXY=192.168.39.141,192.168.39.67
	W1030 18:41:41.063472  400041 proxy.go:119] fail to check proxy env: Error ip not in block
	W1030 18:41:41.063496  400041 proxy.go:119] fail to check proxy env: Error ip not in block
	I1030 18:41:41.063508  400041 main.go:141] libmachine: (ha-174833-m03) Calling .DriverName
	I1030 18:41:41.064009  400041 main.go:141] libmachine: (ha-174833-m03) Calling .DriverName
	I1030 18:41:41.064221  400041 main.go:141] libmachine: (ha-174833-m03) Calling .DriverName
	I1030 18:41:41.064313  400041 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 18:41:41.064352  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHHostname
	W1030 18:41:41.064451  400041 proxy.go:119] fail to check proxy env: Error ip not in block
	W1030 18:41:41.064473  400041 proxy.go:119] fail to check proxy env: Error ip not in block
	I1030 18:41:41.064552  400041 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 18:41:41.064575  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHHostname
	I1030 18:41:41.066853  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:41.067199  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:41.067222  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:41.067302  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:41.067479  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHPort
	I1030 18:41:41.067664  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:41.067724  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:41.067749  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:41.067830  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHUsername
	I1030 18:41:41.067906  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHPort
	I1030 18:41:41.067978  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/id_rsa Username:docker}
	I1030 18:41:41.068065  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:41.068181  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHUsername
	I1030 18:41:41.068275  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/id_rsa Username:docker}
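The "fail to check proxy env: Error ip not in block" warnings are benign: for each proxy-related variable the node's IP is checked against the NO_PROXY entries, and the message only records that 192.168.39.238 is not covered by any CIDR block listed there (NO_PROXY currently names just the first two nodes). An assumed sketch of that containment check:

package main

import (
	"fmt"
	"net"
	"strings"
)

// ipInNoProxy reports whether ip matches a NO_PROXY entry either exactly or
// by falling inside a CIDR block.
func ipInNoProxy(ip, noProxy string) bool {
	parsed := net.ParseIP(ip)
	for _, entry := range strings.Split(noProxy, ",") {
		entry = strings.TrimSpace(entry)
		if entry == ip {
			return true
		}
		if _, block, err := net.ParseCIDR(entry); err == nil && block.Contains(parsed) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(ipInNoProxy("192.168.39.238", "192.168.39.141,192.168.39.67")) // false -> warning logged
}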
	I1030 18:41:41.314636  400041 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 18:41:41.321102  400041 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 18:41:41.321173  400041 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 18:41:41.338442  400041 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 18:41:41.338470  400041 start.go:495] detecting cgroup driver to use...
	I1030 18:41:41.338554  400041 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 18:41:41.355526  400041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 18:41:41.369752  400041 docker.go:217] disabling cri-docker service (if available) ...
	I1030 18:41:41.369824  400041 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 18:41:41.384658  400041 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 18:41:41.399117  400041 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 18:41:41.515988  400041 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 18:41:41.659854  400041 docker.go:233] disabling docker service ...
	I1030 18:41:41.659940  400041 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 18:41:41.675386  400041 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 18:41:41.688521  400041 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 18:41:41.830998  400041 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 18:41:41.962743  400041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 18:41:41.976734  400041 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 18:41:41.998554  400041 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1030 18:41:41.998635  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:41:42.010835  400041 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 18:41:42.010904  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:41:42.022771  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:41:42.033993  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:41:42.044518  400041 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 18:41:42.055581  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:41:42.065838  400041 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:41:42.082685  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:41:42.092911  400041 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 18:41:42.102341  400041 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 18:41:42.102398  400041 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 18:41:42.115321  400041 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
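Because the guest kernel has not yet exposed /proc/sys/net/bridge, the sysctl probe fails with status 255; the next two commands fall back to loading br_netfilter and enabling IPv4 forwarding. Sketched in Go under those assumptions, with error handling simplified:

package main

import "os/exec"

// ensureNetfilter mirrors the fallback logged above: if the bridge sysctl
// cannot be read, load br_netfilter, then turn on IPv4 forwarding.
func ensureNetfilter() error {
	if exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run() != nil {
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return err
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	_ = ensureNetfilter()
}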
	I1030 18:41:42.125073  400041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 18:41:42.255762  400041 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1030 18:41:42.348340  400041 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 18:41:42.348402  400041 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 18:41:42.353645  400041 start.go:563] Will wait 60s for crictl version
	I1030 18:41:42.353700  400041 ssh_runner.go:195] Run: which crictl
	I1030 18:41:42.357362  400041 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 18:41:42.403194  400041 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
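After the CRI-O restart, start.go waits up to 60s for /var/run/crio/crio.sock to appear and then up to 60s for `crictl version` to answer; both succeed almost immediately here (cri-o 1.29.1, CRI API v1). An assumed sketch of those bounded waits:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitUntil polls cond every half second until it returns true or the
// timeout elapses.
func waitUntil(cond func() bool, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if cond() {
			return true
		}
		time.Sleep(500 * time.Millisecond)
	}
	return false
}

func main() {
	sockUp := waitUntil(func() bool {
		_, err := os.Stat("/var/run/crio/crio.sock")
		return err == nil
	}, 60*time.Second)
	crictlUp := waitUntil(func() bool {
		return exec.Command("sudo", "/usr/bin/crictl", "version").Run() == nil
	}, 60*time.Second)
	fmt.Println(sockUp, crictlUp)
}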
	I1030 18:41:42.403278  400041 ssh_runner.go:195] Run: crio --version
	I1030 18:41:42.433073  400041 ssh_runner.go:195] Run: crio --version
	I1030 18:41:42.461144  400041 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1030 18:41:42.462700  400041 out.go:177]   - env NO_PROXY=192.168.39.141
	I1030 18:41:42.464361  400041 out.go:177]   - env NO_PROXY=192.168.39.141,192.168.39.67
	I1030 18:41:42.465724  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetIP
	I1030 18:41:42.468442  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:42.468785  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:42.468812  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:42.469009  400041 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1030 18:41:42.473316  400041 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 18:41:42.486401  400041 mustload.go:65] Loading cluster: ha-174833
	I1030 18:41:42.486671  400041 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:41:42.487004  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:41:42.487051  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:41:42.503315  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39623
	I1030 18:41:42.503812  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:41:42.504381  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:41:42.504403  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:41:42.504715  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:41:42.504885  400041 main.go:141] libmachine: (ha-174833) Calling .GetState
	I1030 18:41:42.506310  400041 host.go:66] Checking if "ha-174833" exists ...
	I1030 18:41:42.506684  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:41:42.506729  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:41:42.521795  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36777
	I1030 18:41:42.522246  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:41:42.522834  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:41:42.522857  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:41:42.523225  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:41:42.523429  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:41:42.523593  400041 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833 for IP: 192.168.39.238
	I1030 18:41:42.523605  400041 certs.go:194] generating shared ca certs ...
	I1030 18:41:42.523621  400041 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:41:42.523781  400041 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 18:41:42.523832  400041 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 18:41:42.523846  400041 certs.go:256] generating profile certs ...
	I1030 18:41:42.523984  400041 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.key
	I1030 18:41:42.524022  400041 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.48de31c7
	I1030 18:41:42.524044  400041 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.48de31c7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.141 192.168.39.67 192.168.39.238 192.168.39.254]
	I1030 18:41:42.771082  400041 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.48de31c7 ...
	I1030 18:41:42.771143  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.48de31c7: {Name:mkbb8ab8bf6c18d6d6a31970e3b828800b8fd44f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:41:42.771350  400041 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.48de31c7 ...
	I1030 18:41:42.771369  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.48de31c7: {Name:mk93a1175526096093ebe70ea08ba926787709bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:41:42.771474  400041 certs.go:381] copying /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.48de31c7 -> /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt
	I1030 18:41:42.771640  400041 certs.go:385] copying /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.48de31c7 -> /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key
	I1030 18:41:42.771819  400041 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key
	I1030 18:41:42.771839  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1030 18:41:42.771859  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1030 18:41:42.771878  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1030 18:41:42.771897  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1030 18:41:42.771916  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1030 18:41:42.771935  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1030 18:41:42.771953  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1030 18:41:42.786601  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1030 18:41:42.786716  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem (1338 bytes)
	W1030 18:41:42.786768  400041 certs.go:480] ignoring /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144_empty.pem, impossibly tiny 0 bytes
	I1030 18:41:42.786783  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 18:41:42.786818  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 18:41:42.786855  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 18:41:42.786886  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 18:41:42.786944  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 18:41:42.786987  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> /usr/share/ca-certificates/3891442.pem
	I1030 18:41:42.787011  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:41:42.787031  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem -> /usr/share/ca-certificates/389144.pem
	I1030 18:41:42.787082  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:41:42.790022  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:41:42.790433  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:41:42.790463  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:41:42.790635  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:41:42.790863  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:41:42.791005  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:41:42.791117  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:41:42.862993  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1030 18:41:42.869116  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1030 18:41:42.881084  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1030 18:41:42.885608  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1030 18:41:42.896066  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1030 18:41:42.900395  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1030 18:41:42.911415  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1030 18:41:42.915680  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1030 18:41:42.926002  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1030 18:41:42.929978  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1030 18:41:42.939948  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1030 18:41:42.944073  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1030 18:41:42.954991  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 18:41:42.979919  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 18:41:43.004284  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 18:41:43.027671  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 18:41:43.050807  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1030 18:41:43.073405  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1030 18:41:43.097875  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 18:41:43.121491  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1030 18:41:43.145484  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /usr/share/ca-certificates/3891442.pem (1708 bytes)
	I1030 18:41:43.169567  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 18:41:43.194113  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem --> /usr/share/ca-certificates/389144.pem (1338 bytes)
	I1030 18:41:43.217839  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1030 18:41:43.235214  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1030 18:41:43.251678  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1030 18:41:43.267891  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1030 18:41:43.283793  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1030 18:41:43.301477  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1030 18:41:43.319112  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1030 18:41:43.336222  400041 ssh_runner.go:195] Run: openssl version
	I1030 18:41:43.342021  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 18:41:43.353281  400041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:41:43.357881  400041 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:41:43.357947  400041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:41:43.363573  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 18:41:43.375497  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/389144.pem && ln -fs /usr/share/ca-certificates/389144.pem /etc/ssl/certs/389144.pem"
	I1030 18:41:43.389049  400041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/389144.pem
	I1030 18:41:43.393551  400041 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 18:41:43.393616  400041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/389144.pem
	I1030 18:41:43.399295  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/389144.pem /etc/ssl/certs/51391683.0"
	I1030 18:41:43.411090  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3891442.pem && ln -fs /usr/share/ca-certificates/3891442.pem /etc/ssl/certs/3891442.pem"
	I1030 18:41:43.422010  400041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3891442.pem
	I1030 18:41:43.426629  400041 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 18:41:43.426687  400041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3891442.pem
	I1030 18:41:43.432334  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3891442.pem /etc/ssl/certs/3ec20f2e.0"
	I1030 18:41:43.443256  400041 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 18:41:43.447278  400041 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1030 18:41:43.447336  400041 kubeadm.go:934] updating node {m03 192.168.39.238 8443 v1.31.2 crio true true} ...
	I1030 18:41:43.447423  400041 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-174833-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-174833 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
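The unit drop-in above is what ends up on the joining node as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp further below), pinning --node-ip and --hostname-override to the new machine. A minimal sketch of rendering such a drop-in with Go's text/template follows; the template text and field names are illustrative stand-ins, not minikube's actual template.

package main

import (
	"os"
	"text/template"
)

// kubeletDropIn is an illustrative stand-in for the drop-in shown above;
// the real minikube template carries more flags and conditionals.
const kubeletDropIn = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletDropIn))
	// Values taken from the log above: the third control-plane node of ha-174833.
	if err := t.Execute(os.Stdout, map[string]string{
		"Runtime":  "crio",
		"BinDir":   "/var/lib/minikube/binaries/v1.31.2",
		"NodeName": "ha-174833-m03",
		"NodeIP":   "192.168.39.238",
	}); err != nil {
		panic(err)
	}
}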
	I1030 18:41:43.447453  400041 kube-vip.go:115] generating kube-vip config ...
	I1030 18:41:43.447481  400041 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1030 18:41:43.463867  400041 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1030 18:41:43.463938  400041 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
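This manifest is copied to /etc/kubernetes/manifests/kube-vip.yaml (see the scp a few lines below), so the kubelet runs kube-vip as a static pod that holds the 192.168.39.254 VIP via leader election and, with lb_enable set, load-balances port 8443 across the control planes. Purely as an illustrative sanity check (not something this test does), the generated manifest can be decoded into a corev1.Pod with sigs.k8s.io/yaml:

package main

import (
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

// Decode the generated manifest and spot-check the fields that matter for the VIP:
// host networking and the advertised address/port env vars.
func main() {
	raw, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
	if err != nil {
		panic(err)
	}
	var pod corev1.Pod
	if err := yaml.Unmarshal(raw, &pod); err != nil {
		panic(err)
	}
	fmt.Println("hostNetwork:", pod.Spec.HostNetwork)
	for _, env := range pod.Spec.Containers[0].Env {
		if env.Name == "address" || env.Name == "port" {
			fmt.Printf("%s=%s\n", env.Name, env.Value)
		}
	}
}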
	I1030 18:41:43.463993  400041 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1030 18:41:43.474999  400041 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1030 18:41:43.475044  400041 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1030 18:41:43.485456  400041 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1030 18:41:43.485479  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1030 18:41:43.485517  400041 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1030 18:41:43.485533  400041 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1030 18:41:43.485545  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1030 18:41:43.485517  400041 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1030 18:41:43.485603  400041 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1030 18:41:43.485621  400041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 18:41:43.504131  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1030 18:41:43.504186  400041 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1030 18:41:43.504223  400041 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1030 18:41:43.504222  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1030 18:41:43.504237  400041 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1030 18:41:43.504267  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1030 18:41:43.522121  400041 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1030 18:41:43.522169  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1030 18:41:44.375482  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1030 18:41:44.387138  400041 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1030 18:41:44.405486  400041 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 18:41:44.422728  400041 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1030 18:41:44.439060  400041 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1030 18:41:44.443074  400041 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 18:41:44.455364  400041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 18:41:44.570256  400041 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 18:41:44.588522  400041 host.go:66] Checking if "ha-174833" exists ...
	I1030 18:41:44.589080  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:41:44.589146  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:41:44.605625  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37241
	I1030 18:41:44.606088  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:41:44.606626  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:41:44.606648  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:41:44.607023  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:41:44.607225  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:41:44.607369  400041 start.go:317] joinCluster: &{Name:ha-174833 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-174833 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 18:41:44.607505  400041 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1030 18:41:44.607526  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:41:44.610554  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:41:44.611109  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:41:44.611135  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:41:44.611433  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:41:44.611606  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:41:44.611760  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:41:44.611885  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:41:44.773784  400041 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 18:41:44.773850  400041 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token jbea4g.48iwo9ov7hdxpf8l --discovery-token-ca-cert-hash sha256:b2143b9018fc79be6f35b8981f64b262448bd09f7227405c8dafa29481e81491 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-174833-m03 --control-plane --apiserver-advertise-address=192.168.39.238 --apiserver-bind-port=8443"
	I1030 18:42:06.433926  400041 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token jbea4g.48iwo9ov7hdxpf8l --discovery-token-ca-cert-hash sha256:b2143b9018fc79be6f35b8981f64b262448bd09f7227405c8dafa29481e81491 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-174833-m03 --control-plane --apiserver-advertise-address=192.168.39.238 --apiserver-bind-port=8443": (21.660034767s)
	I1030 18:42:06.433968  400041 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1030 18:42:06.995847  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-174833-m03 minikube.k8s.io/updated_at=2024_10_30T18_42_06_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0 minikube.k8s.io/name=ha-174833 minikube.k8s.io/primary=false
	I1030 18:42:07.135527  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-174833-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1030 18:42:07.266435  400041 start.go:319] duration metric: took 22.659060991s to joinCluster
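The join itself is the pair of ssh_runner commands above: "kubeadm token create --print-join-command --ttl=0" on the existing control plane, then the returned command re-run on m03 with --control-plane, --apiserver-advertise-address and --apiserver-bind-port appended (plus the labeling and taint removal that follow). A rough sketch of that flow over ssh, with placeholder hosts and user, might look like:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runSSH is a thin helper for the sketch; host and user are placeholders.
func runSSH(host, cmd string) (string, error) {
	out, err := exec.Command("ssh", "docker@"+host, "sudo "+cmd).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const primary, newNode = "192.168.39.141", "192.168.39.238"

	// Step 1: mint a join command on an existing control plane, as in the log above.
	join, err := runSSH(primary, "kubeadm token create --print-join-command --ttl=0")
	if err != nil {
		panic(err)
	}

	// Step 2: run it on the joining node with the control-plane flags minikube adds.
	join += " --control-plane --apiserver-advertise-address=" + newNode + " --apiserver-bind-port=8443"
	if out, err := runSSH(newNode, join); err != nil {
		panic(fmt.Errorf("join failed: %v\n%s", err, out))
	}
}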
	I1030 18:42:07.266542  400041 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 18:42:07.266874  400041 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:42:07.267989  400041 out.go:177] * Verifying Kubernetes components...
	I1030 18:42:07.269832  400041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 18:42:07.538532  400041 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 18:42:07.566640  400041 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 18:42:07.566990  400041 kapi.go:59] client config for ha-174833: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.crt", KeyFile:"/home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.key", CAFile:"/home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439fe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1030 18:42:07.567153  400041 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.141:8443
	I1030 18:42:07.567517  400041 node_ready.go:35] waiting up to 6m0s for node "ha-174833-m03" to be "Ready" ...
	I1030 18:42:07.567636  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:07.567647  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:07.567658  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:07.567663  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:07.571044  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:08.067840  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:08.067866  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:08.067875  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:08.067880  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:08.071548  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:08.568423  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:08.568445  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:08.568456  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:08.568468  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:08.572275  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:09.068213  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:09.068244  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:09.068255  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:09.068261  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:09.072412  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:42:09.568601  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:09.568687  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:09.568704  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:09.568711  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:09.572953  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:42:09.573669  400041 node_ready.go:53] node "ha-174833-m03" has status "Ready":"False"
	I1030 18:42:10.068646  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:10.068674  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:10.068686  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:10.068690  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:10.072592  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:10.568186  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:10.568212  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:10.568228  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:10.568234  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:10.571345  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:11.068394  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:11.068419  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:11.068430  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:11.068435  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:11.071353  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:42:11.568540  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:11.568569  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:11.568581  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:11.568586  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:11.571615  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:12.068128  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:12.068184  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:12.068198  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:12.068204  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:12.072054  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:12.072920  400041 node_ready.go:53] node "ha-174833-m03" has status "Ready":"False"
	I1030 18:42:12.568764  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:12.568788  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:12.568799  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:12.568804  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:12.572509  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:13.067810  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:13.067840  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:13.067852  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:13.067858  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:13.072370  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:42:13.568096  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:13.568118  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:13.568127  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:13.568130  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:13.571713  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:14.068692  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:14.068715  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:14.068724  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:14.068728  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:14.072113  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:14.073045  400041 node_ready.go:53] node "ha-174833-m03" has status "Ready":"False"
	I1030 18:42:14.568414  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:14.568441  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:14.568458  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:14.568463  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:14.571979  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:15.067728  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:15.067752  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:15.067760  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:15.067764  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:15.079108  400041 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1030 18:42:15.568483  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:15.568509  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:15.568518  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:15.568523  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:15.571981  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:16.067933  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:16.067953  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:16.067962  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:16.067965  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:16.071179  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:16.568646  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:16.568671  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:16.568684  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:16.568691  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:16.571923  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:16.572720  400041 node_ready.go:53] node "ha-174833-m03" has status "Ready":"False"
	I1030 18:42:17.068520  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:17.068545  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:17.068561  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:17.068566  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:17.072118  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:17.568073  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:17.568108  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:17.568118  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:17.568123  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:17.571265  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:18.068409  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:18.068434  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:18.068442  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:18.068447  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:18.071717  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:18.568497  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:18.568527  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:18.568540  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:18.568546  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:18.571867  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:19.067827  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:19.067850  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:19.067859  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:19.067863  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:19.070951  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:19.071706  400041 node_ready.go:53] node "ha-174833-m03" has status "Ready":"False"
	I1030 18:42:19.568087  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:19.568110  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:19.568119  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:19.568122  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:19.571495  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:20.068028  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:20.068053  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:20.068064  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:20.068071  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:20.071582  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:20.568136  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:20.568161  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:20.568169  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:20.568174  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:20.571551  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:21.068612  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:21.068640  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:21.068652  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:21.068657  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:21.072026  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:21.072659  400041 node_ready.go:53] node "ha-174833-m03" has status "Ready":"False"
	I1030 18:42:21.568033  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:21.568055  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:21.568064  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:21.568069  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:21.571332  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:22.067937  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:22.067961  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:22.067970  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:22.067976  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:22.071718  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:22.568117  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:22.568139  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:22.568147  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:22.568155  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:22.571493  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:23.068511  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:23.068548  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:23.068558  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:23.068562  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:23.071664  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:23.568675  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:23.568699  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:23.568707  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:23.568711  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:23.571937  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:23.572572  400041 node_ready.go:53] node "ha-174833-m03" has status "Ready":"False"
	I1030 18:42:24.067899  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:24.067922  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:24.067931  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:24.067934  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:24.071366  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:24.568317  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:24.568342  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:24.568351  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:24.568355  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:24.571501  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:25.067773  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:25.067796  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:25.067803  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:25.067806  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:25.071344  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:25.568753  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:25.568775  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:25.568783  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:25.568787  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:25.572126  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:25.572899  400041 node_ready.go:53] node "ha-174833-m03" has status "Ready":"False"
	I1030 18:42:26.068223  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:26.068246  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.068257  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.068262  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.072464  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:42:26.073313  400041 node_ready.go:49] node "ha-174833-m03" has status "Ready":"True"
	I1030 18:42:26.073333  400041 node_ready.go:38] duration metric: took 18.505796326s for node "ha-174833-m03" to be "Ready" ...
	I1030 18:42:26.073343  400041 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
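The GET loop above simply polls /api/v1/nodes/ha-174833-m03 roughly twice a second until the node's Ready condition flips to True (about 18.5s here), and the same pattern is then applied to each system-critical pod. A minimal client-go sketch of the node half, assuming a kubeconfig path on the test host, could look like the following; minikube's own node_ready.go differs in detail:

package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node object until its Ready condition reports True,
// mirroring the node_ready loop in the log (short interval, 6m timeout).
func waitNodeReady(cs *kubernetes.Clientset, name string) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "keep polling"
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	// Kubeconfig path below is a placeholder for the test host's file.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(cs, "ha-174833-m03"); err != nil {
		panic(err)
	}
}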
	I1030 18:42:26.073412  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods
	I1030 18:42:26.073421  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.073428  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.073435  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.079519  400041 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1030 18:42:26.085610  400041 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qrkkc" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:26.085695  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-qrkkc
	I1030 18:42:26.085704  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.085711  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.085715  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.088406  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:42:26.089109  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:42:26.089127  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.089137  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.089143  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.091504  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:42:26.092047  400041 pod_ready.go:93] pod "coredns-7c65d6cfc9-qrkkc" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:26.092069  400041 pod_ready.go:82] duration metric: took 6.435195ms for pod "coredns-7c65d6cfc9-qrkkc" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:26.092082  400041 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tnj67" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:26.092150  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-tnj67
	I1030 18:42:26.092160  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.092170  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.092179  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.095058  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:42:26.095704  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:42:26.095720  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.095730  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.095735  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.098085  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:42:26.098596  400041 pod_ready.go:93] pod "coredns-7c65d6cfc9-tnj67" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:26.098614  400041 pod_ready.go:82] duration metric: took 6.524633ms for pod "coredns-7c65d6cfc9-tnj67" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:26.098625  400041 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:26.098689  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174833
	I1030 18:42:26.098701  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.098708  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.098714  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.101151  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:42:26.101737  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:42:26.101752  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.101762  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.101769  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.103823  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:42:26.104381  400041 pod_ready.go:93] pod "etcd-ha-174833" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:26.104404  400041 pod_ready.go:82] duration metric: took 5.771643ms for pod "etcd-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:26.104417  400041 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:26.104487  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174833-m02
	I1030 18:42:26.104498  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.104507  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.104515  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.106840  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:42:26.107295  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:42:26.107308  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.107318  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.107325  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.109492  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:42:26.109917  400041 pod_ready.go:93] pod "etcd-ha-174833-m02" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:26.109932  400041 pod_ready.go:82] duration metric: took 5.508285ms for pod "etcd-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:26.109947  400041 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-174833-m03" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:26.268296  400041 request.go:632] Waited for 158.281409ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174833-m03
	I1030 18:42:26.268393  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174833-m03
	I1030 18:42:26.268404  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.268413  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.268419  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.272054  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:26.469115  400041 request.go:632] Waited for 196.339916ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:26.469175  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:26.469180  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.469190  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.469198  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.472781  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:26.473415  400041 pod_ready.go:93] pod "etcd-ha-174833-m03" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:26.473441  400041 pod_ready.go:82] duration metric: took 363.484662ms for pod "etcd-ha-174833-m03" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:26.473458  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:26.668901  400041 request.go:632] Waited for 195.3359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174833
	I1030 18:42:26.669000  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174833
	I1030 18:42:26.669014  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.669026  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.669034  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.672627  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:26.868738  400041 request.go:632] Waited for 195.360312ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:42:26.868832  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:42:26.868840  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.868851  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.868860  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.872228  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:26.872778  400041 pod_ready.go:93] pod "kube-apiserver-ha-174833" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:26.872812  400041 pod_ready.go:82] duration metric: took 399.338189ms for pod "kube-apiserver-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:26.872828  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:27.068798  400041 request.go:632] Waited for 195.855457ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174833-m02
	I1030 18:42:27.068879  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174833-m02
	I1030 18:42:27.068887  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:27.068898  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:27.068909  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:27.072321  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:27.269235  400041 request.go:632] Waited for 196.216042ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:42:27.269319  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:42:27.269330  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:27.269343  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:27.269353  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:27.272769  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:27.273439  400041 pod_ready.go:93] pod "kube-apiserver-ha-174833-m02" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:27.273459  400041 pod_ready.go:82] duration metric: took 400.623063ms for pod "kube-apiserver-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:27.273469  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-174833-m03" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:27.468256  400041 request.go:632] Waited for 194.693367ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174833-m03
	I1030 18:42:27.468325  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174833-m03
	I1030 18:42:27.468330  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:27.468338  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:27.468347  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:27.471734  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:27.669102  400041 request.go:632] Waited for 196.461533ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:27.669185  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:27.669197  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:27.669208  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:27.669216  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:27.672818  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:27.673832  400041 pod_ready.go:93] pod "kube-apiserver-ha-174833-m03" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:27.673854  400041 pod_ready.go:82] duration metric: took 400.378216ms for pod "kube-apiserver-ha-174833-m03" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:27.673876  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:27.868940  400041 request.go:632] Waited for 194.958773ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174833
	I1030 18:42:27.869030  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174833
	I1030 18:42:27.869042  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:27.869053  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:27.869060  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:27.872180  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:28.068264  400041 request.go:632] Waited for 195.290526ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:42:28.068332  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:42:28.068351  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:28.068362  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:28.068370  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:28.071658  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:28.072242  400041 pod_ready.go:93] pod "kube-controller-manager-ha-174833" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:28.072265  400041 pod_ready.go:82] duration metric: took 398.381976ms for pod "kube-controller-manager-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:28.072276  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:28.268211  400041 request.go:632] Waited for 195.804533ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174833-m02
	I1030 18:42:28.268292  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174833-m02
	I1030 18:42:28.268300  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:28.268311  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:28.268318  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:28.271496  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:28.468870  400041 request.go:632] Waited for 196.361357ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:42:28.468956  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:42:28.468962  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:28.468977  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:28.468987  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:28.472341  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:28.472906  400041 pod_ready.go:93] pod "kube-controller-manager-ha-174833-m02" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:28.472925  400041 pod_ready.go:82] duration metric: took 400.642779ms for pod "kube-controller-manager-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:28.472940  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-174833-m03" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:28.669072  400041 request.go:632] Waited for 196.028852ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174833-m03
	I1030 18:42:28.669156  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174833-m03
	I1030 18:42:28.669168  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:28.669179  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:28.669191  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:28.673097  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:28.868210  400041 request.go:632] Waited for 194.307626ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:28.868287  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:28.868295  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:28.868307  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:28.868338  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:28.871679  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:28.872327  400041 pod_ready.go:93] pod "kube-controller-manager-ha-174833-m03" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:28.872352  400041 pod_ready.go:82] duration metric: took 399.404321ms for pod "kube-controller-manager-ha-174833-m03" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:28.872369  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2qt2n" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:29.068267  400041 request.go:632] Waited for 195.816492ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2qt2n
	I1030 18:42:29.068356  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2qt2n
	I1030 18:42:29.068367  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:29.068376  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:29.068388  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:29.072060  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:29.269102  400041 request.go:632] Waited for 196.354313ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:42:29.269167  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:42:29.269172  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:29.269181  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:29.269186  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:29.273078  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:29.273532  400041 pod_ready.go:93] pod "kube-proxy-2qt2n" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:29.273551  400041 pod_ready.go:82] duration metric: took 401.170636ms for pod "kube-proxy-2qt2n" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:29.273567  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g7l7z" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:29.468616  400041 request.go:632] Waited for 194.925869ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g7l7z
	I1030 18:42:29.468703  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g7l7z
	I1030 18:42:29.468712  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:29.468722  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:29.468730  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:29.472234  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:29.669266  400041 request.go:632] Waited for 196.242195ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:29.669324  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:29.669331  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:29.669341  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:29.669348  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:29.673010  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:29.674076  400041 pod_ready.go:93] pod "kube-proxy-g7l7z" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:29.674097  400041 pod_ready.go:82] duration metric: took 400.523192ms for pod "kube-proxy-g7l7z" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:29.674108  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hg2st" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:29.869286  400041 request.go:632] Waited for 195.064443ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hg2st
	I1030 18:42:29.869374  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hg2st
	I1030 18:42:29.869384  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:29.869393  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:29.869397  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:29.872765  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:30.068849  400041 request.go:632] Waited for 195.380036ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:42:30.068912  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:42:30.068917  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:30.068926  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:30.068930  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:30.073076  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:42:30.073910  400041 pod_ready.go:93] pod "kube-proxy-hg2st" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:30.073931  400041 pod_ready.go:82] duration metric: took 399.816887ms for pod "kube-proxy-hg2st" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:30.073942  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:30.269092  400041 request.go:632] Waited for 195.075688ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174833
	I1030 18:42:30.269158  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174833
	I1030 18:42:30.269163  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:30.269171  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:30.269174  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:30.272728  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:30.468827  400041 request.go:632] Waited for 195.469933ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:42:30.468924  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:42:30.468935  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:30.468944  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:30.468948  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:30.472792  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:30.473256  400041 pod_ready.go:93] pod "kube-scheduler-ha-174833" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:30.473274  400041 pod_ready.go:82] duration metric: took 399.325616ms for pod "kube-scheduler-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:30.473285  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:30.668281  400041 request.go:632] Waited for 194.899722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174833-m02
	I1030 18:42:30.668360  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174833-m02
	I1030 18:42:30.668369  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:30.668378  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:30.668386  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:30.672074  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:30.869270  400041 request.go:632] Waited for 196.355231ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:42:30.869340  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:42:30.869345  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:30.869354  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:30.869361  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:30.873235  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:30.873666  400041 pod_ready.go:93] pod "kube-scheduler-ha-174833-m02" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:30.873686  400041 pod_ready.go:82] duration metric: took 400.39483ms for pod "kube-scheduler-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:30.873697  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-174833-m03" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:31.068802  400041 request.go:632] Waited for 195.002943ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174833-m03
	I1030 18:42:31.068869  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174833-m03
	I1030 18:42:31.068875  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:31.068884  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:31.068901  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:31.072579  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:31.268662  400041 request.go:632] Waited for 195.353177ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:31.268730  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:31.268736  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:31.268743  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:31.268749  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:31.272045  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:31.272702  400041 pod_ready.go:93] pod "kube-scheduler-ha-174833-m03" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:31.272721  400041 pod_ready.go:82] duration metric: took 399.01745ms for pod "kube-scheduler-ha-174833-m03" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:31.272733  400041 pod_ready.go:39] duration metric: took 5.199380679s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 18:42:31.272749  400041 api_server.go:52] waiting for apiserver process to appear ...
	I1030 18:42:31.272802  400041 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 18:42:31.290132  400041 api_server.go:72] duration metric: took 24.023548522s to wait for apiserver process to appear ...
	I1030 18:42:31.290159  400041 api_server.go:88] waiting for apiserver healthz status ...
	I1030 18:42:31.290180  400041 api_server.go:253] Checking apiserver healthz at https://192.168.39.141:8443/healthz ...
	I1030 18:42:31.295173  400041 api_server.go:279] https://192.168.39.141:8443/healthz returned 200:
	ok
	I1030 18:42:31.295236  400041 round_trippers.go:463] GET https://192.168.39.141:8443/version
	I1030 18:42:31.295244  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:31.295252  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:31.295257  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:31.296242  400041 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1030 18:42:31.296313  400041 api_server.go:141] control plane version: v1.31.2
	I1030 18:42:31.296329  400041 api_server.go:131] duration metric: took 6.164986ms to wait for apiserver health ...
	I1030 18:42:31.296336  400041 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 18:42:31.468748  400041 request.go:632] Waited for 172.312716ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods
	I1030 18:42:31.468810  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods
	I1030 18:42:31.468815  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:31.468822  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:31.468826  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:31.475257  400041 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1030 18:42:31.481661  400041 system_pods.go:59] 24 kube-system pods found
	I1030 18:42:31.481688  400041 system_pods.go:61] "coredns-7c65d6cfc9-qrkkc" [3470734c-61ab-4cd9-a026-f07d5ca6a290] Running
	I1030 18:42:31.481693  400041 system_pods.go:61] "coredns-7c65d6cfc9-tnj67" [f869042d-e37b-414d-ab11-422e933e2952] Running
	I1030 18:42:31.481699  400041 system_pods.go:61] "etcd-ha-174833" [af22e3f8-298d-4ab8-ac7d-cf6b915858ce] Running
	I1030 18:42:31.481705  400041 system_pods.go:61] "etcd-ha-174833-m02" [d5fdc5b9-47c3-4308-8b03-e01254a915cd] Running
	I1030 18:42:31.481710  400041 system_pods.go:61] "etcd-ha-174833-m03" [0978241d-3129-4232-a77d-1c363be4759d] Running
	I1030 18:42:31.481715  400041 system_pods.go:61] "kindnet-b76pd" [5267869f-1e5c-414a-9adc-8cdb3a645222] Running
	I1030 18:42:31.481720  400041 system_pods.go:61] "kindnet-pm48g" [7c2c95da-7659-43c4-aae2-3af6fbe9515c] Running
	I1030 18:42:31.481728  400041 system_pods.go:61] "kindnet-rlzbn" [74a207e7-cdd5-4c43-a668-9a6445d0bacf] Running
	I1030 18:42:31.481733  400041 system_pods.go:61] "kube-apiserver-ha-174833" [e1f4e473-d722-4cbc-a8ae-e80612823895] Running
	I1030 18:42:31.481740  400041 system_pods.go:61] "kube-apiserver-ha-174833-m02" [d2f5f227-a891-4678-b8e8-a51051bc80ac] Running
	I1030 18:42:31.481749  400041 system_pods.go:61] "kube-apiserver-ha-174833-m03" [a0d096d1-760f-48eb-b8b4-a8d2bfa98602] Running
	I1030 18:42:31.481754  400041 system_pods.go:61] "kube-controller-manager-ha-174833" [b95baef4-2d97-4714-903f-8a63c8f1f647] Running
	I1030 18:42:31.481762  400041 system_pods.go:61] "kube-controller-manager-ha-174833-m02" [5293b489-8a8c-4475-b549-d10e1a631aa6] Running
	I1030 18:42:31.481768  400041 system_pods.go:61] "kube-controller-manager-ha-174833-m03" [9427a2c0-a305-49d6-8bfd-3829dfc7c9e9] Running
	I1030 18:42:31.481776  400041 system_pods.go:61] "kube-proxy-2qt2n" [fcc90b13-926f-4a7e-aa12-3e274c0777f6] Running
	I1030 18:42:31.481781  400041 system_pods.go:61] "kube-proxy-g7l7z" [7779db09-05ba-46f2-9e0e-09f6bcd57537] Running
	I1030 18:42:31.481789  400041 system_pods.go:61] "kube-proxy-hg2st" [99ac908a-6d13-4471-a54b-bede7bde77b7] Running
	I1030 18:42:31.481794  400041 system_pods.go:61] "kube-scheduler-ha-174833" [d0d4afaa-39ee-4b4d-afdb-3a41effecd4c] Running
	I1030 18:42:31.481802  400041 system_pods.go:61] "kube-scheduler-ha-174833-m02" [f0cd9106-8c0d-4c16-a83f-3d5e84d7d813] Running
	I1030 18:42:31.481807  400041 system_pods.go:61] "kube-scheduler-ha-174833-m03" [9e19320f-40e5-4953-a8f9-21e8251bddbf] Running
	I1030 18:42:31.481814  400041 system_pods.go:61] "kube-vip-ha-174833" [c4903552-8a15-4dc2-ba98-c3d10b8f3288] Running
	I1030 18:42:31.481819  400041 system_pods.go:61] "kube-vip-ha-174833-m02" [9d43181d-55d3-482c-bd30-df3059f68edd] Running
	I1030 18:42:31.481826  400041 system_pods.go:61] "kube-vip-ha-174833-m03" [c016e01f-6a40-419e-acf5-a5e595c117cd] Running
	I1030 18:42:31.481832  400041 system_pods.go:61] "storage-provisioner" [8e6d1d5e-1944-4c77-8018-42bc526984c2] Running
	I1030 18:42:31.481843  400041 system_pods.go:74] duration metric: took 185.498428ms to wait for pod list to return data ...
	I1030 18:42:31.481856  400041 default_sa.go:34] waiting for default service account to be created ...
	I1030 18:42:31.668606  400041 request.go:632] Waited for 186.6491ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/default/serviceaccounts
	I1030 18:42:31.668666  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/default/serviceaccounts
	I1030 18:42:31.668671  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:31.668679  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:31.668682  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:31.672056  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:31.672194  400041 default_sa.go:45] found service account: "default"
	I1030 18:42:31.672209  400041 default_sa.go:55] duration metric: took 190.344386ms for default service account to be created ...
	I1030 18:42:31.672218  400041 system_pods.go:116] waiting for k8s-apps to be running ...
	I1030 18:42:31.868735  400041 request.go:632] Waited for 196.405115ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods
	I1030 18:42:31.868808  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods
	I1030 18:42:31.868814  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:31.868822  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:31.868830  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:31.874347  400041 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1030 18:42:31.881436  400041 system_pods.go:86] 24 kube-system pods found
	I1030 18:42:31.881470  400041 system_pods.go:89] "coredns-7c65d6cfc9-qrkkc" [3470734c-61ab-4cd9-a026-f07d5ca6a290] Running
	I1030 18:42:31.881477  400041 system_pods.go:89] "coredns-7c65d6cfc9-tnj67" [f869042d-e37b-414d-ab11-422e933e2952] Running
	I1030 18:42:31.881483  400041 system_pods.go:89] "etcd-ha-174833" [af22e3f8-298d-4ab8-ac7d-cf6b915858ce] Running
	I1030 18:42:31.881487  400041 system_pods.go:89] "etcd-ha-174833-m02" [d5fdc5b9-47c3-4308-8b03-e01254a915cd] Running
	I1030 18:42:31.881490  400041 system_pods.go:89] "etcd-ha-174833-m03" [0978241d-3129-4232-a77d-1c363be4759d] Running
	I1030 18:42:31.881496  400041 system_pods.go:89] "kindnet-b76pd" [5267869f-1e5c-414a-9adc-8cdb3a645222] Running
	I1030 18:42:31.881501  400041 system_pods.go:89] "kindnet-pm48g" [7c2c95da-7659-43c4-aae2-3af6fbe9515c] Running
	I1030 18:42:31.881507  400041 system_pods.go:89] "kindnet-rlzbn" [74a207e7-cdd5-4c43-a668-9a6445d0bacf] Running
	I1030 18:42:31.881516  400041 system_pods.go:89] "kube-apiserver-ha-174833" [e1f4e473-d722-4cbc-a8ae-e80612823895] Running
	I1030 18:42:31.881521  400041 system_pods.go:89] "kube-apiserver-ha-174833-m02" [d2f5f227-a891-4678-b8e8-a51051bc80ac] Running
	I1030 18:42:31.881529  400041 system_pods.go:89] "kube-apiserver-ha-174833-m03" [a0d096d1-760f-48eb-b8b4-a8d2bfa98602] Running
	I1030 18:42:31.881538  400041 system_pods.go:89] "kube-controller-manager-ha-174833" [b95baef4-2d97-4714-903f-8a63c8f1f647] Running
	I1030 18:42:31.881547  400041 system_pods.go:89] "kube-controller-manager-ha-174833-m02" [5293b489-8a8c-4475-b549-d10e1a631aa6] Running
	I1030 18:42:31.881551  400041 system_pods.go:89] "kube-controller-manager-ha-174833-m03" [9427a2c0-a305-49d6-8bfd-3829dfc7c9e9] Running
	I1030 18:42:31.881555  400041 system_pods.go:89] "kube-proxy-2qt2n" [fcc90b13-926f-4a7e-aa12-3e274c0777f6] Running
	I1030 18:42:31.881559  400041 system_pods.go:89] "kube-proxy-g7l7z" [7779db09-05ba-46f2-9e0e-09f6bcd57537] Running
	I1030 18:42:31.881563  400041 system_pods.go:89] "kube-proxy-hg2st" [99ac908a-6d13-4471-a54b-bede7bde77b7] Running
	I1030 18:42:31.881568  400041 system_pods.go:89] "kube-scheduler-ha-174833" [d0d4afaa-39ee-4b4d-afdb-3a41effecd4c] Running
	I1030 18:42:31.881574  400041 system_pods.go:89] "kube-scheduler-ha-174833-m02" [f0cd9106-8c0d-4c16-a83f-3d5e84d7d813] Running
	I1030 18:42:31.881580  400041 system_pods.go:89] "kube-scheduler-ha-174833-m03" [9e19320f-40e5-4953-a8f9-21e8251bddbf] Running
	I1030 18:42:31.881585  400041 system_pods.go:89] "kube-vip-ha-174833" [c4903552-8a15-4dc2-ba98-c3d10b8f3288] Running
	I1030 18:42:31.881589  400041 system_pods.go:89] "kube-vip-ha-174833-m02" [9d43181d-55d3-482c-bd30-df3059f68edd] Running
	I1030 18:42:31.881595  400041 system_pods.go:89] "kube-vip-ha-174833-m03" [c016e01f-6a40-419e-acf5-a5e595c117cd] Running
	I1030 18:42:31.881600  400041 system_pods.go:89] "storage-provisioner" [8e6d1d5e-1944-4c77-8018-42bc526984c2] Running
	I1030 18:42:31.881612  400041 system_pods.go:126] duration metric: took 209.387873ms to wait for k8s-apps to be running ...
	I1030 18:42:31.881626  400041 system_svc.go:44] waiting for kubelet service to be running ....
	I1030 18:42:31.881679  400041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 18:42:31.897108  400041 system_svc.go:56] duration metric: took 15.46981ms WaitForService to wait for kubelet
	I1030 18:42:31.897150  400041 kubeadm.go:582] duration metric: took 24.630565695s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 18:42:31.897179  400041 node_conditions.go:102] verifying NodePressure condition ...
	I1030 18:42:32.068632  400041 request.go:632] Waited for 171.354733ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes
	I1030 18:42:32.068703  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes
	I1030 18:42:32.068708  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:32.068716  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:32.068721  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:32.073422  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:42:32.074348  400041 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 18:42:32.074387  400041 node_conditions.go:123] node cpu capacity is 2
	I1030 18:42:32.074400  400041 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 18:42:32.074404  400041 node_conditions.go:123] node cpu capacity is 2
	I1030 18:42:32.074408  400041 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 18:42:32.074412  400041 node_conditions.go:123] node cpu capacity is 2
	I1030 18:42:32.074421  400041 node_conditions.go:105] duration metric: took 177.235852ms to run NodePressure ...
	I1030 18:42:32.074439  400041 start.go:241] waiting for startup goroutines ...
	I1030 18:42:32.074466  400041 start.go:255] writing updated cluster config ...
	I1030 18:42:32.074805  400041 ssh_runner.go:195] Run: rm -f paused
	I1030 18:42:32.127386  400041 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1030 18:42:32.129289  400041 out.go:177] * Done! kubectl is now configured to use "ha-174833" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 30 18:46:22 ha-174833 crio[664]: time="2024-10-30 18:46:22.474838018Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313982474816083,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9a13a9e3-7f4b-44da-b9cb-6719e8a231f1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 18:46:22 ha-174833 crio[664]: time="2024-10-30 18:46:22.475524927Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=752a6136-ffb7-4eed-a17c-fb1369324e82 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:46:22 ha-174833 crio[664]: time="2024-10-30 18:46:22.475594362Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=752a6136-ffb7-4eed-a17c-fb1369324e82 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:46:22 ha-174833 crio[664]: time="2024-10-30 18:46:22.475876725Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b50f8293a0eac5d428cb2cdd59140c816100bc3a83890343a8ecc1a0e0699009,PodSandboxId:4b32508187feda9fb89e9db472edcc9269380d89c7fabdce7ea42c6bd040cd72,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730313615217360825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tnj67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f869042d-e37b-414d-ab11-422e933e2952,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6694cd6bc9e3c41a78c9a5de80c4b95e2545fc061eac062221b2fe3ef4aadb3,PodSandboxId:e4daca50f6e1cba7757e2725bf95ff4bf83227ca7fa99966ead6cd77711dcace,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730313615138289233,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 8e6d1d5e-1944-4c77-8018-42bc526984c2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80919506252b4e3df3db12ac62352ffcfc65a784996bbef7adcefc782480a86f,PodSandboxId:80f0d2bac7bdbf4e7081698c44c0100f1844741b0105b2c4afc4373e50786bd2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730313615087765259,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qrkkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3470734c-6
1ab-4cd9-a026-f07d5ca6a290,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46301d1401a148dee81e2e167582dcebc8e9ce533e3150308e7eb51799e4f1ef,PodSandboxId:4a4a82673e78f30d1d9716c24cfa484d7ad4d27f53c2f74c27f2b30910b16f1f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:C
ONTAINER_RUNNING,CreatedAt:1730313603325015951,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pm48g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c2c95da-7659-43c4-aae2-3af6fbe9515c,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:634060e657ba25894ea35d4e8b8429b57de139396a52f53af88876e683de5740,PodSandboxId:5d414abeb9a8ee9776eede410c7e6863d30e3e79eead9ef5eaf48e3f823ff1eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:173031359
7433571078,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2qt2n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc90b13-926f-4a7e-aa12-3e274c0777f6,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8b9126272c4605fcadd37917b2874bb833efc9c16584c83ac4d1307f59c62a,PodSandboxId:635aa65f78ff8602bbbd26a8ca9fa00e6475baebf133de26e2e88e93e579621b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:17303135896
66156759,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aea5be4e46ad37a31e0d88b2c9c0158c,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f0fb508f1f86d4bc039adab40689367f34f5141a90043b092e43663b1f959a6,PodSandboxId:2a80897d4d6989c628408425633aea48eaf1f123b026cb7a29325fbfb4eb2bba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730313585751314744,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca112c63e30eb433bb700ecb5b5894a5,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:381be95e92ca6d096c838a11caffeef70b5bdf29633660086db3a591af3efa3c,PodSandboxId:aa574b692710d467b2e8775397cfa8db07704e00fb33c11512bf39c82e680973,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730313585689189484,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 229cc2e30ddb1f711193989fd13f4f67,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db863ebdc17e0d81937b5738a5fe433285fd91846fa54c761cf0cd4cd4987b73,PodSandboxId:bc13396acc704797f09483572e0822fe63133214ccd6fcb11477cb808dbc282f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730313585724076197,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72bc339147b21477e295945da9cb0b0e,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:661ed7108dbf576219f0d0850f1ff6e5884d78718a480882365b8f283ab57acb,PodSandboxId:a4e686c5a4e0593c382829bac8d2451984a6e41381d31fb1620855986434050a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730313585675039957,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66c6d80d0e594b993bec02307d522c1a,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=752a6136-ffb7-4eed-a17c-fb1369324e82 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:46:22 ha-174833 crio[664]: time="2024-10-30 18:46:22.513571312Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5575c0f0-0850-49e2-9649-31bd9f262916 name=/runtime.v1.RuntimeService/Version
	Oct 30 18:46:22 ha-174833 crio[664]: time="2024-10-30 18:46:22.513640165Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5575c0f0-0850-49e2-9649-31bd9f262916 name=/runtime.v1.RuntimeService/Version
	Oct 30 18:46:22 ha-174833 crio[664]: time="2024-10-30 18:46:22.514471091Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8a01e53a-d3e9-4b0a-89d7-89592e8b94ab name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 18:46:22 ha-174833 crio[664]: time="2024-10-30 18:46:22.514867345Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313982514849850,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8a01e53a-d3e9-4b0a-89d7-89592e8b94ab name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 18:46:22 ha-174833 crio[664]: time="2024-10-30 18:46:22.515472041Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5070a315-fff8-444e-8e01-7f94ac34eb31 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:46:22 ha-174833 crio[664]: time="2024-10-30 18:46:22.515525215Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5070a315-fff8-444e-8e01-7f94ac34eb31 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:46:22 ha-174833 crio[664]: time="2024-10-30 18:46:22.515751287Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b50f8293a0eac5d428cb2cdd59140c816100bc3a83890343a8ecc1a0e0699009,PodSandboxId:4b32508187feda9fb89e9db472edcc9269380d89c7fabdce7ea42c6bd040cd72,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730313615217360825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tnj67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f869042d-e37b-414d-ab11-422e933e2952,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6694cd6bc9e3c41a78c9a5de80c4b95e2545fc061eac062221b2fe3ef4aadb3,PodSandboxId:e4daca50f6e1cba7757e2725bf95ff4bf83227ca7fa99966ead6cd77711dcace,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730313615138289233,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 8e6d1d5e-1944-4c77-8018-42bc526984c2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80919506252b4e3df3db12ac62352ffcfc65a784996bbef7adcefc782480a86f,PodSandboxId:80f0d2bac7bdbf4e7081698c44c0100f1844741b0105b2c4afc4373e50786bd2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730313615087765259,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qrkkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3470734c-6
1ab-4cd9-a026-f07d5ca6a290,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46301d1401a148dee81e2e167582dcebc8e9ce533e3150308e7eb51799e4f1ef,PodSandboxId:4a4a82673e78f30d1d9716c24cfa484d7ad4d27f53c2f74c27f2b30910b16f1f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:C
ONTAINER_RUNNING,CreatedAt:1730313603325015951,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pm48g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c2c95da-7659-43c4-aae2-3af6fbe9515c,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:634060e657ba25894ea35d4e8b8429b57de139396a52f53af88876e683de5740,PodSandboxId:5d414abeb9a8ee9776eede410c7e6863d30e3e79eead9ef5eaf48e3f823ff1eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:173031359
7433571078,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2qt2n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc90b13-926f-4a7e-aa12-3e274c0777f6,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8b9126272c4605fcadd37917b2874bb833efc9c16584c83ac4d1307f59c62a,PodSandboxId:635aa65f78ff8602bbbd26a8ca9fa00e6475baebf133de26e2e88e93e579621b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:17303135896
66156759,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aea5be4e46ad37a31e0d88b2c9c0158c,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f0fb508f1f86d4bc039adab40689367f34f5141a90043b092e43663b1f959a6,PodSandboxId:2a80897d4d6989c628408425633aea48eaf1f123b026cb7a29325fbfb4eb2bba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730313585751314744,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca112c63e30eb433bb700ecb5b5894a5,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:381be95e92ca6d096c838a11caffeef70b5bdf29633660086db3a591af3efa3c,PodSandboxId:aa574b692710d467b2e8775397cfa8db07704e00fb33c11512bf39c82e680973,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730313585689189484,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 229cc2e30ddb1f711193989fd13f4f67,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db863ebdc17e0d81937b5738a5fe433285fd91846fa54c761cf0cd4cd4987b73,PodSandboxId:bc13396acc704797f09483572e0822fe63133214ccd6fcb11477cb808dbc282f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730313585724076197,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72bc339147b21477e295945da9cb0b0e,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:661ed7108dbf576219f0d0850f1ff6e5884d78718a480882365b8f283ab57acb,PodSandboxId:a4e686c5a4e0593c382829bac8d2451984a6e41381d31fb1620855986434050a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730313585675039957,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66c6d80d0e594b993bec02307d522c1a,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5070a315-fff8-444e-8e01-7f94ac34eb31 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:46:22 ha-174833 crio[664]: time="2024-10-30 18:46:22.554336218Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d5cbb446-6584-4e6e-99ea-53bfe280aa2b name=/runtime.v1.RuntimeService/Version
	Oct 30 18:46:22 ha-174833 crio[664]: time="2024-10-30 18:46:22.554447182Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d5cbb446-6584-4e6e-99ea-53bfe280aa2b name=/runtime.v1.RuntimeService/Version
	Oct 30 18:46:22 ha-174833 crio[664]: time="2024-10-30 18:46:22.555582303Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=637e67bc-1a92-4f57-bb38-18fa8f18ae18 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 18:46:22 ha-174833 crio[664]: time="2024-10-30 18:46:22.555997772Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313982555976277,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=637e67bc-1a92-4f57-bb38-18fa8f18ae18 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 18:46:22 ha-174833 crio[664]: time="2024-10-30 18:46:22.556579949Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d0392dc-07e4-4b38-864b-14ec25e7f63a name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:46:22 ha-174833 crio[664]: time="2024-10-30 18:46:22.556633004Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d0392dc-07e4-4b38-864b-14ec25e7f63a name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:46:22 ha-174833 crio[664]: time="2024-10-30 18:46:22.556836343Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b50f8293a0eac5d428cb2cdd59140c816100bc3a83890343a8ecc1a0e0699009,PodSandboxId:4b32508187feda9fb89e9db472edcc9269380d89c7fabdce7ea42c6bd040cd72,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730313615217360825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tnj67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f869042d-e37b-414d-ab11-422e933e2952,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6694cd6bc9e3c41a78c9a5de80c4b95e2545fc061eac062221b2fe3ef4aadb3,PodSandboxId:e4daca50f6e1cba7757e2725bf95ff4bf83227ca7fa99966ead6cd77711dcace,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730313615138289233,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 8e6d1d5e-1944-4c77-8018-42bc526984c2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80919506252b4e3df3db12ac62352ffcfc65a784996bbef7adcefc782480a86f,PodSandboxId:80f0d2bac7bdbf4e7081698c44c0100f1844741b0105b2c4afc4373e50786bd2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730313615087765259,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qrkkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3470734c-6
1ab-4cd9-a026-f07d5ca6a290,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46301d1401a148dee81e2e167582dcebc8e9ce533e3150308e7eb51799e4f1ef,PodSandboxId:4a4a82673e78f30d1d9716c24cfa484d7ad4d27f53c2f74c27f2b30910b16f1f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:C
ONTAINER_RUNNING,CreatedAt:1730313603325015951,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pm48g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c2c95da-7659-43c4-aae2-3af6fbe9515c,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:634060e657ba25894ea35d4e8b8429b57de139396a52f53af88876e683de5740,PodSandboxId:5d414abeb9a8ee9776eede410c7e6863d30e3e79eead9ef5eaf48e3f823ff1eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:173031359
7433571078,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2qt2n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc90b13-926f-4a7e-aa12-3e274c0777f6,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8b9126272c4605fcadd37917b2874bb833efc9c16584c83ac4d1307f59c62a,PodSandboxId:635aa65f78ff8602bbbd26a8ca9fa00e6475baebf133de26e2e88e93e579621b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:17303135896
66156759,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aea5be4e46ad37a31e0d88b2c9c0158c,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f0fb508f1f86d4bc039adab40689367f34f5141a90043b092e43663b1f959a6,PodSandboxId:2a80897d4d6989c628408425633aea48eaf1f123b026cb7a29325fbfb4eb2bba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730313585751314744,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca112c63e30eb433bb700ecb5b5894a5,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:381be95e92ca6d096c838a11caffeef70b5bdf29633660086db3a591af3efa3c,PodSandboxId:aa574b692710d467b2e8775397cfa8db07704e00fb33c11512bf39c82e680973,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730313585689189484,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 229cc2e30ddb1f711193989fd13f4f67,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db863ebdc17e0d81937b5738a5fe433285fd91846fa54c761cf0cd4cd4987b73,PodSandboxId:bc13396acc704797f09483572e0822fe63133214ccd6fcb11477cb808dbc282f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730313585724076197,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72bc339147b21477e295945da9cb0b0e,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:661ed7108dbf576219f0d0850f1ff6e5884d78718a480882365b8f283ab57acb,PodSandboxId:a4e686c5a4e0593c382829bac8d2451984a6e41381d31fb1620855986434050a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730313585675039957,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66c6d80d0e594b993bec02307d522c1a,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5d0392dc-07e4-4b38-864b-14ec25e7f63a name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:46:22 ha-174833 crio[664]: time="2024-10-30 18:46:22.595138379Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=36455e8e-d774-4161-bd32-972f0b2bf4cd name=/runtime.v1.RuntimeService/Version
	Oct 30 18:46:22 ha-174833 crio[664]: time="2024-10-30 18:46:22.595254111Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=36455e8e-d774-4161-bd32-972f0b2bf4cd name=/runtime.v1.RuntimeService/Version
	Oct 30 18:46:22 ha-174833 crio[664]: time="2024-10-30 18:46:22.596399368Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dcb65842-52e6-48b1-be6a-08a132df61d2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 18:46:22 ha-174833 crio[664]: time="2024-10-30 18:46:22.596807508Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313982596785268,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dcb65842-52e6-48b1-be6a-08a132df61d2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 18:46:22 ha-174833 crio[664]: time="2024-10-30 18:46:22.597377741Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b4e86f9f-b4f4-4ed7-a3c1-32ff711f09da name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:46:22 ha-174833 crio[664]: time="2024-10-30 18:46:22.597440245Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b4e86f9f-b4f4-4ed7-a3c1-32ff711f09da name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:46:22 ha-174833 crio[664]: time="2024-10-30 18:46:22.597649926Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b50f8293a0eac5d428cb2cdd59140c816100bc3a83890343a8ecc1a0e0699009,PodSandboxId:4b32508187feda9fb89e9db472edcc9269380d89c7fabdce7ea42c6bd040cd72,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730313615217360825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tnj67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f869042d-e37b-414d-ab11-422e933e2952,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6694cd6bc9e3c41a78c9a5de80c4b95e2545fc061eac062221b2fe3ef4aadb3,PodSandboxId:e4daca50f6e1cba7757e2725bf95ff4bf83227ca7fa99966ead6cd77711dcace,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730313615138289233,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 8e6d1d5e-1944-4c77-8018-42bc526984c2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80919506252b4e3df3db12ac62352ffcfc65a784996bbef7adcefc782480a86f,PodSandboxId:80f0d2bac7bdbf4e7081698c44c0100f1844741b0105b2c4afc4373e50786bd2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730313615087765259,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qrkkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3470734c-6
1ab-4cd9-a026-f07d5ca6a290,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46301d1401a148dee81e2e167582dcebc8e9ce533e3150308e7eb51799e4f1ef,PodSandboxId:4a4a82673e78f30d1d9716c24cfa484d7ad4d27f53c2f74c27f2b30910b16f1f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:C
ONTAINER_RUNNING,CreatedAt:1730313603325015951,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pm48g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c2c95da-7659-43c4-aae2-3af6fbe9515c,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:634060e657ba25894ea35d4e8b8429b57de139396a52f53af88876e683de5740,PodSandboxId:5d414abeb9a8ee9776eede410c7e6863d30e3e79eead9ef5eaf48e3f823ff1eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:173031359
7433571078,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2qt2n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc90b13-926f-4a7e-aa12-3e274c0777f6,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8b9126272c4605fcadd37917b2874bb833efc9c16584c83ac4d1307f59c62a,PodSandboxId:635aa65f78ff8602bbbd26a8ca9fa00e6475baebf133de26e2e88e93e579621b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:17303135896
66156759,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aea5be4e46ad37a31e0d88b2c9c0158c,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f0fb508f1f86d4bc039adab40689367f34f5141a90043b092e43663b1f959a6,PodSandboxId:2a80897d4d6989c628408425633aea48eaf1f123b026cb7a29325fbfb4eb2bba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730313585751314744,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca112c63e30eb433bb700ecb5b5894a5,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:381be95e92ca6d096c838a11caffeef70b5bdf29633660086db3a591af3efa3c,PodSandboxId:aa574b692710d467b2e8775397cfa8db07704e00fb33c11512bf39c82e680973,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730313585689189484,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 229cc2e30ddb1f711193989fd13f4f67,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db863ebdc17e0d81937b5738a5fe433285fd91846fa54c761cf0cd4cd4987b73,PodSandboxId:bc13396acc704797f09483572e0822fe63133214ccd6fcb11477cb808dbc282f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730313585724076197,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72bc339147b21477e295945da9cb0b0e,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:661ed7108dbf576219f0d0850f1ff6e5884d78718a480882365b8f283ab57acb,PodSandboxId:a4e686c5a4e0593c382829bac8d2451984a6e41381d31fb1620855986434050a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730313585675039957,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66c6d80d0e594b993bec02307d522c1a,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b4e86f9f-b4f4-4ed7-a3c1-32ff711f09da name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b50f8293a0eac       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                     6 minutes ago       Running             coredns                   0                   4b32508187fed       coredns-7c65d6cfc9-tnj67
	b6694cd6bc9e3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                     6 minutes ago       Running             storage-provisioner       0                   e4daca50f6e1c       storage-provisioner
	80919506252b4       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                     6 minutes ago       Running             coredns                   0                   80f0d2bac7bdb       coredns-7c65d6cfc9-qrkkc
	46301d1401a14       docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16   6 minutes ago       Running             kindnet-cni               0                   4a4a82673e78f       kindnet-pm48g
	634060e657ba2       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                     6 minutes ago       Running             kube-proxy                0                   5d414abeb9a8e       kube-proxy-2qt2n
	da8b9126272c4       ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215    6 minutes ago       Running             kube-vip                  0                   635aa65f78ff8       kube-vip-ha-174833
	6f0fb508f1f86       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                     6 minutes ago       Running             kube-scheduler            0                   2a80897d4d698       kube-scheduler-ha-174833
	db863ebdc17e0       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                     6 minutes ago       Running             kube-controller-manager   0                   bc13396acc704       kube-controller-manager-ha-174833
	381be95e92ca6       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                     6 minutes ago       Running             etcd                      0                   aa574b692710d       etcd-ha-174833
	661ed7108dbf5       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                     6 minutes ago       Running             kube-apiserver            0                   a4e686c5a4e05       kube-apiserver-ha-174833
	
	
	==> coredns [80919506252b4e3df3db12ac62352ffcfc65a784996bbef7adcefc782480a86f] <==
	[INFO] 10.244.2.2:49872 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000260615s
	[INFO] 10.244.2.2:45985 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000215389s
	[INFO] 10.244.1.3:58699 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184263s
	[INFO] 10.244.1.3:36745 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000223993s
	[INFO] 10.244.1.3:52696 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000197445s
	[INFO] 10.244.1.3:51136 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.008496656s
	[INFO] 10.244.1.3:37326 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000170193s
	[INFO] 10.244.2.2:41356 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001504514s
	[INFO] 10.244.2.2:58448 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000121598s
	[INFO] 10.244.2.2:57683 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115706s
	[INFO] 10.244.1.2:44356 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001773314s
	[INFO] 10.244.1.2:53338 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000092182s
	[INFO] 10.244.1.2:36505 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000123936s
	[INFO] 10.244.1.2:50770 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000129391s
	[INFO] 10.244.1.3:45376 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119608s
	[INFO] 10.244.1.3:38056 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104793s
	[INFO] 10.244.2.2:56050 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.001014289s
	[INFO] 10.244.2.2:46354 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000094957s
	[INFO] 10.244.1.2:43247 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140652s
	[INFO] 10.244.1.3:59260 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000286102s
	[INFO] 10.244.1.3:42613 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000177355s
	[INFO] 10.244.2.2:38778 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139553s
	[INFO] 10.244.2.2:55445 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000162449s
	[INFO] 10.244.1.2:49123 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000103971s
	[INFO] 10.244.1.2:36025 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000103655s
	
	
	==> coredns [b50f8293a0eac5d428cb2cdd59140c816100bc3a83890343a8ecc1a0e0699009] <==
	[INFO] 10.244.1.3:35936 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.006730126s
	[INFO] 10.244.1.3:52049 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000164529s
	[INFO] 10.244.1.3:41429 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000145894s
	[INFO] 10.244.2.2:38865 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015631s
	[INFO] 10.244.2.2:35468 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001359248s
	[INFO] 10.244.2.2:39539 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154504s
	[INFO] 10.244.2.2:40996 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00012336s
	[INFO] 10.244.2.2:36394 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103847s
	[INFO] 10.244.1.2:36748 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157155s
	[INFO] 10.244.1.2:57168 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000183772s
	[INFO] 10.244.1.2:44765 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001208743s
	[INFO] 10.244.1.2:51648 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000094986s
	[INFO] 10.244.1.3:35468 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117052s
	[INFO] 10.244.1.3:41666 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093918s
	[INFO] 10.244.2.2:40566 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000179128s
	[INFO] 10.244.2.2:35306 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086624s
	[INFO] 10.244.1.2:54037 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000136664s
	[INFO] 10.244.1.2:39370 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000109182s
	[INFO] 10.244.1.2:41814 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123818s
	[INFO] 10.244.1.3:44728 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000170139s
	[INFO] 10.244.1.3:56805 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000142203s
	[INFO] 10.244.2.2:36863 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000187523s
	[INFO] 10.244.2.2:41661 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000120093s
	[INFO] 10.244.1.2:52634 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137066s
	[INFO] 10.244.1.2:35418 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000120994s
	
	
	==> describe nodes <==
	Name:               ha-174833
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-174833
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0
	                    minikube.k8s.io/name=ha-174833
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_30T18_39_53_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Oct 2024 18:39:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-174833
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Oct 2024 18:46:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Oct 2024 18:45:28 +0000   Wed, 30 Oct 2024 18:39:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Oct 2024 18:45:28 +0000   Wed, 30 Oct 2024 18:39:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Oct 2024 18:45:28 +0000   Wed, 30 Oct 2024 18:39:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Oct 2024 18:45:28 +0000   Wed, 30 Oct 2024 18:40:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.141
	  Hostname:    ha-174833
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7ccc5c9f42c54438b6652723644bbeef
	  System UUID:                7ccc5c9f-42c5-4438-b665-2723644bbeef
	  Boot ID:                    83dbe7e6-9d54-44c7-aa42-e17dc8d9a1a0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-qrkkc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m26s
	  kube-system                 coredns-7c65d6cfc9-tnj67             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m26s
	  kube-system                 etcd-ha-174833                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m30s
	  kube-system                 kindnet-pm48g                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m26s
	  kube-system                 kube-apiserver-ha-174833             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-controller-manager-ha-174833    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-proxy-2qt2n                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-scheduler-ha-174833             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-vip-ha-174833                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m25s                  kube-proxy       
	  Normal  NodeHasSufficientPID     6m37s (x7 over 6m37s)  kubelet          Node ha-174833 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m37s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m37s (x8 over 6m37s)  kubelet          Node ha-174833 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m37s (x8 over 6m37s)  kubelet          Node ha-174833 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m31s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m30s                  kubelet          Node ha-174833 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m30s                  kubelet          Node ha-174833 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m30s                  kubelet          Node ha-174833 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m27s                  node-controller  Node ha-174833 event: Registered Node ha-174833 in Controller
	  Normal  NodeReady                6m8s                   kubelet          Node ha-174833 status is now: NodeReady
	  Normal  RegisteredNode           5m30s                  node-controller  Node ha-174833 event: Registered Node ha-174833 in Controller
	  Normal  RegisteredNode           4m10s                  node-controller  Node ha-174833 event: Registered Node ha-174833 in Controller
	
	
	Name:               ha-174833-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-174833-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0
	                    minikube.k8s.io/name=ha-174833
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_30T18_40_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Oct 2024 18:40:44 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-174833-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Oct 2024 18:43:48 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 30 Oct 2024 18:42:45 +0000   Wed, 30 Oct 2024 18:44:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 30 Oct 2024 18:42:45 +0000   Wed, 30 Oct 2024 18:44:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 30 Oct 2024 18:42:45 +0000   Wed, 30 Oct 2024 18:44:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 30 Oct 2024 18:42:45 +0000   Wed, 30 Oct 2024 18:44:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.67
	  Hostname:    ha-174833-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 44df5dbbd2d444bb8a426278602ee677
	  System UUID:                44df5dbb-d2d4-44bb-8a42-6278602ee677
	  Boot ID:                    360af464-681d-4348-b7f8-dd08e7d88924
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-mm586                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	  default                     busybox-7dff88458-v6kn9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 etcd-ha-174833-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m36s
	  kube-system                 kindnet-rlzbn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m38s
	  kube-system                 kube-apiserver-ha-174833-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m36s
	  kube-system                 kube-controller-manager-ha-174833-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 kube-proxy-hg2st                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m38s
	  kube-system                 kube-scheduler-ha-174833-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 kube-vip-ha-174833-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m35s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m38s (x8 over 5m38s)  kubelet          Node ha-174833-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m38s (x8 over 5m38s)  kubelet          Node ha-174833-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m38s (x7 over 5m38s)  kubelet          Node ha-174833-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m37s                  node-controller  Node ha-174833-m02 event: Registered Node ha-174833-m02 in Controller
	  Normal  RegisteredNode           5m30s                  node-controller  Node ha-174833-m02 event: Registered Node ha-174833-m02 in Controller
	  Normal  RegisteredNode           4m10s                  node-controller  Node ha-174833-m02 event: Registered Node ha-174833-m02 in Controller
	  Normal  NodeNotReady             112s                   node-controller  Node ha-174833-m02 status is now: NodeNotReady
	
	
	Name:               ha-174833-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-174833-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0
	                    minikube.k8s.io/name=ha-174833
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_30T18_42_06_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Oct 2024 18:42:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-174833-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Oct 2024 18:46:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Oct 2024 18:43:04 +0000   Wed, 30 Oct 2024 18:42:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Oct 2024 18:43:04 +0000   Wed, 30 Oct 2024 18:42:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Oct 2024 18:43:04 +0000   Wed, 30 Oct 2024 18:42:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Oct 2024 18:43:04 +0000   Wed, 30 Oct 2024 18:42:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.238
	  Hostname:    ha-174833-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8a25aeed7bbc4bd4a357771ce914b28b
	  System UUID:                8a25aeed-7bbc-4bd4-a357-771ce914b28b
	  Boot ID:                    3552b03e-4535-4240-8adc-99b111c48f7c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rzbbm                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 etcd-ha-174833-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m18s
	  kube-system                 kindnet-b76pd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m19s
	  kube-system                 kube-apiserver-ha-174833-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-ha-174833-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-g7l7z                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-scheduler-ha-174833-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-vip-ha-174833-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m15s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m19s (x8 over 4m19s)  kubelet          Node ha-174833-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m19s (x8 over 4m19s)  kubelet          Node ha-174833-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m19s (x7 over 4m19s)  kubelet          Node ha-174833-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m17s                  node-controller  Node ha-174833-m03 event: Registered Node ha-174833-m03 in Controller
	  Normal  RegisteredNode           4m15s                  node-controller  Node ha-174833-m03 event: Registered Node ha-174833-m03 in Controller
	  Normal  RegisteredNode           4m10s                  node-controller  Node ha-174833-m03 event: Registered Node ha-174833-m03 in Controller
	
	
	Name:               ha-174833-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-174833-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0
	                    minikube.k8s.io/name=ha-174833
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_30T18_43_15_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Oct 2024 18:43:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-174833-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Oct 2024 18:46:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Oct 2024 18:43:45 +0000   Wed, 30 Oct 2024 18:43:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Oct 2024 18:43:45 +0000   Wed, 30 Oct 2024 18:43:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Oct 2024 18:43:45 +0000   Wed, 30 Oct 2024 18:43:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Oct 2024 18:43:45 +0000   Wed, 30 Oct 2024 18:43:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.123
	  Hostname:    ha-174833-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 65b27c1ce02d45b78ed3fcddd1aae236
	  System UUID:                65b27c1c-e02d-45b7-8ed3-fcddd1aae236
	  Boot ID:                    25699951-947c-4e74-aa23-b7f7f9d75023
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2dhq5       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m9s
	  kube-system                 kube-proxy-nzl42    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m4s                 kube-proxy       
	  Normal  CIDRAssignmentFailed     3m9s                 cidrAllocator    Node ha-174833-m04 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  3m9s (x2 over 3m9s)  kubelet          Node ha-174833-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m9s (x2 over 3m9s)  kubelet          Node ha-174833-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m9s (x2 over 3m9s)  kubelet          Node ha-174833-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m8s                 node-controller  Node ha-174833-m04 event: Registered Node ha-174833-m04 in Controller
	  Normal  RegisteredNode           3m6s                 node-controller  Node ha-174833-m04 event: Registered Node ha-174833-m04 in Controller
	  Normal  RegisteredNode           3m6s                 node-controller  Node ha-174833-m04 event: Registered Node ha-174833-m04 in Controller
	  Normal  NodeReady                2m48s                kubelet          Node ha-174833-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct30 18:39] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050141] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040202] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.858254] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.508080] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.580074] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.619811] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.059036] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050086] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.189200] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.106863] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.256172] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +3.944359] systemd-fstab-generator[750]: Ignoring "noauto" option for root device
	[  +4.089078] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.056939] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.232740] kauditd_printk_skb: 74 callbacks suppressed
	[  +1.917340] systemd-fstab-generator[1295]: Ignoring "noauto" option for root device
	[  +5.757118] kauditd_printk_skb: 23 callbacks suppressed
	[Oct30 18:40] kauditd_printk_skb: 32 callbacks suppressed
	[ +47.325044] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [381be95e92ca6d096c838a11caffeef70b5bdf29633660086db3a591af3efa3c] <==
	{"level":"warn","ts":"2024-10-30T18:46:22.850682Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:22.859022Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:22.862763Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:22.870621Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:22.872159Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:22.878824Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:22.885976Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:22.890763Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:22.894265Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:22.899733Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:22.906656Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:22.929987Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:22.938447Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:22.947455Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:22.962469Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:22.972286Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:22.979439Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:22.987760Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:23.001092Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:23.004834Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:23.008046Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:23.014164Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:23.020183Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:23.053033Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:23.072285Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:46:23 up 7 min,  0 users,  load average: 0.21, 0.35, 0.20
	Linux ha-174833 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [46301d1401a148dee81e2e167582dcebc8e9ce533e3150308e7eb51799e4f1ef] <==
	I1030 18:45:44.313971       1 main.go:324] Node ha-174833-m04 has CIDR [10.244.3.0/24] 
	I1030 18:45:54.322170       1 main.go:297] Handling node with IPs: map[192.168.39.141:{}]
	I1030 18:45:54.322244       1 main.go:301] handling current node
	I1030 18:45:54.322259       1 main.go:297] Handling node with IPs: map[192.168.39.67:{}]
	I1030 18:45:54.322265       1 main.go:324] Node ha-174833-m02 has CIDR [10.244.1.0/24] 
	I1030 18:45:54.322528       1 main.go:297] Handling node with IPs: map[192.168.39.238:{}]
	I1030 18:45:54.322552       1 main.go:324] Node ha-174833-m03 has CIDR [10.244.2.0/24] 
	I1030 18:45:54.322662       1 main.go:297] Handling node with IPs: map[192.168.39.123:{}]
	I1030 18:45:54.322683       1 main.go:324] Node ha-174833-m04 has CIDR [10.244.3.0/24] 
	I1030 18:46:04.313396       1 main.go:297] Handling node with IPs: map[192.168.39.141:{}]
	I1030 18:46:04.313498       1 main.go:301] handling current node
	I1030 18:46:04.313526       1 main.go:297] Handling node with IPs: map[192.168.39.67:{}]
	I1030 18:46:04.313545       1 main.go:324] Node ha-174833-m02 has CIDR [10.244.1.0/24] 
	I1030 18:46:04.313781       1 main.go:297] Handling node with IPs: map[192.168.39.238:{}]
	I1030 18:46:04.313810       1 main.go:324] Node ha-174833-m03 has CIDR [10.244.2.0/24] 
	I1030 18:46:04.313989       1 main.go:297] Handling node with IPs: map[192.168.39.123:{}]
	I1030 18:46:04.314019       1 main.go:324] Node ha-174833-m04 has CIDR [10.244.3.0/24] 
	I1030 18:46:14.313413       1 main.go:297] Handling node with IPs: map[192.168.39.141:{}]
	I1030 18:46:14.313476       1 main.go:301] handling current node
	I1030 18:46:14.313504       1 main.go:297] Handling node with IPs: map[192.168.39.67:{}]
	I1030 18:46:14.313513       1 main.go:324] Node ha-174833-m02 has CIDR [10.244.1.0/24] 
	I1030 18:46:14.313806       1 main.go:297] Handling node with IPs: map[192.168.39.238:{}]
	I1030 18:46:14.313832       1 main.go:324] Node ha-174833-m03 has CIDR [10.244.2.0/24] 
	I1030 18:46:14.314013       1 main.go:297] Handling node with IPs: map[192.168.39.123:{}]
	I1030 18:46:14.314036       1 main.go:324] Node ha-174833-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [661ed7108dbf576219f0d0850f1ff6e5884d78718a480882365b8f283ab57acb] <==
	I1030 18:39:50.264612       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1030 18:39:50.401162       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1030 18:39:50.407669       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.141]
	I1030 18:39:50.408487       1 controller.go:615] quota admission added evaluator for: endpoints
	I1030 18:39:50.417171       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1030 18:39:50.434785       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1030 18:39:51.992504       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1030 18:39:52.038007       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1030 18:39:52.050097       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1030 18:39:55.887886       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1030 18:39:56.039666       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1030 18:42:42.298130       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41446: use of closed network connection
	E1030 18:42:42.500141       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41460: use of closed network connection
	E1030 18:42:42.681190       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41478: use of closed network connection
	E1030 18:42:42.876163       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41496: use of closed network connection
	E1030 18:42:43.053880       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41524: use of closed network connection
	E1030 18:42:43.422726       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41570: use of closed network connection
	E1030 18:42:43.605703       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41578: use of closed network connection
	E1030 18:42:43.785641       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41594: use of closed network connection
	E1030 18:42:44.079143       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41622: use of closed network connection
	E1030 18:42:44.278108       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41630: use of closed network connection
	E1030 18:42:44.464009       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41654: use of closed network connection
	E1030 18:42:44.647039       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41670: use of closed network connection
	E1030 18:42:44.825565       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41686: use of closed network connection
	E1030 18:42:45.007583       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41704: use of closed network connection
	
	
	==> kube-controller-manager [db863ebdc17e0d81937b5738a5fe433285fd91846fa54c761cf0cd4cd4987b73] <==
	I1030 18:43:14.768963       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:14.886660       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:15.225099       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-174833-m04"
	I1030 18:43:15.270413       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:15.350905       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:17.242429       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:17.306242       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:17.754966       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:17.845608       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:24.906507       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:35.742819       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:35.743714       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-174833-m04"
	I1030 18:43:35.758129       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:37.268796       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:45.220918       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:44:30.252088       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m02"
	I1030 18:44:30.252535       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-174833-m04"
	I1030 18:44:30.280327       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m02"
	I1030 18:44:30.294546       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="25.947854ms"
	I1030 18:44:30.294861       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="105.928µs"
	I1030 18:44:30.441730       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="22.437828ms"
	I1030 18:44:30.442971       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="183.461µs"
	I1030 18:44:32.399995       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m02"
	I1030 18:44:35.500584       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m02"
	I1030 18:45:28.632096       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833"
	
	
	==> kube-proxy [634060e657ba25894ea35d4e8b8429b57de139396a52f53af88876e683de5740] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1030 18:39:57.657528       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1030 18:39:57.672099       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.141"]
	E1030 18:39:57.672270       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1030 18:39:57.707431       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1030 18:39:57.707476       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1030 18:39:57.707498       1 server_linux.go:169] "Using iptables Proxier"
	I1030 18:39:57.710062       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1030 18:39:57.710384       1 server.go:483] "Version info" version="v1.31.2"
	I1030 18:39:57.710412       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1030 18:39:57.711719       1 config.go:199] "Starting service config controller"
	I1030 18:39:57.711756       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1030 18:39:57.711783       1 config.go:105] "Starting endpoint slice config controller"
	I1030 18:39:57.711787       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1030 18:39:57.712612       1 config.go:328] "Starting node config controller"
	I1030 18:39:57.712701       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1030 18:39:57.812186       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1030 18:39:57.812427       1 shared_informer.go:320] Caches are synced for service config
	I1030 18:39:57.813054       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6f0fb508f1f86d4bc039adab40689367f34f5141a90043b092e43663b1f959a6] <==
	W1030 18:39:49.816172       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1030 18:39:49.816268       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1030 18:39:49.949917       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1030 18:39:49.949971       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1030 18:39:49.991072       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1030 18:39:49.991150       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1030 18:39:52.691806       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1030 18:42:33.022088       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-mm586\": pod busybox-7dff88458-mm586 is already assigned to node \"ha-174833-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-mm586" node="ha-174833-m03"
	E1030 18:42:33.022366       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-mm586\": pod busybox-7dff88458-mm586 is already assigned to node \"ha-174833-m02\"" pod="default/busybox-7dff88458-mm586"
	E1030 18:43:14.801891       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-nzl42\": pod kube-proxy-nzl42 is already assigned to node \"ha-174833-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-nzl42" node="ha-174833-m04"
	E1030 18:43:14.807808       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 3291acf1-7798-4998-95fd-5094835e017f(kube-system/kube-proxy-nzl42) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-nzl42"
	E1030 18:43:14.807930       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-nzl42\": pod kube-proxy-nzl42 is already assigned to node \"ha-174833-m04\"" pod="kube-system/kube-proxy-nzl42"
	I1030 18:43:14.809848       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-nzl42" node="ha-174833-m04"
	E1030 18:43:14.810858       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-ptwbp\": pod kindnet-ptwbp is already assigned to node \"ha-174833-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-ptwbp" node="ha-174833-m04"
	E1030 18:43:14.814494       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 3144d47c-0cef-414b-b657-6a3c10ada751(kube-system/kindnet-ptwbp) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-ptwbp"
	E1030 18:43:14.814760       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-ptwbp\": pod kindnet-ptwbp is already assigned to node \"ha-174833-m04\"" pod="kube-system/kindnet-ptwbp"
	I1030 18:43:14.814869       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-ptwbp" node="ha-174833-m04"
	E1030 18:43:14.859158       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-vp4bf\": pod kube-proxy-vp4bf is already assigned to node \"ha-174833-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-vp4bf" node="ha-174833-m04"
	E1030 18:43:14.859832       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 51293c2a-e424-4d2b-a692-1d8df3e4eb88(kube-system/kube-proxy-vp4bf) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-vp4bf"
	E1030 18:43:14.860153       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-vp4bf\": pod kube-proxy-vp4bf is already assigned to node \"ha-174833-m04\"" pod="kube-system/kube-proxy-vp4bf"
	I1030 18:43:14.860458       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-vp4bf" node="ha-174833-m04"
	E1030 18:43:14.864834       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-dsxh6\": pod kindnet-dsxh6 is already assigned to node \"ha-174833-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-dsxh6" node="ha-174833-m04"
	E1030 18:43:14.866342       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 3cf9c20d-84c1-4bd6-8f34-453bee8cc673(kube-system/kindnet-dsxh6) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-dsxh6"
	E1030 18:43:14.866529       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-dsxh6\": pod kindnet-dsxh6 is already assigned to node \"ha-174833-m04\"" pod="kube-system/kindnet-dsxh6"
	I1030 18:43:14.866552       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-dsxh6" node="ha-174833-m04"
	
	
	==> kubelet <==
	Oct 30 18:44:52 ha-174833 kubelet[1302]: E1030 18:44:52.044104    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313892043714010,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:44:52 ha-174833 kubelet[1302]: E1030 18:44:52.044143    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313892043714010,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:02 ha-174833 kubelet[1302]: E1030 18:45:02.047183    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313902046655317,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:02 ha-174833 kubelet[1302]: E1030 18:45:02.047499    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313902046655317,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:12 ha-174833 kubelet[1302]: E1030 18:45:12.048946    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313912048650625,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:12 ha-174833 kubelet[1302]: E1030 18:45:12.049303    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313912048650625,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:22 ha-174833 kubelet[1302]: E1030 18:45:22.050794    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313922050546484,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:22 ha-174833 kubelet[1302]: E1030 18:45:22.050834    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313922050546484,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:32 ha-174833 kubelet[1302]: E1030 18:45:32.053552    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313932053080303,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:32 ha-174833 kubelet[1302]: E1030 18:45:32.053658    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313932053080303,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:42 ha-174833 kubelet[1302]: E1030 18:45:42.055784    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313942055446197,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:42 ha-174833 kubelet[1302]: E1030 18:45:42.056077    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313942055446197,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:51 ha-174833 kubelet[1302]: E1030 18:45:51.922951    1302 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 30 18:45:51 ha-174833 kubelet[1302]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 30 18:45:51 ha-174833 kubelet[1302]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 30 18:45:51 ha-174833 kubelet[1302]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 30 18:45:51 ha-174833 kubelet[1302]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 30 18:45:52 ha-174833 kubelet[1302]: E1030 18:45:52.058449    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313952057983888,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:52 ha-174833 kubelet[1302]: E1030 18:45:52.058518    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313952057983888,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:46:02 ha-174833 kubelet[1302]: E1030 18:46:02.060855    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313962060627661,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:46:02 ha-174833 kubelet[1302]: E1030 18:46:02.060895    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313962060627661,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:46:12 ha-174833 kubelet[1302]: E1030 18:46:12.062294    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313972061933470,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:46:12 ha-174833 kubelet[1302]: E1030 18:46:12.062632    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313972061933470,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:46:22 ha-174833 kubelet[1302]: E1030 18:46:22.064946    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313982064558351,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:46:22 ha-174833 kubelet[1302]: E1030 18:46:22.064979    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313982064558351,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-174833 -n ha-174833
helpers_test.go:261: (dbg) Run:  kubectl --context ha-174833 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.64s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (6.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 node start m02 -v=7 --alsologtostderr
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-174833 status -v=7 --alsologtostderr: (3.937552806s)
ha_test.go:437: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-174833 status -v=7 --alsologtostderr": 
ha_test.go:440: status says not all four hosts are running: args "out/minikube-linux-amd64 -p ha-174833 status -v=7 --alsologtostderr": 
ha_test.go:443: status says not all four kubelets are running: args "out/minikube-linux-amd64 -p ha-174833 status -v=7 --alsologtostderr": 
ha_test.go:446: status says not all three apiservers are running: args "out/minikube-linux-amd64 -p ha-174833 status -v=7 --alsologtostderr": 
ha_test.go:450: (dbg) Run:  kubectl get nodes
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-174833 -n ha-174833
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-174833 logs -n 25: (1.352283359s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-174833 cp ha-174833-m03:/home/docker/cp-test.txt                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833:/home/docker/cp-test_ha-174833-m03_ha-174833.txt                       |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n ha-174833 sudo cat                                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | /home/docker/cp-test_ha-174833-m03_ha-174833.txt                                 |           |         |         |                     |                     |
	| cp      | ha-174833 cp ha-174833-m03:/home/docker/cp-test.txt                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m02:/home/docker/cp-test_ha-174833-m03_ha-174833-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n ha-174833-m02 sudo cat                                          | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | /home/docker/cp-test_ha-174833-m03_ha-174833-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-174833 cp ha-174833-m03:/home/docker/cp-test.txt                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m04:/home/docker/cp-test_ha-174833-m03_ha-174833-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n ha-174833-m04 sudo cat                                          | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | /home/docker/cp-test_ha-174833-m03_ha-174833-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-174833 cp testdata/cp-test.txt                                                | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-174833 cp ha-174833-m04:/home/docker/cp-test.txt                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1983504479/001/cp-test_ha-174833-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-174833 cp ha-174833-m04:/home/docker/cp-test.txt                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833:/home/docker/cp-test_ha-174833-m04_ha-174833.txt                       |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n ha-174833 sudo cat                                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | /home/docker/cp-test_ha-174833-m04_ha-174833.txt                                 |           |         |         |                     |                     |
	| cp      | ha-174833 cp ha-174833-m04:/home/docker/cp-test.txt                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m02:/home/docker/cp-test_ha-174833-m04_ha-174833-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n ha-174833-m02 sudo cat                                          | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | /home/docker/cp-test_ha-174833-m04_ha-174833-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-174833 cp ha-174833-m04:/home/docker/cp-test.txt                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m03:/home/docker/cp-test_ha-174833-m04_ha-174833-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n ha-174833-m03 sudo cat                                          | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | /home/docker/cp-test_ha-174833-m04_ha-174833-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-174833 node stop m02 -v=7                                                     | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-174833 node start m02 -v=7                                                    | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:46 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/30 18:39:13
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1030 18:39:13.284465  400041 out.go:345] Setting OutFile to fd 1 ...
	I1030 18:39:13.284583  400041 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 18:39:13.284591  400041 out.go:358] Setting ErrFile to fd 2...
	I1030 18:39:13.284596  400041 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 18:39:13.284767  400041 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19883-381834/.minikube/bin
	I1030 18:39:13.285341  400041 out.go:352] Setting JSON to false
	I1030 18:39:13.286279  400041 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":8496,"bootTime":1730305057,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1030 18:39:13.286383  400041 start.go:139] virtualization: kvm guest
	I1030 18:39:13.288640  400041 out.go:177] * [ha-174833] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1030 18:39:13.290653  400041 notify.go:220] Checking for updates...
	I1030 18:39:13.290717  400041 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 18:39:13.292349  400041 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 18:39:13.293858  400041 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 18:39:13.295309  400041 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 18:39:13.296710  400041 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1030 18:39:13.298107  400041 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 18:39:13.299548  400041 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 18:39:13.333903  400041 out.go:177] * Using the kvm2 driver based on user configuration
	I1030 18:39:13.335174  400041 start.go:297] selected driver: kvm2
	I1030 18:39:13.335194  400041 start.go:901] validating driver "kvm2" against <nil>
	I1030 18:39:13.335206  400041 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 18:39:13.335896  400041 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 18:39:13.336007  400041 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19883-381834/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1030 18:39:13.350868  400041 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1030 18:39:13.350946  400041 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1030 18:39:13.351232  400041 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 18:39:13.351271  400041 cni.go:84] Creating CNI manager for ""
	I1030 18:39:13.351324  400041 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1030 18:39:13.351332  400041 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1030 18:39:13.351398  400041 start.go:340] cluster config:
	{Name:ha-174833 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-174833 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 18:39:13.351547  400041 iso.go:125] acquiring lock: {Name:mk4a5115b605a7a5e9069193daa664d721189792 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 18:39:13.353340  400041 out.go:177] * Starting "ha-174833" primary control-plane node in "ha-174833" cluster
	I1030 18:39:13.354531  400041 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 18:39:13.354568  400041 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1030 18:39:13.354580  400041 cache.go:56] Caching tarball of preloaded images
	I1030 18:39:13.354663  400041 preload.go:172] Found /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1030 18:39:13.354676  400041 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1030 18:39:13.355016  400041 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/config.json ...
	I1030 18:39:13.355043  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/config.json: {Name:mkc5b46cd8e85bcdd2d75c56d8807d384c7babe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:39:13.355179  400041 start.go:360] acquireMachinesLock for ha-174833: {Name:mk35e25f53fa8cfadb39ca0ecdccfc2b3fbe845b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 18:39:13.355220  400041 start.go:364] duration metric: took 25.55µs to acquireMachinesLock for "ha-174833"
	I1030 18:39:13.355242  400041 start.go:93] Provisioning new machine with config: &{Name:ha-174833 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-174833 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 18:39:13.355302  400041 start.go:125] createHost starting for "" (driver="kvm2")
	I1030 18:39:13.356866  400041 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1030 18:39:13.357003  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:39:13.357058  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:39:13.371132  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37311
	I1030 18:39:13.371590  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:39:13.372159  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:39:13.372180  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:39:13.372504  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:39:13.372689  400041 main.go:141] libmachine: (ha-174833) Calling .GetMachineName
	I1030 18:39:13.372808  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:39:13.372956  400041 start.go:159] libmachine.API.Create for "ha-174833" (driver="kvm2")
	I1030 18:39:13.372989  400041 client.go:168] LocalClient.Create starting
	I1030 18:39:13.373021  400041 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem
	I1030 18:39:13.373056  400041 main.go:141] libmachine: Decoding PEM data...
	I1030 18:39:13.373078  400041 main.go:141] libmachine: Parsing certificate...
	I1030 18:39:13.373144  400041 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem
	I1030 18:39:13.373168  400041 main.go:141] libmachine: Decoding PEM data...
	I1030 18:39:13.373183  400041 main.go:141] libmachine: Parsing certificate...
	I1030 18:39:13.373207  400041 main.go:141] libmachine: Running pre-create checks...
	I1030 18:39:13.373219  400041 main.go:141] libmachine: (ha-174833) Calling .PreCreateCheck
	I1030 18:39:13.373569  400041 main.go:141] libmachine: (ha-174833) Calling .GetConfigRaw
	I1030 18:39:13.373996  400041 main.go:141] libmachine: Creating machine...
	I1030 18:39:13.374012  400041 main.go:141] libmachine: (ha-174833) Calling .Create
	I1030 18:39:13.374145  400041 main.go:141] libmachine: (ha-174833) Creating KVM machine...
	I1030 18:39:13.375320  400041 main.go:141] libmachine: (ha-174833) DBG | found existing default KVM network
	I1030 18:39:13.375998  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:13.375838  400064 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002111f0}
	I1030 18:39:13.376021  400041 main.go:141] libmachine: (ha-174833) DBG | created network xml: 
	I1030 18:39:13.376034  400041 main.go:141] libmachine: (ha-174833) DBG | <network>
	I1030 18:39:13.376048  400041 main.go:141] libmachine: (ha-174833) DBG |   <name>mk-ha-174833</name>
	I1030 18:39:13.376057  400041 main.go:141] libmachine: (ha-174833) DBG |   <dns enable='no'/>
	I1030 18:39:13.376066  400041 main.go:141] libmachine: (ha-174833) DBG |   
	I1030 18:39:13.376076  400041 main.go:141] libmachine: (ha-174833) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1030 18:39:13.376085  400041 main.go:141] libmachine: (ha-174833) DBG |     <dhcp>
	I1030 18:39:13.376097  400041 main.go:141] libmachine: (ha-174833) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1030 18:39:13.376112  400041 main.go:141] libmachine: (ha-174833) DBG |     </dhcp>
	I1030 18:39:13.376121  400041 main.go:141] libmachine: (ha-174833) DBG |   </ip>
	I1030 18:39:13.376134  400041 main.go:141] libmachine: (ha-174833) DBG |   
	I1030 18:39:13.376145  400041 main.go:141] libmachine: (ha-174833) DBG | </network>
	I1030 18:39:13.376153  400041 main.go:141] libmachine: (ha-174833) DBG | 
	I1030 18:39:13.380994  400041 main.go:141] libmachine: (ha-174833) DBG | trying to create private KVM network mk-ha-174833 192.168.39.0/24...
	I1030 18:39:13.444397  400041 main.go:141] libmachine: (ha-174833) DBG | private KVM network mk-ha-174833 192.168.39.0/24 created
	I1030 18:39:13.444439  400041 main.go:141] libmachine: (ha-174833) Setting up store path in /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833 ...
	I1030 18:39:13.444454  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:13.444367  400064 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 18:39:13.444474  400041 main.go:141] libmachine: (ha-174833) Building disk image from file:///home/jenkins/minikube-integration/19883-381834/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1030 18:39:13.444565  400041 main.go:141] libmachine: (ha-174833) Downloading /home/jenkins/minikube-integration/19883-381834/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19883-381834/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1030 18:39:13.725521  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:13.725350  400064 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa...
	I1030 18:39:13.832228  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:13.832066  400064 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/ha-174833.rawdisk...
	I1030 18:39:13.832262  400041 main.go:141] libmachine: (ha-174833) DBG | Writing magic tar header
	I1030 18:39:13.832279  400041 main.go:141] libmachine: (ha-174833) DBG | Writing SSH key tar header
	I1030 18:39:13.832291  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:13.832203  400064 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833 ...
	I1030 18:39:13.832302  400041 main.go:141] libmachine: (ha-174833) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833
	I1030 18:39:13.832373  400041 main.go:141] libmachine: (ha-174833) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833 (perms=drwx------)
	I1030 18:39:13.832401  400041 main.go:141] libmachine: (ha-174833) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube/machines (perms=drwxr-xr-x)
	I1030 18:39:13.832414  400041 main.go:141] libmachine: (ha-174833) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube/machines
	I1030 18:39:13.832431  400041 main.go:141] libmachine: (ha-174833) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 18:39:13.832442  400041 main.go:141] libmachine: (ha-174833) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834
	I1030 18:39:13.832452  400041 main.go:141] libmachine: (ha-174833) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1030 18:39:13.832462  400041 main.go:141] libmachine: (ha-174833) DBG | Checking permissions on dir: /home/jenkins
	I1030 18:39:13.832473  400041 main.go:141] libmachine: (ha-174833) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube (perms=drwxr-xr-x)
	I1030 18:39:13.832490  400041 main.go:141] libmachine: (ha-174833) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834 (perms=drwxrwxr-x)
	I1030 18:39:13.832506  400041 main.go:141] libmachine: (ha-174833) DBG | Checking permissions on dir: /home
	I1030 18:39:13.832517  400041 main.go:141] libmachine: (ha-174833) DBG | Skipping /home - not owner
	I1030 18:39:13.832528  400041 main.go:141] libmachine: (ha-174833) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1030 18:39:13.832538  400041 main.go:141] libmachine: (ha-174833) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1030 18:39:13.832550  400041 main.go:141] libmachine: (ha-174833) Creating domain...
	I1030 18:39:13.833717  400041 main.go:141] libmachine: (ha-174833) define libvirt domain using xml: 
	I1030 18:39:13.833738  400041 main.go:141] libmachine: (ha-174833) <domain type='kvm'>
	I1030 18:39:13.833744  400041 main.go:141] libmachine: (ha-174833)   <name>ha-174833</name>
	I1030 18:39:13.833752  400041 main.go:141] libmachine: (ha-174833)   <memory unit='MiB'>2200</memory>
	I1030 18:39:13.833758  400041 main.go:141] libmachine: (ha-174833)   <vcpu>2</vcpu>
	I1030 18:39:13.833762  400041 main.go:141] libmachine: (ha-174833)   <features>
	I1030 18:39:13.833766  400041 main.go:141] libmachine: (ha-174833)     <acpi/>
	I1030 18:39:13.833770  400041 main.go:141] libmachine: (ha-174833)     <apic/>
	I1030 18:39:13.833774  400041 main.go:141] libmachine: (ha-174833)     <pae/>
	I1030 18:39:13.833794  400041 main.go:141] libmachine: (ha-174833)     
	I1030 18:39:13.833807  400041 main.go:141] libmachine: (ha-174833)   </features>
	I1030 18:39:13.833814  400041 main.go:141] libmachine: (ha-174833)   <cpu mode='host-passthrough'>
	I1030 18:39:13.833838  400041 main.go:141] libmachine: (ha-174833)   
	I1030 18:39:13.833857  400041 main.go:141] libmachine: (ha-174833)   </cpu>
	I1030 18:39:13.833863  400041 main.go:141] libmachine: (ha-174833)   <os>
	I1030 18:39:13.833868  400041 main.go:141] libmachine: (ha-174833)     <type>hvm</type>
	I1030 18:39:13.833884  400041 main.go:141] libmachine: (ha-174833)     <boot dev='cdrom'/>
	I1030 18:39:13.833888  400041 main.go:141] libmachine: (ha-174833)     <boot dev='hd'/>
	I1030 18:39:13.833904  400041 main.go:141] libmachine: (ha-174833)     <bootmenu enable='no'/>
	I1030 18:39:13.833912  400041 main.go:141] libmachine: (ha-174833)   </os>
	I1030 18:39:13.833917  400041 main.go:141] libmachine: (ha-174833)   <devices>
	I1030 18:39:13.833922  400041 main.go:141] libmachine: (ha-174833)     <disk type='file' device='cdrom'>
	I1030 18:39:13.834007  400041 main.go:141] libmachine: (ha-174833)       <source file='/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/boot2docker.iso'/>
	I1030 18:39:13.834043  400041 main.go:141] libmachine: (ha-174833)       <target dev='hdc' bus='scsi'/>
	I1030 18:39:13.834066  400041 main.go:141] libmachine: (ha-174833)       <readonly/>
	I1030 18:39:13.834080  400041 main.go:141] libmachine: (ha-174833)     </disk>
	I1030 18:39:13.834092  400041 main.go:141] libmachine: (ha-174833)     <disk type='file' device='disk'>
	I1030 18:39:13.834107  400041 main.go:141] libmachine: (ha-174833)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1030 18:39:13.834134  400041 main.go:141] libmachine: (ha-174833)       <source file='/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/ha-174833.rawdisk'/>
	I1030 18:39:13.834146  400041 main.go:141] libmachine: (ha-174833)       <target dev='hda' bus='virtio'/>
	I1030 18:39:13.834163  400041 main.go:141] libmachine: (ha-174833)     </disk>
	I1030 18:39:13.834179  400041 main.go:141] libmachine: (ha-174833)     <interface type='network'>
	I1030 18:39:13.834191  400041 main.go:141] libmachine: (ha-174833)       <source network='mk-ha-174833'/>
	I1030 18:39:13.834199  400041 main.go:141] libmachine: (ha-174833)       <model type='virtio'/>
	I1030 18:39:13.834204  400041 main.go:141] libmachine: (ha-174833)     </interface>
	I1030 18:39:13.834213  400041 main.go:141] libmachine: (ha-174833)     <interface type='network'>
	I1030 18:39:13.834219  400041 main.go:141] libmachine: (ha-174833)       <source network='default'/>
	I1030 18:39:13.834228  400041 main.go:141] libmachine: (ha-174833)       <model type='virtio'/>
	I1030 18:39:13.834233  400041 main.go:141] libmachine: (ha-174833)     </interface>
	I1030 18:39:13.834244  400041 main.go:141] libmachine: (ha-174833)     <serial type='pty'>
	I1030 18:39:13.834261  400041 main.go:141] libmachine: (ha-174833)       <target port='0'/>
	I1030 18:39:13.834275  400041 main.go:141] libmachine: (ha-174833)     </serial>
	I1030 18:39:13.834287  400041 main.go:141] libmachine: (ha-174833)     <console type='pty'>
	I1030 18:39:13.834295  400041 main.go:141] libmachine: (ha-174833)       <target type='serial' port='0'/>
	I1030 18:39:13.834310  400041 main.go:141] libmachine: (ha-174833)     </console>
	I1030 18:39:13.834320  400041 main.go:141] libmachine: (ha-174833)     <rng model='virtio'>
	I1030 18:39:13.834333  400041 main.go:141] libmachine: (ha-174833)       <backend model='random'>/dev/random</backend>
	I1030 18:39:13.834342  400041 main.go:141] libmachine: (ha-174833)     </rng>
	I1030 18:39:13.834351  400041 main.go:141] libmachine: (ha-174833)     
	I1030 18:39:13.834359  400041 main.go:141] libmachine: (ha-174833)     
	I1030 18:39:13.834368  400041 main.go:141] libmachine: (ha-174833)   </devices>
	I1030 18:39:13.834377  400041 main.go:141] libmachine: (ha-174833) </domain>
	I1030 18:39:13.834388  400041 main.go:141] libmachine: (ha-174833) 
	I1030 18:39:13.838852  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:67:40:5d in network default
	I1030 18:39:13.839421  400041 main.go:141] libmachine: (ha-174833) Ensuring networks are active...
	I1030 18:39:13.839441  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:13.840041  400041 main.go:141] libmachine: (ha-174833) Ensuring network default is active
	I1030 18:39:13.840342  400041 main.go:141] libmachine: (ha-174833) Ensuring network mk-ha-174833 is active
	I1030 18:39:13.840783  400041 main.go:141] libmachine: (ha-174833) Getting domain xml...
	I1030 18:39:13.841490  400041 main.go:141] libmachine: (ha-174833) Creating domain...
	I1030 18:39:15.028258  400041 main.go:141] libmachine: (ha-174833) Waiting to get IP...
	I1030 18:39:15.029201  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:15.029564  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:15.029614  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:15.029561  400064 retry.go:31] will retry after 241.896456ms: waiting for machine to come up
	I1030 18:39:15.272995  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:15.273461  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:15.273488  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:15.273413  400064 retry.go:31] will retry after 260.838664ms: waiting for machine to come up
	I1030 18:39:15.535845  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:15.536295  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:15.536316  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:15.536255  400064 retry.go:31] will retry after 479.733534ms: waiting for machine to come up
	I1030 18:39:16.017897  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:16.018269  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:16.018294  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:16.018228  400064 retry.go:31] will retry after 392.371571ms: waiting for machine to come up
	I1030 18:39:16.412626  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:16.413050  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:16.413080  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:16.412991  400064 retry.go:31] will retry after 692.689396ms: waiting for machine to come up
	I1030 18:39:17.106954  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:17.107478  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:17.107955  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:17.107422  400064 retry.go:31] will retry after 832.987361ms: waiting for machine to come up
	I1030 18:39:17.942300  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:17.942709  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:17.942756  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:17.942670  400064 retry.go:31] will retry after 1.191938703s: waiting for machine to come up
	I1030 18:39:19.135752  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:19.136105  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:19.136132  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:19.136082  400064 retry.go:31] will retry after 978.475739ms: waiting for machine to come up
	I1030 18:39:20.116239  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:20.116734  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:20.116762  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:20.116673  400064 retry.go:31] will retry after 1.671512667s: waiting for machine to come up
	I1030 18:39:21.790628  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:21.791129  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:21.791157  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:21.791069  400064 retry.go:31] will retry after 2.145808112s: waiting for machine to come up
	I1030 18:39:23.938308  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:23.938724  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:23.938750  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:23.938677  400064 retry.go:31] will retry after 2.206607406s: waiting for machine to come up
	I1030 18:39:26.148104  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:26.148464  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:26.148498  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:26.148437  400064 retry.go:31] will retry after 3.57155807s: waiting for machine to come up
	I1030 18:39:29.721895  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:29.722283  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:29.722306  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:29.722235  400064 retry.go:31] will retry after 4.087469223s: waiting for machine to come up
	I1030 18:39:33.811039  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:33.811489  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has current primary IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:33.811515  400041 main.go:141] libmachine: (ha-174833) Found IP for machine: 192.168.39.141
	I1030 18:39:33.811524  400041 main.go:141] libmachine: (ha-174833) Reserving static IP address...
	I1030 18:39:33.811821  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find host DHCP lease matching {name: "ha-174833", mac: "52:54:00:fd:5e:ca", ip: "192.168.39.141"} in network mk-ha-174833
	I1030 18:39:33.884143  400041 main.go:141] libmachine: (ha-174833) Reserved static IP address: 192.168.39.141
	I1030 18:39:33.884173  400041 main.go:141] libmachine: (ha-174833) DBG | Getting to WaitForSSH function...
	I1030 18:39:33.884180  400041 main.go:141] libmachine: (ha-174833) Waiting for SSH to be available...
	I1030 18:39:33.886594  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:33.886971  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:33.886995  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:33.887140  400041 main.go:141] libmachine: (ha-174833) DBG | Using SSH client type: external
	I1030 18:39:33.887229  400041 main.go:141] libmachine: (ha-174833) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa (-rw-------)
	I1030 18:39:33.887264  400041 main.go:141] libmachine: (ha-174833) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.141 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 18:39:33.887276  400041 main.go:141] libmachine: (ha-174833) DBG | About to run SSH command:
	I1030 18:39:33.887284  400041 main.go:141] libmachine: (ha-174833) DBG | exit 0
	I1030 18:39:34.010284  400041 main.go:141] libmachine: (ha-174833) DBG | SSH cmd err, output: <nil>: 
	I1030 18:39:34.010612  400041 main.go:141] libmachine: (ha-174833) KVM machine creation complete!
	I1030 18:39:34.010940  400041 main.go:141] libmachine: (ha-174833) Calling .GetConfigRaw
	I1030 18:39:34.011543  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:39:34.011721  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:39:34.011891  400041 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1030 18:39:34.011905  400041 main.go:141] libmachine: (ha-174833) Calling .GetState
	I1030 18:39:34.013168  400041 main.go:141] libmachine: Detecting operating system of created instance...
	I1030 18:39:34.013181  400041 main.go:141] libmachine: Waiting for SSH to be available...
	I1030 18:39:34.013186  400041 main.go:141] libmachine: Getting to WaitForSSH function...
	I1030 18:39:34.013192  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:34.015485  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.015842  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:34.015874  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.015997  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:34.016168  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:34.016323  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:34.016452  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:34.016738  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:39:34.016961  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1030 18:39:34.016974  400041 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1030 18:39:34.117708  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 18:39:34.117732  400041 main.go:141] libmachine: Detecting the provisioner...
	I1030 18:39:34.117739  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:34.120384  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.120816  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:34.120860  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.120990  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:34.121177  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:34.121322  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:34.121422  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:34.121534  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:39:34.121721  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1030 18:39:34.121734  400041 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1030 18:39:34.222936  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1030 18:39:34.223027  400041 main.go:141] libmachine: found compatible host: buildroot
	I1030 18:39:34.223040  400041 main.go:141] libmachine: Provisioning with buildroot...
	I1030 18:39:34.223052  400041 main.go:141] libmachine: (ha-174833) Calling .GetMachineName
	I1030 18:39:34.223321  400041 buildroot.go:166] provisioning hostname "ha-174833"
	I1030 18:39:34.223356  400041 main.go:141] libmachine: (ha-174833) Calling .GetMachineName
	I1030 18:39:34.223546  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:34.225998  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.226300  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:34.226323  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.226503  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:34.226662  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:34.226803  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:34.226914  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:34.227040  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:39:34.227266  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1030 18:39:34.227279  400041 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-174833 && echo "ha-174833" | sudo tee /etc/hostname
	I1030 18:39:34.340995  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-174833
	
	I1030 18:39:34.341029  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:34.343841  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.344138  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:34.344167  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.344368  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:34.344558  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:34.344679  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:34.344790  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:34.344900  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:39:34.345070  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1030 18:39:34.345090  400041 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-174833' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-174833/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-174833' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 18:39:34.455073  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 18:39:34.455103  400041 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 18:39:34.455126  400041 buildroot.go:174] setting up certificates
	I1030 18:39:34.455146  400041 provision.go:84] configureAuth start
	I1030 18:39:34.455156  400041 main.go:141] libmachine: (ha-174833) Calling .GetMachineName
	I1030 18:39:34.455453  400041 main.go:141] libmachine: (ha-174833) Calling .GetIP
	I1030 18:39:34.458160  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.458507  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:34.458546  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.458737  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:34.461111  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.461454  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:34.461482  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.461548  400041 provision.go:143] copyHostCerts
	I1030 18:39:34.461581  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 18:39:34.461633  400041 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem, removing ...
	I1030 18:39:34.461648  400041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 18:39:34.461713  400041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 18:39:34.461793  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 18:39:34.461811  400041 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem, removing ...
	I1030 18:39:34.461816  400041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 18:39:34.461840  400041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 18:39:34.461880  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 18:39:34.461896  400041 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem, removing ...
	I1030 18:39:34.461902  400041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 18:39:34.461922  400041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 18:39:34.461968  400041 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.ha-174833 san=[127.0.0.1 192.168.39.141 ha-174833 localhost minikube]
	I1030 18:39:34.715502  400041 provision.go:177] copyRemoteCerts
	I1030 18:39:34.715567  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 18:39:34.715593  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:34.718337  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.718724  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:34.718750  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.718905  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:34.719124  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:34.719316  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:34.719438  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:39:34.802134  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1030 18:39:34.802247  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1030 18:39:34.830405  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1030 18:39:34.830495  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 18:39:34.853312  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1030 18:39:34.853400  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1030 18:39:34.876622  400041 provision.go:87] duration metric: took 421.460858ms to configureAuth
	I1030 18:39:34.876654  400041 buildroot.go:189] setting minikube options for container-runtime
	I1030 18:39:34.876860  400041 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:39:34.876973  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:34.879465  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.879875  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:34.879918  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.880033  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:34.880249  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:34.880401  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:34.880547  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:34.880711  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:39:34.880901  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1030 18:39:34.880922  400041 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 18:39:35.107739  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 18:39:35.107767  400041 main.go:141] libmachine: Checking connection to Docker...
	I1030 18:39:35.107789  400041 main.go:141] libmachine: (ha-174833) Calling .GetURL
	I1030 18:39:35.109044  400041 main.go:141] libmachine: (ha-174833) DBG | Using libvirt version 6000000
	I1030 18:39:35.111179  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.111531  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:35.111555  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.111678  400041 main.go:141] libmachine: Docker is up and running!
	I1030 18:39:35.111690  400041 main.go:141] libmachine: Reticulating splines...
	I1030 18:39:35.111698  400041 client.go:171] duration metric: took 21.738698891s to LocalClient.Create
	I1030 18:39:35.111719  400041 start.go:167] duration metric: took 21.738765345s to libmachine.API.Create "ha-174833"
	I1030 18:39:35.111730  400041 start.go:293] postStartSetup for "ha-174833" (driver="kvm2")
	I1030 18:39:35.111740  400041 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 18:39:35.111756  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:39:35.111994  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 18:39:35.112024  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:35.114247  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.114535  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:35.114564  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.114645  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:35.114802  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:35.114905  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:35.115037  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:39:35.197105  400041 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 18:39:35.201419  400041 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 18:39:35.201446  400041 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 18:39:35.201521  400041 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 18:39:35.201638  400041 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 18:39:35.201653  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> /etc/ssl/certs/3891442.pem
	I1030 18:39:35.201776  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 18:39:35.211530  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 18:39:35.234121  400041 start.go:296] duration metric: took 122.377861ms for postStartSetup
	I1030 18:39:35.234182  400041 main.go:141] libmachine: (ha-174833) Calling .GetConfigRaw
	I1030 18:39:35.234814  400041 main.go:141] libmachine: (ha-174833) Calling .GetIP
	I1030 18:39:35.237333  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.237649  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:35.237675  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.237930  400041 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/config.json ...
	I1030 18:39:35.238105  400041 start.go:128] duration metric: took 21.882791468s to createHost
	I1030 18:39:35.238129  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:35.240449  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.240793  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:35.240819  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.240925  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:35.241105  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:35.241241  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:35.241360  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:35.241504  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:39:35.241675  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1030 18:39:35.241684  400041 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 18:39:35.343143  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730313575.316321849
	
	I1030 18:39:35.343172  400041 fix.go:216] guest clock: 1730313575.316321849
	I1030 18:39:35.343179  400041 fix.go:229] Guest: 2024-10-30 18:39:35.316321849 +0000 UTC Remote: 2024-10-30 18:39:35.238116722 +0000 UTC m=+21.992904276 (delta=78.205127ms)
	I1030 18:39:35.343224  400041 fix.go:200] guest clock delta is within tolerance: 78.205127ms
	I1030 18:39:35.343236  400041 start.go:83] releasing machines lock for "ha-174833", held for 21.988006549s
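
As context for the clock check above: minikube reads "date +%s.%N" from the guest, compares it against the host's wall clock, and only forces a resync when the delta exceeds a tolerance (here the ~78ms skew passes). A minimal Go sketch of that comparison, with a hypothetical helper name and an assumed tolerance value, not the actual fix.go code:

package main

import (
	"fmt"
	"math"
	"time"
)

// withinTolerance reports whether the guest/host clock skew is acceptable.
// The 2s tolerance is an assumption for illustration; minikube's real threshold may differ.
func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	return delta, math.Abs(float64(delta)) <= float64(tolerance)
}

func main() {
	guest := time.Unix(1730313575, 316321849) // parsed from "date +%s.%N" on the VM
	host := time.Now()
	delta, ok := withinTolerance(guest, host, 2*time.Second)
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, ok)
}
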
	I1030 18:39:35.343264  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:39:35.343537  400041 main.go:141] libmachine: (ha-174833) Calling .GetIP
	I1030 18:39:35.345918  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.346202  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:35.346227  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.346384  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:39:35.346845  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:39:35.347029  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:39:35.347110  400041 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 18:39:35.347154  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:35.347263  400041 ssh_runner.go:195] Run: cat /version.json
	I1030 18:39:35.347290  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:35.349953  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.350154  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.350349  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:35.350372  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.350476  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:35.350518  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.350532  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:35.350712  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:35.350796  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:35.350983  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:35.351005  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:35.351121  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:39:35.351158  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:35.351287  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:39:35.446752  400041 ssh_runner.go:195] Run: systemctl --version
	I1030 18:39:35.452799  400041 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 18:39:35.607404  400041 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 18:39:35.613689  400041 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 18:39:35.613765  400041 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 18:39:35.629322  400041 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 18:39:35.629356  400041 start.go:495] detecting cgroup driver to use...
	I1030 18:39:35.629426  400041 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 18:39:35.645369  400041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 18:39:35.659484  400041 docker.go:217] disabling cri-docker service (if available) ...
	I1030 18:39:35.659560  400041 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 18:39:35.673617  400041 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 18:39:35.686829  400041 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 18:39:35.798982  400041 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 18:39:35.961093  400041 docker.go:233] disabling docker service ...
	I1030 18:39:35.961203  400041 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 18:39:35.975451  400041 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 18:39:35.987814  400041 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 18:39:36.096019  400041 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 18:39:36.200364  400041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 18:39:36.213767  400041 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 18:39:36.231649  400041 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1030 18:39:36.231720  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:39:36.241504  400041 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 18:39:36.241612  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:39:36.251200  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:39:36.260995  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:39:36.270677  400041 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 18:39:36.280585  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:39:36.290337  400041 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:39:36.306289  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
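
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10, set cgroup_manager to "cgroupfs", and re-create conmon_cgroup = "pod" next to it (the default_sysctls injection works the same way and is omitted here). A rough Go sketch of the equivalent string rewrites, for illustration only; applyCrioOverrides is a hypothetical helper, not minikube code:

package main

import (
	"fmt"
	"regexp"
)

// applyCrioOverrides mimics the sed edits from the log on a config snippet.
func applyCrioOverrides(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// drop any existing conmon_cgroup line, then re-add it right after cgroup_manager
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "${1}\nconmon_cgroup = \"pod\"")
	return conf
}

func main() {
	in := "[crio.runtime]\npause_image = \"old\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
	fmt.Print(applyCrioOverrides(in))
}
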
	I1030 18:39:36.316095  400041 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 18:39:36.325059  400041 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 18:39:36.325116  400041 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 18:39:36.338276  400041 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 18:39:36.347428  400041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 18:39:36.458431  400041 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1030 18:39:36.549399  400041 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 18:39:36.549481  400041 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 18:39:36.554177  400041 start.go:563] Will wait 60s for crictl version
	I1030 18:39:36.554235  400041 ssh_runner.go:195] Run: which crictl
	I1030 18:39:36.557819  400041 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 18:39:36.597751  400041 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1030 18:39:36.597863  400041 ssh_runner.go:195] Run: crio --version
	I1030 18:39:36.625326  400041 ssh_runner.go:195] Run: crio --version
	I1030 18:39:36.656926  400041 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1030 18:39:36.658453  400041 main.go:141] libmachine: (ha-174833) Calling .GetIP
	I1030 18:39:36.661076  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:36.661520  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:36.661551  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:36.661753  400041 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1030 18:39:36.665623  400041 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 18:39:36.678283  400041 kubeadm.go:883] updating cluster {Name:ha-174833 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-174833 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1030 18:39:36.678415  400041 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 18:39:36.678476  400041 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 18:39:36.710390  400041 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1030 18:39:36.710476  400041 ssh_runner.go:195] Run: which lz4
	I1030 18:39:36.714335  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1030 18:39:36.714421  400041 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1030 18:39:36.718401  400041 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1030 18:39:36.718426  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1030 18:39:37.991420  400041 crio.go:462] duration metric: took 1.277020496s to copy over tarball
	I1030 18:39:37.991500  400041 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1030 18:39:40.058678  400041 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.067148582s)
	I1030 18:39:40.058707  400041 crio.go:469] duration metric: took 2.067258506s to extract the tarball
	I1030 18:39:40.058717  400041 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1030 18:39:40.095680  400041 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 18:39:40.139024  400041 crio.go:514] all images are preloaded for cri-o runtime.
	I1030 18:39:40.139051  400041 cache_images.go:84] Images are preloaded, skipping loading
	I1030 18:39:40.139060  400041 kubeadm.go:934] updating node { 192.168.39.141 8443 v1.31.2 crio true true} ...
	I1030 18:39:40.139194  400041 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-174833 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.141
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-174833 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1030 18:39:40.139268  400041 ssh_runner.go:195] Run: crio config
	I1030 18:39:40.182736  400041 cni.go:84] Creating CNI manager for ""
	I1030 18:39:40.182762  400041 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1030 18:39:40.182776  400041 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1030 18:39:40.182809  400041 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.141 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-174833 NodeName:ha-174833 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.141"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.141 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1030 18:39:40.182965  400041 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.141
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-174833"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.141"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.141"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1030 18:39:40.182991  400041 kube-vip.go:115] generating kube-vip config ...
	I1030 18:39:40.183041  400041 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1030 18:39:40.198922  400041 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1030 18:39:40.199067  400041 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1030 18:39:40.199141  400041 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1030 18:39:40.208739  400041 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 18:39:40.208814  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1030 18:39:40.217747  400041 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1030 18:39:40.233431  400041 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 18:39:40.249487  400041 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1030 18:39:40.265703  400041 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1030 18:39:40.282041  400041 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1030 18:39:40.285892  400041 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 18:39:40.297652  400041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 18:39:40.407338  400041 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 18:39:40.424747  400041 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833 for IP: 192.168.39.141
	I1030 18:39:40.424777  400041 certs.go:194] generating shared ca certs ...
	I1030 18:39:40.424817  400041 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:39:40.425024  400041 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 18:39:40.425082  400041 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 18:39:40.425095  400041 certs.go:256] generating profile certs ...
	I1030 18:39:40.425172  400041 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.key
	I1030 18:39:40.425193  400041 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.crt with IP's: []
	I1030 18:39:40.472361  400041 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.crt ...
	I1030 18:39:40.472390  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.crt: {Name:mkc5230ad33247edd4a8c72c6c48a87fa9cedd3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:39:40.472564  400041 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.key ...
	I1030 18:39:40.472575  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.key: {Name:mk2476b29598bb2a9232a00c23240eb0f41fcc47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:39:40.472659  400041 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.8a55aae0
	I1030 18:39:40.472675  400041 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.8a55aae0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.141 192.168.39.254]
	I1030 18:39:40.623668  400041 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.8a55aae0 ...
	I1030 18:39:40.623703  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.8a55aae0: {Name:mk527af1a36a41edb105de0ac73afcba6a07951e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:39:40.623865  400041 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.8a55aae0 ...
	I1030 18:39:40.623878  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.8a55aae0: {Name:mk9d3db1edca5a6647774a57300dfc12ee759cac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:39:40.623943  400041 certs.go:381] copying /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.8a55aae0 -> /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt
	I1030 18:39:40.624014  400041 certs.go:385] copying /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.8a55aae0 -> /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key
	I1030 18:39:40.624064  400041 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key
	I1030 18:39:40.624080  400041 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.crt with IP's: []
	I1030 18:39:40.681800  400041 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.crt ...
	I1030 18:39:40.681833  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.crt: {Name:mke6c9a4a487817027f382c9db962d8a5023b692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:39:40.681991  400041 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key ...
	I1030 18:39:40.682001  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key: {Name:mkcef517ac3b25f9738ab0dc212031ff215f0337 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:39:40.682069  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1030 18:39:40.682086  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1030 18:39:40.682097  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1030 18:39:40.682118  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1030 18:39:40.682131  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1030 18:39:40.682142  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1030 18:39:40.682154  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1030 18:39:40.682166  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
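
The profile certificates generated above are ordinary x509 serving certs; the apiserver cert is the notable one because its SAN list includes the HA virtual IP 192.168.39.254 in addition to the node IP 192.168.39.141 and the in-cluster service IPs. A self-contained Go sketch of producing a cert with that SAN set (self-signed here for brevity; minikube actually signs with its minikubeCA, and this snippet is illustrative rather than the real crypto.go code):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the log line above: service IP, loopback, node IP, HA VIP
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.141"),
			net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
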
	I1030 18:39:40.682213  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem (1338 bytes)
	W1030 18:39:40.682246  400041 certs.go:480] ignoring /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144_empty.pem, impossibly tiny 0 bytes
	I1030 18:39:40.682256  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 18:39:40.682279  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 18:39:40.682301  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 18:39:40.682325  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 18:39:40.682365  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 18:39:40.682398  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> /usr/share/ca-certificates/3891442.pem
	I1030 18:39:40.682412  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:39:40.682432  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem -> /usr/share/ca-certificates/389144.pem
	I1030 18:39:40.683028  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 18:39:40.708651  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 18:39:40.731313  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 18:39:40.753734  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 18:39:40.776131  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1030 18:39:40.799436  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1030 18:39:40.822746  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 18:39:40.845786  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1030 18:39:40.869789  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /usr/share/ca-certificates/3891442.pem (1708 bytes)
	I1030 18:39:40.893594  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 18:39:40.916381  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem --> /usr/share/ca-certificates/389144.pem (1338 bytes)
	I1030 18:39:40.939683  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1030 18:39:40.956310  400041 ssh_runner.go:195] Run: openssl version
	I1030 18:39:40.962024  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3891442.pem && ln -fs /usr/share/ca-certificates/3891442.pem /etc/ssl/certs/3891442.pem"
	I1030 18:39:40.972261  400041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3891442.pem
	I1030 18:39:40.976598  400041 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 18:39:40.976650  400041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3891442.pem
	I1030 18:39:40.982403  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3891442.pem /etc/ssl/certs/3ec20f2e.0"
	I1030 18:39:40.992755  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 18:39:41.003221  400041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:39:41.007653  400041 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:39:41.007709  400041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:39:41.013218  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 18:39:41.023594  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/389144.pem && ln -fs /usr/share/ca-certificates/389144.pem /etc/ssl/certs/389144.pem"
	I1030 18:39:41.033911  400041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/389144.pem
	I1030 18:39:41.038607  400041 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 18:39:41.038673  400041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/389144.pem
	I1030 18:39:41.044095  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/389144.pem /etc/ssl/certs/51391683.0"
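
The symlink names used above (51391683.0, b5213941.0, 3ec20f2e.0) are the OpenSSL subject hashes of the respective certificates, which is how tools locate trust anchors under /etc/ssl/certs. A small Go sketch of the same step, shelling out to openssl just as the log commands do (assumes openssl is on PATH; linkBySubjectHash is a hypothetical helper):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash creates the /etc/ssl/certs/<hash>.0 style symlink for certPath.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // symlink already present
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
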
	I1030 18:39:41.054143  400041 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 18:39:41.058096  400041 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1030 18:39:41.058161  400041 kubeadm.go:392] StartCluster: {Name:ha-174833 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-174833 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 18:39:41.058251  400041 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1030 18:39:41.058301  400041 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 18:39:41.095584  400041 cri.go:89] found id: ""
	I1030 18:39:41.095649  400041 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1030 18:39:41.105071  400041 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 18:39:41.114164  400041 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 18:39:41.122895  400041 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 18:39:41.122908  400041 kubeadm.go:157] found existing configuration files:
	
	I1030 18:39:41.122941  400041 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 18:39:41.131529  400041 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 18:39:41.131566  400041 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 18:39:41.140275  400041 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 18:39:41.148757  400041 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 18:39:41.148813  400041 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 18:39:41.160794  400041 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 18:39:41.184302  400041 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 18:39:41.184383  400041 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 18:39:41.207263  400041 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 18:39:41.228026  400041 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 18:39:41.228102  400041 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1030 18:39:41.237111  400041 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1030 18:39:41.445375  400041 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1030 18:39:52.585541  400041 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1030 18:39:52.585616  400041 kubeadm.go:310] [preflight] Running pre-flight checks
	I1030 18:39:52.585710  400041 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1030 18:39:52.585832  400041 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1030 18:39:52.585956  400041 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1030 18:39:52.586025  400041 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1030 18:39:52.587620  400041 out.go:235]   - Generating certificates and keys ...
	I1030 18:39:52.587688  400041 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1030 18:39:52.587761  400041 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1030 18:39:52.587836  400041 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1030 18:39:52.587896  400041 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1030 18:39:52.587987  400041 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1030 18:39:52.588061  400041 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1030 18:39:52.588139  400041 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1030 18:39:52.588270  400041 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-174833 localhost] and IPs [192.168.39.141 127.0.0.1 ::1]
	I1030 18:39:52.588347  400041 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1030 18:39:52.588511  400041 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-174833 localhost] and IPs [192.168.39.141 127.0.0.1 ::1]
	I1030 18:39:52.588616  400041 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1030 18:39:52.588707  400041 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1030 18:39:52.588773  400041 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1030 18:39:52.588839  400041 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1030 18:39:52.588887  400041 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1030 18:39:52.588932  400041 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1030 18:39:52.589004  400041 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1030 18:39:52.589094  400041 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1030 18:39:52.589146  400041 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1030 18:39:52.589229  400041 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1030 18:39:52.589332  400041 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1030 18:39:52.590758  400041 out.go:235]   - Booting up control plane ...
	I1030 18:39:52.590844  400041 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1030 18:39:52.590916  400041 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1030 18:39:52.590968  400041 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1030 18:39:52.591065  400041 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1030 18:39:52.591191  400041 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1030 18:39:52.591253  400041 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1030 18:39:52.591410  400041 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1030 18:39:52.591536  400041 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1030 18:39:52.591616  400041 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.003124871s
	I1030 18:39:52.591709  400041 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1030 18:39:52.591794  400041 kubeadm.go:310] [api-check] The API server is healthy after 5.662047877s
	I1030 18:39:52.591944  400041 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1030 18:39:52.592125  400041 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1030 18:39:52.592192  400041 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1030 18:39:52.592401  400041 kubeadm.go:310] [mark-control-plane] Marking the node ha-174833 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1030 18:39:52.592456  400041 kubeadm.go:310] [bootstrap-token] Using token: g2rz2p.8nzvncljb4xmvqws
	I1030 18:39:52.593760  400041 out.go:235]   - Configuring RBAC rules ...
	I1030 18:39:52.593856  400041 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1030 18:39:52.593940  400041 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1030 18:39:52.594118  400041 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1030 18:39:52.594304  400041 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1030 18:39:52.594473  400041 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1030 18:39:52.594624  400041 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1030 18:39:52.594785  400041 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1030 18:39:52.594849  400041 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1030 18:39:52.594921  400041 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1030 18:39:52.594940  400041 kubeadm.go:310] 
	I1030 18:39:52.594996  400041 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1030 18:39:52.595002  400041 kubeadm.go:310] 
	I1030 18:39:52.595066  400041 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1030 18:39:52.595072  400041 kubeadm.go:310] 
	I1030 18:39:52.595106  400041 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1030 18:39:52.595167  400041 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1030 18:39:52.595211  400041 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1030 18:39:52.595217  400041 kubeadm.go:310] 
	I1030 18:39:52.595262  400041 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1030 18:39:52.595268  400041 kubeadm.go:310] 
	I1030 18:39:52.595323  400041 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1030 18:39:52.595331  400041 kubeadm.go:310] 
	I1030 18:39:52.595374  400041 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1030 18:39:52.595436  400041 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1030 18:39:52.595501  400041 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1030 18:39:52.595508  400041 kubeadm.go:310] 
	I1030 18:39:52.595599  400041 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1030 18:39:52.595699  400041 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1030 18:39:52.595708  400041 kubeadm.go:310] 
	I1030 18:39:52.595831  400041 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token g2rz2p.8nzvncljb4xmvqws \
	I1030 18:39:52.595945  400041 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b2143b9018fc79be6f35b8981f64b262448bd09f7227405c8dafa29481e81491 \
	I1030 18:39:52.595970  400041 kubeadm.go:310] 	--control-plane 
	I1030 18:39:52.595975  400041 kubeadm.go:310] 
	I1030 18:39:52.596043  400041 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1030 18:39:52.596049  400041 kubeadm.go:310] 
	I1030 18:39:52.596119  400041 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token g2rz2p.8nzvncljb4xmvqws \
	I1030 18:39:52.596231  400041 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b2143b9018fc79be6f35b8981f64b262448bd09f7227405c8dafa29481e81491 
	I1030 18:39:52.596243  400041 cni.go:84] Creating CNI manager for ""
	I1030 18:39:52.596250  400041 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1030 18:39:52.597696  400041 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1030 18:39:52.598955  400041 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1030 18:39:52.605469  400041 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1030 18:39:52.605483  400041 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1030 18:39:52.624363  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1030 18:39:53.005173  400041 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1030 18:39:53.005262  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:39:53.005262  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-174833 minikube.k8s.io/updated_at=2024_10_30T18_39_53_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0 minikube.k8s.io/name=ha-174833 minikube.k8s.io/primary=true
	I1030 18:39:53.173403  400041 ops.go:34] apiserver oom_adj: -16
	I1030 18:39:53.173409  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:39:53.674475  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:39:54.173792  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:39:54.673541  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:39:55.174225  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:39:55.674171  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:39:55.765485  400041 kubeadm.go:1113] duration metric: took 2.760286908s to wait for elevateKubeSystemPrivileges
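The repeated "kubectl get sa default" calls above are a simple poll: the default ServiceAccount only exists once kube-controller-manager has created it, so the step retries on a short interval until the command succeeds (about 2.8s in this run). A hedged sketch of the same wait pattern:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// The default ServiceAccount appears once the controller manager is up.
		err := exec.Command("kubectl",
			"--kubeconfig=/var/lib/minikube/kubeconfig",
			"get", "sa", "default").Run()
		if err == nil {
			fmt.Println("default ServiceAccount exists")
			return
		}
		time.Sleep(500 * time.Millisecond) // interval assumed; the log polls roughly twice a second
	}
	panic("timed out waiting for default ServiceAccount")
}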
	I1030 18:39:55.765536  400041 kubeadm.go:394] duration metric: took 14.707379512s to StartCluster
	I1030 18:39:55.765560  400041 settings.go:142] acquiring lock: {Name:mk3778f95b8c1775512fd8244c173f81fde2f8a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:39:55.765652  400041 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 18:39:55.766341  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/kubeconfig: {Name:mkad9ac9beb9742fc1a420e6d4412db8b65962e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:39:55.766618  400041 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1030 18:39:55.766613  400041 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 18:39:55.766643  400041 start.go:241] waiting for startup goroutines ...
	I1030 18:39:55.766652  400041 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1030 18:39:55.766742  400041 addons.go:69] Setting storage-provisioner=true in profile "ha-174833"
	I1030 18:39:55.766762  400041 addons.go:234] Setting addon storage-provisioner=true in "ha-174833"
	I1030 18:39:55.766765  400041 addons.go:69] Setting default-storageclass=true in profile "ha-174833"
	I1030 18:39:55.766787  400041 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-174833"
	I1030 18:39:55.766793  400041 host.go:66] Checking if "ha-174833" exists ...
	I1030 18:39:55.766837  400041 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:39:55.767201  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:39:55.767204  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:39:55.767229  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:39:55.767233  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:39:55.782451  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33151
	I1030 18:39:55.783028  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:39:55.783605  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:39:55.783632  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:39:55.783733  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40119
	I1030 18:39:55.784013  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:39:55.784063  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:39:55.784233  400041 main.go:141] libmachine: (ha-174833) Calling .GetState
	I1030 18:39:55.784551  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:39:55.784576  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:39:55.784948  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:39:55.785512  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:39:55.785543  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:39:55.786284  400041 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 18:39:55.786639  400041 kapi.go:59] client config for ha-174833: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.crt", KeyFile:"/home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.key", CAFile:"/home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439fe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1030 18:39:55.787187  400041 cert_rotation.go:140] Starting client certificate rotation controller
	I1030 18:39:55.787507  400041 addons.go:234] Setting addon default-storageclass=true in "ha-174833"
	I1030 18:39:55.787549  400041 host.go:66] Checking if "ha-174833" exists ...
	I1030 18:39:55.787801  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:39:55.787828  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:39:55.801215  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37805
	I1030 18:39:55.801753  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:39:55.802347  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:39:55.802374  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:39:55.802582  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34105
	I1030 18:39:55.802754  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:39:55.802945  400041 main.go:141] libmachine: (ha-174833) Calling .GetState
	I1030 18:39:55.802995  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:39:55.803462  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:39:55.803485  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:39:55.803870  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:39:55.804468  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:39:55.804521  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:39:55.804806  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:39:55.807396  400041 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 18:39:55.808701  400041 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 18:39:55.808721  400041 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1030 18:39:55.808736  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:55.812067  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:55.812493  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:55.812517  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:55.812683  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:55.812860  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:55.813040  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:55.813181  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:39:55.820594  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34503
	I1030 18:39:55.821053  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:39:55.821596  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:39:55.821614  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:39:55.821907  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:39:55.822100  400041 main.go:141] libmachine: (ha-174833) Calling .GetState
	I1030 18:39:55.823784  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:39:55.824021  400041 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1030 18:39:55.824035  400041 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1030 18:39:55.824050  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:55.826783  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:55.827199  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:55.827215  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:55.827366  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:55.827540  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:55.827698  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:55.827825  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:39:55.887739  400041 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1030 18:39:55.976821  400041 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1030 18:39:55.987770  400041 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 18:39:56.358196  400041 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
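The sed pipeline run at 18:39:55.887739 edits the coredns ConfigMap in place: it inserts a hosts block in front of the "forward . /etc/resolv.conf" line so that host.minikube.internal resolves to the host gateway (192.168.39.1), and adds a log directive before errors. Reconstructed from that sed expression, the injected part of the Corefile reads:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf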
	I1030 18:39:56.358229  400041 main.go:141] libmachine: Making call to close driver server
	I1030 18:39:56.358248  400041 main.go:141] libmachine: (ha-174833) Calling .Close
	I1030 18:39:56.358534  400041 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:39:56.358554  400041 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:39:56.358563  400041 main.go:141] libmachine: Making call to close driver server
	I1030 18:39:56.358570  400041 main.go:141] libmachine: (ha-174833) Calling .Close
	I1030 18:39:56.358835  400041 main.go:141] libmachine: (ha-174833) DBG | Closing plugin on server side
	I1030 18:39:56.358837  400041 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:39:56.358856  400041 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:39:56.358917  400041 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1030 18:39:56.358934  400041 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1030 18:39:56.359097  400041 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1030 18:39:56.359111  400041 round_trippers.go:469] Request Headers:
	I1030 18:39:56.359120  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:39:56.359128  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:39:56.431588  400041 round_trippers.go:574] Response Status: 200 OK in 72 milliseconds
	I1030 18:39:56.432175  400041 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1030 18:39:56.432191  400041 round_trippers.go:469] Request Headers:
	I1030 18:39:56.432198  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:39:56.432202  400041 round_trippers.go:473]     Content-Type: application/json
	I1030 18:39:56.432205  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:39:56.436115  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
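The GET/PUT pair above is the default-storageclass addon listing the existing StorageClasses and then updating the one named standard. A minimal client-go sketch of that kind of update, here setting the conventional is-default-class annotation (an illustration of the API call, not minikube's exact payload):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	// GET the StorageClass named "standard" ...
	sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// ... then PUT it back with the default-class annotation set.
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	if _, err := cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}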
	I1030 18:39:56.436287  400041 main.go:141] libmachine: Making call to close driver server
	I1030 18:39:56.436303  400041 main.go:141] libmachine: (ha-174833) Calling .Close
	I1030 18:39:56.436618  400041 main.go:141] libmachine: (ha-174833) DBG | Closing plugin on server side
	I1030 18:39:56.436664  400041 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:39:56.436672  400041 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:39:56.590846  400041 main.go:141] libmachine: Making call to close driver server
	I1030 18:39:56.590868  400041 main.go:141] libmachine: (ha-174833) Calling .Close
	I1030 18:39:56.591203  400041 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:39:56.591227  400041 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:39:56.591236  400041 main.go:141] libmachine: Making call to close driver server
	I1030 18:39:56.591244  400041 main.go:141] libmachine: (ha-174833) Calling .Close
	I1030 18:39:56.591478  400041 main.go:141] libmachine: (ha-174833) DBG | Closing plugin on server side
	I1030 18:39:56.591507  400041 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:39:56.591514  400041 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:39:56.593000  400041 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1030 18:39:56.594031  400041 addons.go:510] duration metric: took 827.372801ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1030 18:39:56.594084  400041 start.go:246] waiting for cluster config update ...
	I1030 18:39:56.594100  400041 start.go:255] writing updated cluster config ...
	I1030 18:39:56.595822  400041 out.go:201] 
	I1030 18:39:56.597023  400041 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:39:56.597115  400041 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/config.json ...
	I1030 18:39:56.598537  400041 out.go:177] * Starting "ha-174833-m02" control-plane node in "ha-174833" cluster
	I1030 18:39:56.599471  400041 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 18:39:56.599502  400041 cache.go:56] Caching tarball of preloaded images
	I1030 18:39:56.599603  400041 preload.go:172] Found /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1030 18:39:56.599621  400041 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1030 18:39:56.599722  400041 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/config.json ...
	I1030 18:39:56.599927  400041 start.go:360] acquireMachinesLock for ha-174833-m02: {Name:mk35e25f53fa8cfadb39ca0ecdccfc2b3fbe845b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 18:39:56.599988  400041 start.go:364] duration metric: took 32.769µs to acquireMachinesLock for "ha-174833-m02"
	I1030 18:39:56.600025  400041 start.go:93] Provisioning new machine with config: &{Name:ha-174833 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-174833 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 18:39:56.600106  400041 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1030 18:39:56.601604  400041 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1030 18:39:56.601698  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:39:56.601732  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:39:56.616291  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39929
	I1030 18:39:56.616777  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:39:56.617304  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:39:56.617323  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:39:56.617636  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:39:56.617791  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetMachineName
	I1030 18:39:56.617923  400041 main.go:141] libmachine: (ha-174833-m02) Calling .DriverName
	I1030 18:39:56.618073  400041 start.go:159] libmachine.API.Create for "ha-174833" (driver="kvm2")
	I1030 18:39:56.618098  400041 client.go:168] LocalClient.Create starting
	I1030 18:39:56.618131  400041 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem
	I1030 18:39:56.618179  400041 main.go:141] libmachine: Decoding PEM data...
	I1030 18:39:56.618201  400041 main.go:141] libmachine: Parsing certificate...
	I1030 18:39:56.618275  400041 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem
	I1030 18:39:56.618304  400041 main.go:141] libmachine: Decoding PEM data...
	I1030 18:39:56.618320  400041 main.go:141] libmachine: Parsing certificate...
	I1030 18:39:56.618344  400041 main.go:141] libmachine: Running pre-create checks...
	I1030 18:39:56.618355  400041 main.go:141] libmachine: (ha-174833-m02) Calling .PreCreateCheck
	I1030 18:39:56.618511  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetConfigRaw
	I1030 18:39:56.618831  400041 main.go:141] libmachine: Creating machine...
	I1030 18:39:56.618844  400041 main.go:141] libmachine: (ha-174833-m02) Calling .Create
	I1030 18:39:56.618962  400041 main.go:141] libmachine: (ha-174833-m02) Creating KVM machine...
	I1030 18:39:56.620046  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found existing default KVM network
	I1030 18:39:56.620129  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found existing private KVM network mk-ha-174833
	I1030 18:39:56.620269  400041 main.go:141] libmachine: (ha-174833-m02) Setting up store path in /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02 ...
	I1030 18:39:56.620295  400041 main.go:141] libmachine: (ha-174833-m02) Building disk image from file:///home/jenkins/minikube-integration/19883-381834/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1030 18:39:56.620361  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:39:56.620250  400406 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 18:39:56.620446  400041 main.go:141] libmachine: (ha-174833-m02) Downloading /home/jenkins/minikube-integration/19883-381834/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19883-381834/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1030 18:39:56.895932  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:39:56.895765  400406 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02/id_rsa...
	I1030 18:39:57.037260  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:39:57.037116  400406 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02/ha-174833-m02.rawdisk...
	I1030 18:39:57.037293  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Writing magic tar header
	I1030 18:39:57.037303  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Writing SSH key tar header
	I1030 18:39:57.037311  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:39:57.037233  400406 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02 ...
	I1030 18:39:57.037321  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02
	I1030 18:39:57.037404  400041 main.go:141] libmachine: (ha-174833-m02) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02 (perms=drwx------)
	I1030 18:39:57.037429  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube/machines
	I1030 18:39:57.037440  400041 main.go:141] libmachine: (ha-174833-m02) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube/machines (perms=drwxr-xr-x)
	I1030 18:39:57.037453  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 18:39:57.037469  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834
	I1030 18:39:57.037479  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1030 18:39:57.037487  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Checking permissions on dir: /home/jenkins
	I1030 18:39:57.037494  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Checking permissions on dir: /home
	I1030 18:39:57.037515  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Skipping /home - not owner
	I1030 18:39:57.037531  400041 main.go:141] libmachine: (ha-174833-m02) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube (perms=drwxr-xr-x)
	I1030 18:39:57.037546  400041 main.go:141] libmachine: (ha-174833-m02) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834 (perms=drwxrwxr-x)
	I1030 18:39:57.037559  400041 main.go:141] libmachine: (ha-174833-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1030 18:39:57.037569  400041 main.go:141] libmachine: (ha-174833-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1030 18:39:57.037577  400041 main.go:141] libmachine: (ha-174833-m02) Creating domain...
	I1030 18:39:57.038511  400041 main.go:141] libmachine: (ha-174833-m02) define libvirt domain using xml: 
	I1030 18:39:57.038531  400041 main.go:141] libmachine: (ha-174833-m02) <domain type='kvm'>
	I1030 18:39:57.038538  400041 main.go:141] libmachine: (ha-174833-m02)   <name>ha-174833-m02</name>
	I1030 18:39:57.038542  400041 main.go:141] libmachine: (ha-174833-m02)   <memory unit='MiB'>2200</memory>
	I1030 18:39:57.038549  400041 main.go:141] libmachine: (ha-174833-m02)   <vcpu>2</vcpu>
	I1030 18:39:57.038556  400041 main.go:141] libmachine: (ha-174833-m02)   <features>
	I1030 18:39:57.038563  400041 main.go:141] libmachine: (ha-174833-m02)     <acpi/>
	I1030 18:39:57.038569  400041 main.go:141] libmachine: (ha-174833-m02)     <apic/>
	I1030 18:39:57.038577  400041 main.go:141] libmachine: (ha-174833-m02)     <pae/>
	I1030 18:39:57.038587  400041 main.go:141] libmachine: (ha-174833-m02)     
	I1030 18:39:57.038594  400041 main.go:141] libmachine: (ha-174833-m02)   </features>
	I1030 18:39:57.038601  400041 main.go:141] libmachine: (ha-174833-m02)   <cpu mode='host-passthrough'>
	I1030 18:39:57.038605  400041 main.go:141] libmachine: (ha-174833-m02)   
	I1030 18:39:57.038610  400041 main.go:141] libmachine: (ha-174833-m02)   </cpu>
	I1030 18:39:57.038636  400041 main.go:141] libmachine: (ha-174833-m02)   <os>
	I1030 18:39:57.038660  400041 main.go:141] libmachine: (ha-174833-m02)     <type>hvm</type>
	I1030 18:39:57.038683  400041 main.go:141] libmachine: (ha-174833-m02)     <boot dev='cdrom'/>
	I1030 18:39:57.038700  400041 main.go:141] libmachine: (ha-174833-m02)     <boot dev='hd'/>
	I1030 18:39:57.038708  400041 main.go:141] libmachine: (ha-174833-m02)     <bootmenu enable='no'/>
	I1030 18:39:57.038712  400041 main.go:141] libmachine: (ha-174833-m02)   </os>
	I1030 18:39:57.038717  400041 main.go:141] libmachine: (ha-174833-m02)   <devices>
	I1030 18:39:57.038725  400041 main.go:141] libmachine: (ha-174833-m02)     <disk type='file' device='cdrom'>
	I1030 18:39:57.038734  400041 main.go:141] libmachine: (ha-174833-m02)       <source file='/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02/boot2docker.iso'/>
	I1030 18:39:57.038744  400041 main.go:141] libmachine: (ha-174833-m02)       <target dev='hdc' bus='scsi'/>
	I1030 18:39:57.038752  400041 main.go:141] libmachine: (ha-174833-m02)       <readonly/>
	I1030 18:39:57.038764  400041 main.go:141] libmachine: (ha-174833-m02)     </disk>
	I1030 18:39:57.038780  400041 main.go:141] libmachine: (ha-174833-m02)     <disk type='file' device='disk'>
	I1030 18:39:57.038790  400041 main.go:141] libmachine: (ha-174833-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1030 18:39:57.038805  400041 main.go:141] libmachine: (ha-174833-m02)       <source file='/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02/ha-174833-m02.rawdisk'/>
	I1030 18:39:57.038815  400041 main.go:141] libmachine: (ha-174833-m02)       <target dev='hda' bus='virtio'/>
	I1030 18:39:57.038825  400041 main.go:141] libmachine: (ha-174833-m02)     </disk>
	I1030 18:39:57.038832  400041 main.go:141] libmachine: (ha-174833-m02)     <interface type='network'>
	I1030 18:39:57.038844  400041 main.go:141] libmachine: (ha-174833-m02)       <source network='mk-ha-174833'/>
	I1030 18:39:57.038858  400041 main.go:141] libmachine: (ha-174833-m02)       <model type='virtio'/>
	I1030 18:39:57.038874  400041 main.go:141] libmachine: (ha-174833-m02)     </interface>
	I1030 18:39:57.038892  400041 main.go:141] libmachine: (ha-174833-m02)     <interface type='network'>
	I1030 18:39:57.038901  400041 main.go:141] libmachine: (ha-174833-m02)       <source network='default'/>
	I1030 18:39:57.038911  400041 main.go:141] libmachine: (ha-174833-m02)       <model type='virtio'/>
	I1030 18:39:57.038922  400041 main.go:141] libmachine: (ha-174833-m02)     </interface>
	I1030 18:39:57.038931  400041 main.go:141] libmachine: (ha-174833-m02)     <serial type='pty'>
	I1030 18:39:57.038937  400041 main.go:141] libmachine: (ha-174833-m02)       <target port='0'/>
	I1030 18:39:57.038943  400041 main.go:141] libmachine: (ha-174833-m02)     </serial>
	I1030 18:39:57.038948  400041 main.go:141] libmachine: (ha-174833-m02)     <console type='pty'>
	I1030 18:39:57.038955  400041 main.go:141] libmachine: (ha-174833-m02)       <target type='serial' port='0'/>
	I1030 18:39:57.038981  400041 main.go:141] libmachine: (ha-174833-m02)     </console>
	I1030 18:39:57.039004  400041 main.go:141] libmachine: (ha-174833-m02)     <rng model='virtio'>
	I1030 18:39:57.039017  400041 main.go:141] libmachine: (ha-174833-m02)       <backend model='random'>/dev/random</backend>
	I1030 18:39:57.039026  400041 main.go:141] libmachine: (ha-174833-m02)     </rng>
	I1030 18:39:57.039033  400041 main.go:141] libmachine: (ha-174833-m02)     
	I1030 18:39:57.039042  400041 main.go:141] libmachine: (ha-174833-m02)     
	I1030 18:39:57.039050  400041 main.go:141] libmachine: (ha-174833-m02)   </devices>
	I1030 18:39:57.039059  400041 main.go:141] libmachine: (ha-174833-m02) </domain>
	I1030 18:39:57.039073  400041 main.go:141] libmachine: (ha-174833-m02) 
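The XML dumped above is the libvirt domain definition for the m02 node: 2 vCPUs, 2200 MiB of memory, the boot2docker ISO attached as a CD-ROM, the raw disk, and two virtio NICs (the private mk-ha-174833 network plus the default NAT network). Defining and starting such a domain with the Go libvirt bindings looks roughly like this (a sketch assuming the libvirt.org/go/libvirt package; ha-174833-m02.xml stands in for the XML shown above):

package main

import (
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Connect to the system libvirt daemon, as the kvm2 driver does.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// The file holds the <domain type='kvm'> document from the log.
	domainXML, err := os.ReadFile("ha-174833-m02.xml")
	if err != nil {
		panic(err)
	}
	dom, err := conn.DomainDefineXML(string(domainXML))
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	// Start the defined domain (the "Creating domain..." step in the log).
	if err := dom.Create(); err != nil {
		panic(err)
	}
}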
	I1030 18:39:57.045751  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:a3:4c:dc in network default
	I1030 18:39:57.046326  400041 main.go:141] libmachine: (ha-174833-m02) Ensuring networks are active...
	I1030 18:39:57.046349  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:39:57.047038  400041 main.go:141] libmachine: (ha-174833-m02) Ensuring network default is active
	I1030 18:39:57.047398  400041 main.go:141] libmachine: (ha-174833-m02) Ensuring network mk-ha-174833 is active
	I1030 18:39:57.047750  400041 main.go:141] libmachine: (ha-174833-m02) Getting domain xml...
	I1030 18:39:57.048296  400041 main.go:141] libmachine: (ha-174833-m02) Creating domain...
	I1030 18:39:58.272260  400041 main.go:141] libmachine: (ha-174833-m02) Waiting to get IP...
	I1030 18:39:58.273021  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:39:58.273425  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:39:58.273496  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:39:58.273425  400406 retry.go:31] will retry after 283.659874ms: waiting for machine to come up
	I1030 18:39:58.559077  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:39:58.559572  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:39:58.559595  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:39:58.559530  400406 retry.go:31] will retry after 285.421922ms: waiting for machine to come up
	I1030 18:39:58.847321  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:39:58.847766  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:39:58.847795  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:39:58.847719  400406 retry.go:31] will retry after 459.416019ms: waiting for machine to come up
	I1030 18:39:59.308465  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:39:59.308944  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:39:59.309003  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:39:59.308931  400406 retry.go:31] will retry after 572.494843ms: waiting for machine to come up
	I1030 18:39:59.882664  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:39:59.883063  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:39:59.883097  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:39:59.883044  400406 retry.go:31] will retry after 513.18543ms: waiting for machine to come up
	I1030 18:40:00.397389  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:00.397747  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:40:00.397783  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:40:00.397729  400406 retry.go:31] will retry after 755.433082ms: waiting for machine to come up
	I1030 18:40:01.155395  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:01.155948  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:40:01.155979  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:40:01.155903  400406 retry.go:31] will retry after 1.038364995s: waiting for machine to come up
	I1030 18:40:02.195482  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:02.195950  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:40:02.195980  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:40:02.195911  400406 retry.go:31] will retry after 1.004508468s: waiting for machine to come up
	I1030 18:40:03.201825  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:03.202261  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:40:03.202291  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:40:03.202205  400406 retry.go:31] will retry after 1.786384374s: waiting for machine to come up
	I1030 18:40:04.989943  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:04.990350  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:40:04.990371  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:40:04.990297  400406 retry.go:31] will retry after 1.895963981s: waiting for machine to come up
	I1030 18:40:06.888049  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:06.888464  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:40:06.888488  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:40:06.888417  400406 retry.go:31] will retry after 1.948037202s: waiting for machine to come up
	I1030 18:40:08.839488  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:08.839847  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:40:08.839869  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:40:08.839824  400406 retry.go:31] will retry after 3.202281785s: waiting for machine to come up
	I1030 18:40:12.043324  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:12.043675  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:40:12.043695  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:40:12.043618  400406 retry.go:31] will retry after 3.877667252s: waiting for machine to come up
	I1030 18:40:15.924012  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:15.924431  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:40:15.924456  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:40:15.924364  400406 retry.go:31] will retry after 3.471906375s: waiting for machine to come up
	I1030 18:40:19.399252  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.399675  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has current primary IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.399693  400041 main.go:141] libmachine: (ha-174833-m02) Found IP for machine: 192.168.39.67
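The "will retry after ..." lines above form a jittered backoff loop: the driver keeps checking the network's DHCP leases for the new MAC address and waits a little longer (with some randomness) after each miss until an IP appears. A simplified sketch of that wait loop (lookupIP is a hypothetical stand-in for the lease query, not the driver's real function):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a placeholder for querying the libvirt DHCP leases by MAC address.
func lookupIP(mac string) (string, error) {
	return "", errors.New("no lease yet")
}

func main() {
	wait := 250 * time.Millisecond
	deadline := time.Now().Add(5 * time.Minute)
	for time.Now().Before(deadline) {
		if ip, err := lookupIP("52:54:00:87:fa:1a"); err == nil {
			fmt.Println("found IP:", ip)
			return
		}
		// Grow the delay and add jitter, roughly like the retry intervals in the log.
		sleep := wait + time.Duration(rand.Int63n(int64(wait)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		wait = wait * 3 / 2
	}
	panic("timed out waiting for an IP address")
}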
	I1030 18:40:19.399744  400041 main.go:141] libmachine: (ha-174833-m02) Reserving static IP address...
	I1030 18:40:19.400103  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find host DHCP lease matching {name: "ha-174833-m02", mac: "52:54:00:87:fa:1a", ip: "192.168.39.67"} in network mk-ha-174833
	I1030 18:40:19.473268  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Getting to WaitForSSH function...
	I1030 18:40:19.473299  400041 main.go:141] libmachine: (ha-174833-m02) Reserved static IP address: 192.168.39.67
	I1030 18:40:19.473352  400041 main.go:141] libmachine: (ha-174833-m02) Waiting for SSH to be available...
	I1030 18:40:19.476054  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.476545  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:minikube Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:19.476573  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.476733  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Using SSH client type: external
	I1030 18:40:19.476781  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02/id_rsa (-rw-------)
	I1030 18:40:19.476820  400041 main.go:141] libmachine: (ha-174833-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.67 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 18:40:19.476836  400041 main.go:141] libmachine: (ha-174833-m02) DBG | About to run SSH command:
	I1030 18:40:19.476843  400041 main.go:141] libmachine: (ha-174833-m02) DBG | exit 0
	I1030 18:40:19.602200  400041 main.go:141] libmachine: (ha-174833-m02) DBG | SSH cmd err, output: <nil>: 
	I1030 18:40:19.602526  400041 main.go:141] libmachine: (ha-174833-m02) KVM machine creation complete!
	I1030 18:40:19.602867  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetConfigRaw
	I1030 18:40:19.603528  400041 main.go:141] libmachine: (ha-174833-m02) Calling .DriverName
	I1030 18:40:19.603721  400041 main.go:141] libmachine: (ha-174833-m02) Calling .DriverName
	I1030 18:40:19.603921  400041 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1030 18:40:19.603937  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetState
	I1030 18:40:19.605043  400041 main.go:141] libmachine: Detecting operating system of created instance...
	I1030 18:40:19.605054  400041 main.go:141] libmachine: Waiting for SSH to be available...
	I1030 18:40:19.605059  400041 main.go:141] libmachine: Getting to WaitForSSH function...
	I1030 18:40:19.605064  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHHostname
	I1030 18:40:19.607164  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.607533  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:19.607561  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.607643  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHPort
	I1030 18:40:19.607921  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:19.608107  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:19.608292  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHUsername
	I1030 18:40:19.608458  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:40:19.608704  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1030 18:40:19.608730  400041 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1030 18:40:19.709697  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
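Running "exit 0" over SSH, as above, is just a liveness probe: the command succeeds with no output once sshd in the guest accepts the machine's key. A small sketch of the same check using golang.org/x/crypto/ssh (host, user, and key path are taken from the log; error handling kept minimal):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPath := "/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02/id_rsa"
	keyData, err := os.ReadFile(keyPath)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyData)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no in the log
	}
	client, err := ssh.Dial("tcp", "192.168.39.67:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	if err := session.Run("exit 0"); err != nil {
		panic(err)
	}
	fmt.Println("SSH is available")
}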
	I1030 18:40:19.709726  400041 main.go:141] libmachine: Detecting the provisioner...
	I1030 18:40:19.709734  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHHostname
	I1030 18:40:19.712480  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.712863  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:19.712908  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.713089  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHPort
	I1030 18:40:19.713318  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:19.713487  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:19.713620  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHUsername
	I1030 18:40:19.713800  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:40:19.714020  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1030 18:40:19.714034  400041 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1030 18:40:19.823287  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1030 18:40:19.823400  400041 main.go:141] libmachine: found compatible host: buildroot
	I1030 18:40:19.823413  400041 main.go:141] libmachine: Provisioning with buildroot...
	I1030 18:40:19.823424  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetMachineName
	I1030 18:40:19.823703  400041 buildroot.go:166] provisioning hostname "ha-174833-m02"
	I1030 18:40:19.823731  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetMachineName
	I1030 18:40:19.823950  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHHostname
	I1030 18:40:19.826635  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.827060  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:19.827086  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.827137  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHPort
	I1030 18:40:19.827303  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:19.827450  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:19.827602  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHUsername
	I1030 18:40:19.827740  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:40:19.827922  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1030 18:40:19.827936  400041 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-174833-m02 && echo "ha-174833-m02" | sudo tee /etc/hostname
	I1030 18:40:19.945348  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-174833-m02
	
	I1030 18:40:19.945376  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHHostname
	I1030 18:40:19.948392  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.948756  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:19.948806  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.948936  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHPort
	I1030 18:40:19.949124  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:19.949286  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:19.949399  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHUsername
	I1030 18:40:19.949565  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:40:19.949742  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1030 18:40:19.949759  400041 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-174833-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-174833-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-174833-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 18:40:20.059828  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 18:40:20.059870  400041 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 18:40:20.059905  400041 buildroot.go:174] setting up certificates
	I1030 18:40:20.059915  400041 provision.go:84] configureAuth start
	I1030 18:40:20.059930  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetMachineName
	I1030 18:40:20.060203  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetIP
	I1030 18:40:20.062825  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.063237  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:20.063262  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.063417  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHHostname
	I1030 18:40:20.065380  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.065695  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:20.065725  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.065881  400041 provision.go:143] copyHostCerts
	I1030 18:40:20.065925  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 18:40:20.066007  400041 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem, removing ...
	I1030 18:40:20.066020  400041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 18:40:20.066101  400041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 18:40:20.066211  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 18:40:20.066236  400041 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem, removing ...
	I1030 18:40:20.066244  400041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 18:40:20.066288  400041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 18:40:20.066357  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 18:40:20.066380  400041 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem, removing ...
	I1030 18:40:20.066386  400041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 18:40:20.066420  400041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 18:40:20.066508  400041 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.ha-174833-m02 san=[127.0.0.1 192.168.39.67 ha-174833-m02 localhost minikube]
	I1030 18:40:20.314819  400041 provision.go:177] copyRemoteCerts
	I1030 18:40:20.314902  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 18:40:20.314940  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHHostname
	I1030 18:40:20.317541  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.317873  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:20.317916  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.318094  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHPort
	I1030 18:40:20.318304  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:20.318547  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHUsername
	I1030 18:40:20.318726  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02/id_rsa Username:docker}
	I1030 18:40:20.405714  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1030 18:40:20.405820  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 18:40:20.431726  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1030 18:40:20.431798  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1030 18:40:20.455138  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1030 18:40:20.455222  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1030 18:40:20.477773  400041 provision.go:87] duration metric: took 417.842724ms to configureAuth
	I1030 18:40:20.477806  400041 buildroot.go:189] setting minikube options for container-runtime
	I1030 18:40:20.478018  400041 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:40:20.478120  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHHostname
	I1030 18:40:20.480885  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.481224  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:20.481250  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.481424  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHPort
	I1030 18:40:20.481637  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:20.481775  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:20.481966  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHUsername
	I1030 18:40:20.482148  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:40:20.482322  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1030 18:40:20.482338  400041 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 18:40:20.706339  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 18:40:20.706375  400041 main.go:141] libmachine: Checking connection to Docker...
	I1030 18:40:20.706387  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetURL
	I1030 18:40:20.707589  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Using libvirt version 6000000
	I1030 18:40:20.709597  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.709934  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:20.709964  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.710106  400041 main.go:141] libmachine: Docker is up and running!
	I1030 18:40:20.710135  400041 main.go:141] libmachine: Reticulating splines...
	I1030 18:40:20.710147  400041 client.go:171] duration metric: took 24.092036555s to LocalClient.Create
	I1030 18:40:20.710176  400041 start.go:167] duration metric: took 24.092106335s to libmachine.API.Create "ha-174833"
	I1030 18:40:20.710186  400041 start.go:293] postStartSetup for "ha-174833-m02" (driver="kvm2")
	I1030 18:40:20.710195  400041 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 18:40:20.710231  400041 main.go:141] libmachine: (ha-174833-m02) Calling .DriverName
	I1030 18:40:20.710468  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 18:40:20.710503  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHHostname
	I1030 18:40:20.712432  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.712689  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:20.712717  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.712824  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHPort
	I1030 18:40:20.713017  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:20.713185  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHUsername
	I1030 18:40:20.713308  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02/id_rsa Username:docker}
	I1030 18:40:20.793164  400041 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 18:40:20.797557  400041 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 18:40:20.797583  400041 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 18:40:20.797648  400041 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 18:40:20.797720  400041 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 18:40:20.797730  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> /etc/ssl/certs/3891442.pem
	I1030 18:40:20.797807  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 18:40:20.807375  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 18:40:20.830866  400041 start.go:296] duration metric: took 120.664021ms for postStartSetup
	I1030 18:40:20.830929  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetConfigRaw
	I1030 18:40:20.831701  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetIP
	I1030 18:40:20.834714  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.835086  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:20.835116  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.835438  400041 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/config.json ...
	I1030 18:40:20.835668  400041 start.go:128] duration metric: took 24.235548343s to createHost
	I1030 18:40:20.835700  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHHostname
	I1030 18:40:20.837613  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.837888  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:20.837916  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.838041  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHPort
	I1030 18:40:20.838176  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:20.838317  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:20.838450  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHUsername
	I1030 18:40:20.838592  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:40:20.838755  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1030 18:40:20.838765  400041 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 18:40:20.939393  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730313620.914818123
	
	I1030 18:40:20.939419  400041 fix.go:216] guest clock: 1730313620.914818123
	I1030 18:40:20.939430  400041 fix.go:229] Guest: 2024-10-30 18:40:20.914818123 +0000 UTC Remote: 2024-10-30 18:40:20.835684734 +0000 UTC m=+67.590472244 (delta=79.133389ms)
	I1030 18:40:20.939453  400041 fix.go:200] guest clock delta is within tolerance: 79.133389ms
	I1030 18:40:20.939460  400041 start.go:83] releasing machines lock for "ha-174833-m02", held for 24.339459492s
	I1030 18:40:20.939487  400041 main.go:141] libmachine: (ha-174833-m02) Calling .DriverName
	I1030 18:40:20.939802  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetIP
	I1030 18:40:20.942445  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.942801  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:20.942827  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.945268  400041 out.go:177] * Found network options:
	I1030 18:40:20.946721  400041 out.go:177]   - NO_PROXY=192.168.39.141
	W1030 18:40:20.947877  400041 proxy.go:119] fail to check proxy env: Error ip not in block
	I1030 18:40:20.947925  400041 main.go:141] libmachine: (ha-174833-m02) Calling .DriverName
	I1030 18:40:20.948482  400041 main.go:141] libmachine: (ha-174833-m02) Calling .DriverName
	I1030 18:40:20.948657  400041 main.go:141] libmachine: (ha-174833-m02) Calling .DriverName
	I1030 18:40:20.948763  400041 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 18:40:20.948808  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHHostname
	W1030 18:40:20.948877  400041 proxy.go:119] fail to check proxy env: Error ip not in block
	I1030 18:40:20.948974  400041 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 18:40:20.948998  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHHostname
	I1030 18:40:20.951510  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.951591  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.951860  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:20.951890  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.951915  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:20.951926  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.952047  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHPort
	I1030 18:40:20.952193  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHPort
	I1030 18:40:20.952262  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:20.952409  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:20.952476  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHUsername
	I1030 18:40:20.952535  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHUsername
	I1030 18:40:20.952595  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02/id_rsa Username:docker}
	I1030 18:40:20.952723  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02/id_rsa Username:docker}
	I1030 18:40:21.182304  400041 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 18:40:21.188738  400041 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 18:40:21.188808  400041 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 18:40:21.205984  400041 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 18:40:21.206007  400041 start.go:495] detecting cgroup driver to use...
	I1030 18:40:21.206074  400041 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 18:40:21.221839  400041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 18:40:21.235753  400041 docker.go:217] disabling cri-docker service (if available) ...
	I1030 18:40:21.235807  400041 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 18:40:21.249998  400041 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 18:40:21.263401  400041 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 18:40:21.372667  400041 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 18:40:21.535477  400041 docker.go:233] disabling docker service ...
	I1030 18:40:21.535567  400041 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 18:40:21.549384  400041 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 18:40:21.561708  400041 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 18:40:21.680746  400041 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 18:40:21.800498  400041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 18:40:21.815096  400041 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 18:40:21.833550  400041 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1030 18:40:21.833622  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:40:21.843823  400041 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 18:40:21.843902  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:40:21.854106  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:40:21.864400  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:40:21.874387  400041 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 18:40:21.884560  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:40:21.895371  400041 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:40:21.913811  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:40:21.924236  400041 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 18:40:21.933153  400041 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 18:40:21.933202  400041 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 18:40:21.946248  400041 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 18:40:21.955404  400041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 18:40:22.069005  400041 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1030 18:40:22.157442  400041 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 18:40:22.157509  400041 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 18:40:22.162047  400041 start.go:563] Will wait 60s for crictl version
	I1030 18:40:22.162100  400041 ssh_runner.go:195] Run: which crictl
	I1030 18:40:22.165636  400041 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 18:40:22.205156  400041 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1030 18:40:22.205267  400041 ssh_runner.go:195] Run: crio --version
	I1030 18:40:22.231913  400041 ssh_runner.go:195] Run: crio --version
	I1030 18:40:22.261339  400041 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1030 18:40:22.262679  400041 out.go:177]   - env NO_PROXY=192.168.39.141
	I1030 18:40:22.263832  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetIP
	I1030 18:40:22.266556  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:22.266888  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:22.266915  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:22.267123  400041 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1030 18:40:22.271259  400041 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 18:40:22.283359  400041 mustload.go:65] Loading cluster: ha-174833
	I1030 18:40:22.283542  400041 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:40:22.283792  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:40:22.283835  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:40:22.298878  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43999
	I1030 18:40:22.299305  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:40:22.299796  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:40:22.299822  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:40:22.300116  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:40:22.300325  400041 main.go:141] libmachine: (ha-174833) Calling .GetState
	I1030 18:40:22.301824  400041 host.go:66] Checking if "ha-174833" exists ...
	I1030 18:40:22.302129  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:40:22.302167  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:40:22.316968  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45427
	I1030 18:40:22.317445  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:40:22.317883  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:40:22.317906  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:40:22.318227  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:40:22.318396  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:40:22.318552  400041 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833 for IP: 192.168.39.67
	I1030 18:40:22.318566  400041 certs.go:194] generating shared ca certs ...
	I1030 18:40:22.318581  400041 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:40:22.318722  400041 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 18:40:22.318763  400041 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 18:40:22.318772  400041 certs.go:256] generating profile certs ...
	I1030 18:40:22.318861  400041 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.key
	I1030 18:40:22.318884  400041 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.21314801
	I1030 18:40:22.318898  400041 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.21314801 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.141 192.168.39.67 192.168.39.254]
	I1030 18:40:22.389619  400041 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.21314801 ...
	I1030 18:40:22.389649  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.21314801: {Name:mk69c03eb6b5f0b4d0acc4a4891d260deacb4aa2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:40:22.389835  400041 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.21314801 ...
	I1030 18:40:22.389853  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.21314801: {Name:mkc4587720139321b37dc723905edfa912a066e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:40:22.389954  400041 certs.go:381] copying /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.21314801 -> /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt
	I1030 18:40:22.390078  400041 certs.go:385] copying /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.21314801 -> /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key
	I1030 18:40:22.390209  400041 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key
	I1030 18:40:22.390226  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1030 18:40:22.390240  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1030 18:40:22.390253  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1030 18:40:22.390265  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1030 18:40:22.390276  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1030 18:40:22.390291  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1030 18:40:22.390303  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1030 18:40:22.390314  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1030 18:40:22.390363  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem (1338 bytes)
	W1030 18:40:22.390392  400041 certs.go:480] ignoring /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144_empty.pem, impossibly tiny 0 bytes
	I1030 18:40:22.390401  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 18:40:22.390423  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 18:40:22.390447  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 18:40:22.390467  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 18:40:22.390526  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 18:40:22.390551  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:40:22.390567  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem -> /usr/share/ca-certificates/389144.pem
	I1030 18:40:22.390579  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> /usr/share/ca-certificates/3891442.pem
	I1030 18:40:22.390609  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:40:22.393533  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:40:22.393916  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:40:22.393937  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:40:22.394139  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:40:22.394328  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:40:22.394468  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:40:22.394599  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:40:22.466820  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1030 18:40:22.472172  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1030 18:40:22.483413  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1030 18:40:22.487802  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1030 18:40:22.498142  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1030 18:40:22.502005  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1030 18:40:22.511789  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1030 18:40:22.516194  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1030 18:40:22.526092  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1030 18:40:22.530300  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1030 18:40:22.539761  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1030 18:40:22.543659  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1030 18:40:22.554032  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 18:40:22.579429  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 18:40:22.603366  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 18:40:22.627011  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 18:40:22.649824  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1030 18:40:22.675859  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1030 18:40:22.702878  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 18:40:22.729191  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1030 18:40:22.755783  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 18:40:22.781937  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem --> /usr/share/ca-certificates/389144.pem (1338 bytes)
	I1030 18:40:22.806557  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /usr/share/ca-certificates/3891442.pem (1708 bytes)
	I1030 18:40:22.829559  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1030 18:40:22.845492  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1030 18:40:22.861140  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1030 18:40:22.877798  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1030 18:40:22.894364  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1030 18:40:22.910766  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1030 18:40:22.926975  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1030 18:40:22.944058  400041 ssh_runner.go:195] Run: openssl version
	I1030 18:40:22.949888  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/389144.pem && ln -fs /usr/share/ca-certificates/389144.pem /etc/ssl/certs/389144.pem"
	I1030 18:40:22.960383  400041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/389144.pem
	I1030 18:40:22.964756  400041 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 18:40:22.964810  400041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/389144.pem
	I1030 18:40:22.970419  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/389144.pem /etc/ssl/certs/51391683.0"
	I1030 18:40:22.980880  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3891442.pem && ln -fs /usr/share/ca-certificates/3891442.pem /etc/ssl/certs/3891442.pem"
	I1030 18:40:22.991033  400041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3891442.pem
	I1030 18:40:22.995374  400041 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 18:40:22.995440  400041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3891442.pem
	I1030 18:40:23.000879  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3891442.pem /etc/ssl/certs/3ec20f2e.0"
	I1030 18:40:23.011335  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 18:40:23.021800  400041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:40:23.026327  400041 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:40:23.026385  400041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:40:23.032188  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 18:40:23.042278  400041 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 18:40:23.046274  400041 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1030 18:40:23.046324  400041 kubeadm.go:934] updating node {m02 192.168.39.67 8443 v1.31.2 crio true true} ...
	I1030 18:40:23.046424  400041 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-174833-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.67
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-174833 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1030 18:40:23.046460  400041 kube-vip.go:115] generating kube-vip config ...
	I1030 18:40:23.046517  400041 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1030 18:40:23.063163  400041 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1030 18:40:23.063236  400041 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1030 18:40:23.063297  400041 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1030 18:40:23.072465  400041 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1030 18:40:23.072510  400041 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1030 18:40:23.081550  400041 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1030 18:40:23.081576  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1030 18:40:23.081589  400041 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1030 18:40:23.081602  400041 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1030 18:40:23.081619  400041 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1030 18:40:23.085961  400041 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1030 18:40:23.085992  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1030 18:40:24.328288  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1030 18:40:24.328373  400041 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1030 18:40:24.333326  400041 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1030 18:40:24.333359  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1030 18:40:24.830276  400041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 18:40:24.845774  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1030 18:40:24.845893  400041 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1030 18:40:24.850314  400041 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1030 18:40:24.850355  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1030 18:40:25.162230  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1030 18:40:25.172064  400041 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1030 18:40:25.188645  400041 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 18:40:25.204815  400041 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1030 18:40:25.221977  400041 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1030 18:40:25.225934  400041 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 18:40:25.237891  400041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 18:40:25.349561  400041 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 18:40:25.366698  400041 host.go:66] Checking if "ha-174833" exists ...
	I1030 18:40:25.367180  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:40:25.367246  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:40:25.384828  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42387
	I1030 18:40:25.385432  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:40:25.386031  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:40:25.386061  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:40:25.386434  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:40:25.386621  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:40:25.386806  400041 start.go:317] joinCluster: &{Name:ha-174833 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-174833 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 18:40:25.386959  400041 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1030 18:40:25.386986  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:40:25.389976  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:40:25.390481  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:40:25.390522  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:40:25.390674  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:40:25.390889  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:40:25.391033  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:40:25.391170  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:40:25.547459  400041 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 18:40:25.547519  400041 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token t38xof.e3m90xf7qkzka9te --discovery-token-ca-cert-hash sha256:b2143b9018fc79be6f35b8981f64b262448bd09f7227405c8dafa29481e81491 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-174833-m02 --control-plane --apiserver-advertise-address=192.168.39.67 --apiserver-bind-port=8443"
	I1030 18:40:46.568187  400041 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token t38xof.e3m90xf7qkzka9te --discovery-token-ca-cert-hash sha256:b2143b9018fc79be6f35b8981f64b262448bd09f7227405c8dafa29481e81491 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-174833-m02 --control-plane --apiserver-advertise-address=192.168.39.67 --apiserver-bind-port=8443": (21.020635274s)
	I1030 18:40:46.568229  400041 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1030 18:40:47.028345  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-174833-m02 minikube.k8s.io/updated_at=2024_10_30T18_40_47_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0 minikube.k8s.io/name=ha-174833 minikube.k8s.io/primary=false
	I1030 18:40:47.150726  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-174833-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1030 18:40:47.264922  400041 start.go:319] duration metric: took 21.878113098s to joinCluster
	I1030 18:40:47.265016  400041 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 18:40:47.265346  400041 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:40:47.267451  400041 out.go:177] * Verifying Kubernetes components...
	I1030 18:40:47.268676  400041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 18:40:47.482830  400041 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 18:40:47.498911  400041 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 18:40:47.499271  400041 kapi.go:59] client config for ha-174833: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.crt", KeyFile:"/home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.key", CAFile:"/home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439fe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1030 18:40:47.499361  400041 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.141:8443
	I1030 18:40:47.499634  400041 node_ready.go:35] waiting up to 6m0s for node "ha-174833-m02" to be "Ready" ...
	I1030 18:40:47.499754  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:47.499765  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:47.499776  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:47.499780  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:47.513589  400041 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1030 18:40:48.000627  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:48.000717  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:48.000732  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:48.000739  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:48.005027  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:40:48.500527  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:48.500553  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:48.500562  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:48.500566  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:48.507486  400041 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1030 18:40:48.999957  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:48.999981  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:48.999992  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:48.999998  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:49.004072  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:40:49.500009  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:49.500034  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:49.500044  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:49.500049  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:49.503688  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:49.504299  400041 node_ready.go:53] node "ha-174833-m02" has status "Ready":"False"
	I1030 18:40:50.000762  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:50.000787  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:50.000798  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:50.000804  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:50.004710  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:50.500222  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:50.500249  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:50.500261  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:50.500268  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:50.503800  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:50.999915  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:50.999941  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:50.999949  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:50.999953  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:51.003089  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:51.500241  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:51.500270  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:51.500282  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:51.500288  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:51.503181  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:40:52.000665  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:52.000687  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:52.000696  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:52.000701  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:52.004020  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:52.004537  400041 node_ready.go:53] node "ha-174833-m02" has status "Ready":"False"
	I1030 18:40:52.500784  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:52.500807  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:52.500815  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:52.500820  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:52.503534  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:40:53.000339  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:53.000361  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:53.000372  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:53.000377  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:53.003704  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:53.500343  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:53.500365  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:53.500373  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:53.500378  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:53.503510  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:54.000354  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:54.000381  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:54.000395  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:54.000403  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:54.004115  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:54.004763  400041 node_ready.go:53] node "ha-174833-m02" has status "Ready":"False"
	I1030 18:40:54.500127  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:54.500152  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:54.500161  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:54.500166  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:54.503778  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:55.000747  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:55.000778  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:55.000791  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:55.000797  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:55.004570  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:55.500357  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:55.500405  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:55.500415  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:55.500420  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:55.504113  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:56.000848  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:56.000872  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:56.000890  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:56.000895  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:56.005204  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:40:56.006300  400041 node_ready.go:53] node "ha-174833-m02" has status "Ready":"False"
	I1030 18:40:56.500116  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:56.500139  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:56.500149  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:56.500156  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:56.503736  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:57.000020  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:57.000047  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:57.000059  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:57.000064  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:57.003517  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:57.500475  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:57.500507  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:57.500519  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:57.500528  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:57.504454  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:57.999844  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:57.999871  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:57.999880  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:57.999884  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:58.003233  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:58.500239  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:58.500265  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:58.500275  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:58.500280  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:58.503241  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:40:58.504056  400041 node_ready.go:53] node "ha-174833-m02" has status "Ready":"False"
	I1030 18:40:59.000302  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:59.000325  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:59.000335  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:59.000338  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:59.003378  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:59.500257  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:59.500293  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:59.500305  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:59.500311  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:59.503678  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:59.999943  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:59.999974  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:59.999984  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:59.999988  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:00.003694  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:00.499870  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:00.499894  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:00.499903  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:00.499906  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:00.503912  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:00.504852  400041 node_ready.go:53] node "ha-174833-m02" has status "Ready":"False"
	I1030 18:41:01.000256  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:01.000287  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:01.000303  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:01.000310  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:01.004687  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:41:01.500249  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:01.500275  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:01.500286  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:01.500292  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:01.503725  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:02.000125  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:02.000149  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:02.000159  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:02.000163  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:02.003110  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:41:02.500738  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:02.500764  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:02.500774  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:02.500779  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:02.504318  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:02.504919  400041 node_ready.go:53] node "ha-174833-m02" has status "Ready":"False"
	I1030 18:41:03.000323  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:03.000348  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:03.000361  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:03.000369  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:03.003869  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:03.500542  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:03.500568  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:03.500579  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:03.500585  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:03.503602  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:41:04.000594  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:04.000622  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:04.000633  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:04.000639  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:04.003714  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:04.500712  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:04.500736  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:04.500746  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:04.500752  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:04.503791  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:04.999910  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:04.999934  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:04.999943  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:04.999948  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:05.003533  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:05.004088  400041 node_ready.go:53] node "ha-174833-m02" has status "Ready":"False"
	I1030 18:41:05.500597  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:05.500622  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:05.500630  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:05.500639  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:05.503501  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:41:06.000616  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:06.000647  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:06.000659  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:06.000667  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:06.004719  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:41:06.500833  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:06.500855  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:06.500864  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:06.500868  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:06.504070  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:07.000429  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:07.000469  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:07.000481  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:07.000487  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:07.003689  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:07.004389  400041 node_ready.go:53] node "ha-174833-m02" has status "Ready":"False"
	I1030 18:41:07.500634  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:07.500659  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:07.500670  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:07.500676  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:07.503714  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:08.000797  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:08.000823  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.000835  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.000839  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.004162  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:08.500552  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:08.500576  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.500584  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.500588  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.503781  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:08.504368  400041 node_ready.go:49] node "ha-174833-m02" has status "Ready":"True"
	I1030 18:41:08.504387  400041 node_ready.go:38] duration metric: took 21.004733688s for node "ha-174833-m02" to be "Ready" ...
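The half-second cadence of the GET requests above is minikube polling the Node object until its Ready condition flips to True (about 21s here); the per-request logging comes from client-go's round_trippers.go verbose HTTP tracing. A hedged client-go sketch of an equivalent wait follows; the kubeconfig path, node name, and deadline are assumptions taken from the log, and this is not the minikube implementation itself.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the Node's NodeReady condition is True.
    func nodeReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute) // same budget as the "waiting up to 6m0s" line
        for time.Now().Before(deadline) {
            n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-174833-m02", metav1.GetOptions{})
            if err == nil && nodeReady(n) {
                fmt.Println("node is Ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for node to become Ready")
    }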
	I1030 18:41:08.504399  400041 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 18:41:08.504510  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods
	I1030 18:41:08.504522  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.504533  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.504540  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.508519  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:08.514243  400041 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qrkkc" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:08.514348  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-qrkkc
	I1030 18:41:08.514359  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.514370  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.514375  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.517179  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:41:08.518000  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:41:08.518014  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.518021  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.518026  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.520277  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:41:08.520732  400041 pod_ready.go:93] pod "coredns-7c65d6cfc9-qrkkc" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:08.520749  400041 pod_ready.go:82] duration metric: took 6.484522ms for pod "coredns-7c65d6cfc9-qrkkc" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:08.520758  400041 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tnj67" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:08.520818  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-tnj67
	I1030 18:41:08.520826  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.520832  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.520837  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.523187  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:41:08.523748  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:41:08.523763  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.523770  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.523773  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.525598  400041 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 18:41:08.526045  400041 pod_ready.go:93] pod "coredns-7c65d6cfc9-tnj67" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:08.526061  400041 pod_ready.go:82] duration metric: took 5.296844ms for pod "coredns-7c65d6cfc9-tnj67" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:08.526073  400041 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:08.526128  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174833
	I1030 18:41:08.526137  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.526147  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.526155  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.528137  400041 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 18:41:08.528632  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:41:08.528646  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.528653  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.528656  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.530536  400041 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 18:41:08.530970  400041 pod_ready.go:93] pod "etcd-ha-174833" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:08.530985  400041 pod_ready.go:82] duration metric: took 4.904104ms for pod "etcd-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:08.530995  400041 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:08.531044  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174833-m02
	I1030 18:41:08.531054  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.531063  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.531071  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.532895  400041 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 18:41:08.533572  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:08.533585  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.533592  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.533598  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.535476  400041 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 18:41:08.535920  400041 pod_ready.go:93] pod "etcd-ha-174833-m02" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:08.535936  400041 pod_ready.go:82] duration metric: took 4.934707ms for pod "etcd-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:08.535947  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:08.701353  400041 request.go:632] Waited for 165.322436ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174833
	I1030 18:41:08.701427  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174833
	I1030 18:41:08.701434  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.701445  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.701455  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.704722  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:08.900709  400041 request.go:632] Waited for 195.283762ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:41:08.900771  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:41:08.900777  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.900787  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.900793  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.903675  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:41:08.904204  400041 pod_ready.go:93] pod "kube-apiserver-ha-174833" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:08.904224  400041 pod_ready.go:82] duration metric: took 368.270404ms for pod "kube-apiserver-ha-174833" in "kube-system" namespace to be "Ready" ...
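The "Waited ... due to client-side throttling" messages come from client-go's default request rate limiter; the rest.Config dumped earlier shows QPS:0, Burst:0, which means the library defaults (5 QPS, burst of 10 at the time of writing) apply and these bursts of GETs get spaced out. An illustrative sketch of raising those limits on a rest.Config follows; the values are arbitrary examples, not settings minikube uses.

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cfg.QPS = 50    // default is 5 when left at 0
        cfg.Burst = 100 // default is 10 when left at 0
        if _, err := kubernetes.NewForConfig(cfg); err != nil {
            panic(err)
        }
    }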
	I1030 18:41:08.904235  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:09.101325  400041 request.go:632] Waited for 196.99596ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174833-m02
	I1030 18:41:09.101392  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174833-m02
	I1030 18:41:09.101397  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:09.101406  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:09.101414  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:09.104943  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:09.301209  400041 request.go:632] Waited for 195.378832ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:09.301280  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:09.301286  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:09.301294  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:09.301299  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:09.304703  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:09.305150  400041 pod_ready.go:93] pod "kube-apiserver-ha-174833-m02" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:09.305171  400041 pod_ready.go:82] duration metric: took 400.929601ms for pod "kube-apiserver-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:09.305183  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:09.501368  400041 request.go:632] Waited for 196.079315ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174833
	I1030 18:41:09.501455  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174833
	I1030 18:41:09.501468  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:09.501478  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:09.501486  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:09.505228  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:09.701240  400041 request.go:632] Waited for 195.369784ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:41:09.701317  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:41:09.701322  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:09.701331  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:09.701334  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:09.703994  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:41:09.704752  400041 pod_ready.go:93] pod "kube-controller-manager-ha-174833" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:09.704770  400041 pod_ready.go:82] duration metric: took 399.581191ms for pod "kube-controller-manager-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:09.704781  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:09.900901  400041 request.go:632] Waited for 196.026591ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174833-m02
	I1030 18:41:09.900964  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174833-m02
	I1030 18:41:09.900969  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:09.900978  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:09.900983  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:09.904074  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:10.101112  400041 request.go:632] Waited for 196.368613ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:10.101194  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:10.101205  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:10.101214  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:10.101226  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:10.104324  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:10.104744  400041 pod_ready.go:93] pod "kube-controller-manager-ha-174833-m02" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:10.104763  400041 pod_ready.go:82] duration metric: took 399.976925ms for pod "kube-controller-manager-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:10.104774  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2qt2n" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:10.300860  400041 request.go:632] Waited for 196.007769ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2qt2n
	I1030 18:41:10.300943  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2qt2n
	I1030 18:41:10.300949  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:10.300957  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:10.300968  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:10.304042  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:10.501291  400041 request.go:632] Waited for 196.406771ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:41:10.501358  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:41:10.501363  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:10.501372  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:10.501378  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:10.504471  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:10.504946  400041 pod_ready.go:93] pod "kube-proxy-2qt2n" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:10.504966  400041 pod_ready.go:82] duration metric: took 400.186291ms for pod "kube-proxy-2qt2n" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:10.504985  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hg2st" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:10.701128  400041 request.go:632] Waited for 196.042962ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hg2st
	I1030 18:41:10.701198  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hg2st
	I1030 18:41:10.701203  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:10.701211  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:10.701218  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:10.704595  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:10.900756  400041 request.go:632] Waited for 195.290492ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:10.900855  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:10.900861  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:10.900869  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:10.900878  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:10.904332  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:10.904829  400041 pod_ready.go:93] pod "kube-proxy-hg2st" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:10.904849  400041 pod_ready.go:82] duration metric: took 399.858433ms for pod "kube-proxy-hg2st" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:10.904860  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:11.101047  400041 request.go:632] Waited for 196.091867ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174833
	I1030 18:41:11.101112  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174833
	I1030 18:41:11.101117  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:11.101125  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:11.101130  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:11.104800  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:11.300654  400041 request.go:632] Waited for 195.298322ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:41:11.300720  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:41:11.300731  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:11.300740  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:11.300743  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:11.304342  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:11.304796  400041 pod_ready.go:93] pod "kube-scheduler-ha-174833" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:11.304815  400041 pod_ready.go:82] duration metric: took 399.947891ms for pod "kube-scheduler-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:11.304826  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:11.500975  400041 request.go:632] Waited for 196.04993ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174833-m02
	I1030 18:41:11.501040  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174833-m02
	I1030 18:41:11.501045  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:11.501052  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:11.501057  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:11.504438  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:11.701379  400041 request.go:632] Waited for 196.340488ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:11.701443  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:11.701449  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:11.701457  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:11.701462  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:11.704386  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:41:11.704831  400041 pod_ready.go:93] pod "kube-scheduler-ha-174833-m02" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:11.704850  400041 pod_ready.go:82] duration metric: took 400.015715ms for pod "kube-scheduler-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:11.704863  400041 pod_ready.go:39] duration metric: took 3.200450336s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
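Each of the checks above fetches a system-critical pod, inspects its PodReady condition, and then re-reads the node it is scheduled on. A minimal sketch of that per-pod check, assuming a standard kubeconfig; the pod names are copied from the log and the code is illustrative rather than minikube's own.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the Pod's PodReady condition is True.
    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Pod names taken from the log above; any system-critical pod is checked the same way.
        for _, name := range []string{"etcd-ha-174833-m02", "kube-apiserver-ha-174833-m02"} {
            p, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                panic(err)
            }
            fmt.Printf("%s Ready=%v\n", name, podReady(p))
        }
    }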
	I1030 18:41:11.704882  400041 api_server.go:52] waiting for apiserver process to appear ...
	I1030 18:41:11.704944  400041 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 18:41:11.723542  400041 api_server.go:72] duration metric: took 24.458488953s to wait for apiserver process to appear ...
	I1030 18:41:11.723564  400041 api_server.go:88] waiting for apiserver healthz status ...
	I1030 18:41:11.723583  400041 api_server.go:253] Checking apiserver healthz at https://192.168.39.141:8443/healthz ...
	I1030 18:41:11.729129  400041 api_server.go:279] https://192.168.39.141:8443/healthz returned 200:
	ok
	I1030 18:41:11.729191  400041 round_trippers.go:463] GET https://192.168.39.141:8443/version
	I1030 18:41:11.729199  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:11.729206  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:11.729213  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:11.729902  400041 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1030 18:41:11.729987  400041 api_server.go:141] control plane version: v1.31.2
	I1030 18:41:11.730004  400041 api_server.go:131] duration metric: took 6.434971ms to wait for apiserver health ...
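The two probes just logged are a GET on /healthz (which returned "ok") followed by a version read reporting the control plane at v1.31.2. A hedged client-go sketch of the same pair of requests, assuming a standard kubeconfig:

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Raw GET against /healthz, the same endpoint the log checks above.
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
        if err != nil {
            panic(err)
        }
        fmt.Printf("healthz: %s\n", body)
        // Read the server version, matching the GET /version request above.
        v, err := cs.Discovery().ServerVersion()
        if err != nil {
            panic(err)
        }
        fmt.Printf("control plane version: %s\n", v.GitVersion)
    }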
	I1030 18:41:11.730015  400041 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 18:41:11.901454  400041 request.go:632] Waited for 171.341792ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods
	I1030 18:41:11.901536  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods
	I1030 18:41:11.901542  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:11.901550  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:11.901554  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:11.906457  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:41:11.911360  400041 system_pods.go:59] 17 kube-system pods found
	I1030 18:41:11.911389  400041 system_pods.go:61] "coredns-7c65d6cfc9-qrkkc" [3470734c-61ab-4cd9-a026-f07d5ca6a290] Running
	I1030 18:41:11.911396  400041 system_pods.go:61] "coredns-7c65d6cfc9-tnj67" [f869042d-e37b-414d-ab11-422e933e2952] Running
	I1030 18:41:11.911402  400041 system_pods.go:61] "etcd-ha-174833" [af22e3f8-298d-4ab8-ac7d-cf6b915858ce] Running
	I1030 18:41:11.911408  400041 system_pods.go:61] "etcd-ha-174833-m02" [d5fdc5b9-47c3-4308-8b03-e01254a915cd] Running
	I1030 18:41:11.911413  400041 system_pods.go:61] "kindnet-pm48g" [7c2c95da-7659-43c4-aae2-3af6fbe9515c] Running
	I1030 18:41:11.911418  400041 system_pods.go:61] "kindnet-rlzbn" [74a207e7-cdd5-4c43-a668-9a6445d0bacf] Running
	I1030 18:41:11.911424  400041 system_pods.go:61] "kube-apiserver-ha-174833" [e1f4e473-d722-4cbc-a8ae-e80612823895] Running
	I1030 18:41:11.911432  400041 system_pods.go:61] "kube-apiserver-ha-174833-m02" [d2f5f227-a891-4678-b8e8-a51051bc80ac] Running
	I1030 18:41:11.911437  400041 system_pods.go:61] "kube-controller-manager-ha-174833" [b95baef4-2d97-4714-903f-8a63c8f1f647] Running
	I1030 18:41:11.911440  400041 system_pods.go:61] "kube-controller-manager-ha-174833-m02" [5293b489-8a8c-4475-b549-d10e1a631aa6] Running
	I1030 18:41:11.911444  400041 system_pods.go:61] "kube-proxy-2qt2n" [fcc90b13-926f-4a7e-aa12-3e274c0777f6] Running
	I1030 18:41:11.911447  400041 system_pods.go:61] "kube-proxy-hg2st" [99ac908a-6d13-4471-a54b-bede7bde77b7] Running
	I1030 18:41:11.911452  400041 system_pods.go:61] "kube-scheduler-ha-174833" [d0d4afaa-39ee-4b4d-afdb-3a41effecd4c] Running
	I1030 18:41:11.911458  400041 system_pods.go:61] "kube-scheduler-ha-174833-m02" [f0cd9106-8c0d-4c16-a83f-3d5e84d7d813] Running
	I1030 18:41:11.911461  400041 system_pods.go:61] "kube-vip-ha-174833" [c4903552-8a15-4dc2-ba98-c3d10b8f3288] Running
	I1030 18:41:11.911464  400041 system_pods.go:61] "kube-vip-ha-174833-m02" [9d43181d-55d3-482c-bd30-df3059f68edd] Running
	I1030 18:41:11.911467  400041 system_pods.go:61] "storage-provisioner" [8e6d1d5e-1944-4c77-8018-42bc526984c2] Running
	I1030 18:41:11.911474  400041 system_pods.go:74] duration metric: took 181.449525ms to wait for pod list to return data ...
	I1030 18:41:11.911484  400041 default_sa.go:34] waiting for default service account to be created ...
	I1030 18:41:12.100968  400041 request.go:632] Waited for 189.365167ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/default/serviceaccounts
	I1030 18:41:12.101033  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/default/serviceaccounts
	I1030 18:41:12.101038  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:12.101046  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:12.101054  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:12.104878  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:12.105115  400041 default_sa.go:45] found service account: "default"
	I1030 18:41:12.105131  400041 default_sa.go:55] duration metric: took 193.641266ms for default service account to be created ...
	I1030 18:41:12.105141  400041 system_pods.go:116] waiting for k8s-apps to be running ...
	I1030 18:41:12.301355  400041 request.go:632] Waited for 196.109942ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods
	I1030 18:41:12.301420  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods
	I1030 18:41:12.301425  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:12.301433  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:12.301438  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:12.306382  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:41:12.311406  400041 system_pods.go:86] 17 kube-system pods found
	I1030 18:41:12.311437  400041 system_pods.go:89] "coredns-7c65d6cfc9-qrkkc" [3470734c-61ab-4cd9-a026-f07d5ca6a290] Running
	I1030 18:41:12.311446  400041 system_pods.go:89] "coredns-7c65d6cfc9-tnj67" [f869042d-e37b-414d-ab11-422e933e2952] Running
	I1030 18:41:12.311454  400041 system_pods.go:89] "etcd-ha-174833" [af22e3f8-298d-4ab8-ac7d-cf6b915858ce] Running
	I1030 18:41:12.311460  400041 system_pods.go:89] "etcd-ha-174833-m02" [d5fdc5b9-47c3-4308-8b03-e01254a915cd] Running
	I1030 18:41:12.311465  400041 system_pods.go:89] "kindnet-pm48g" [7c2c95da-7659-43c4-aae2-3af6fbe9515c] Running
	I1030 18:41:12.311471  400041 system_pods.go:89] "kindnet-rlzbn" [74a207e7-cdd5-4c43-a668-9a6445d0bacf] Running
	I1030 18:41:12.311477  400041 system_pods.go:89] "kube-apiserver-ha-174833" [e1f4e473-d722-4cbc-a8ae-e80612823895] Running
	I1030 18:41:12.311486  400041 system_pods.go:89] "kube-apiserver-ha-174833-m02" [d2f5f227-a891-4678-b8e8-a51051bc80ac] Running
	I1030 18:41:12.311492  400041 system_pods.go:89] "kube-controller-manager-ha-174833" [b95baef4-2d97-4714-903f-8a63c8f1f647] Running
	I1030 18:41:12.311502  400041 system_pods.go:89] "kube-controller-manager-ha-174833-m02" [5293b489-8a8c-4475-b549-d10e1a631aa6] Running
	I1030 18:41:12.311509  400041 system_pods.go:89] "kube-proxy-2qt2n" [fcc90b13-926f-4a7e-aa12-3e274c0777f6] Running
	I1030 18:41:12.311517  400041 system_pods.go:89] "kube-proxy-hg2st" [99ac908a-6d13-4471-a54b-bede7bde77b7] Running
	I1030 18:41:12.311525  400041 system_pods.go:89] "kube-scheduler-ha-174833" [d0d4afaa-39ee-4b4d-afdb-3a41effecd4c] Running
	I1030 18:41:12.311531  400041 system_pods.go:89] "kube-scheduler-ha-174833-m02" [f0cd9106-8c0d-4c16-a83f-3d5e84d7d813] Running
	I1030 18:41:12.311540  400041 system_pods.go:89] "kube-vip-ha-174833" [c4903552-8a15-4dc2-ba98-c3d10b8f3288] Running
	I1030 18:41:12.311546  400041 system_pods.go:89] "kube-vip-ha-174833-m02" [9d43181d-55d3-482c-bd30-df3059f68edd] Running
	I1030 18:41:12.311554  400041 system_pods.go:89] "storage-provisioner" [8e6d1d5e-1944-4c77-8018-42bc526984c2] Running
	I1030 18:41:12.311563  400041 system_pods.go:126] duration metric: took 206.414957ms to wait for k8s-apps to be running ...
	I1030 18:41:12.311574  400041 system_svc.go:44] waiting for kubelet service to be running ....
	I1030 18:41:12.311636  400041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 18:41:12.327021  400041 system_svc.go:56] duration metric: took 15.42192ms WaitForService to wait for kubelet
	I1030 18:41:12.327057  400041 kubeadm.go:582] duration metric: took 25.062007913s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 18:41:12.327076  400041 node_conditions.go:102] verifying NodePressure condition ...
	I1030 18:41:12.501567  400041 request.go:632] Waited for 174.380598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes
	I1030 18:41:12.501632  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes
	I1030 18:41:12.501638  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:12.501647  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:12.501651  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:12.505969  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:41:12.506702  400041 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 18:41:12.506731  400041 node_conditions.go:123] node cpu capacity is 2
	I1030 18:41:12.506744  400041 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 18:41:12.506747  400041 node_conditions.go:123] node cpu capacity is 2
	I1030 18:41:12.506751  400041 node_conditions.go:105] duration metric: took 179.67107ms to run NodePressure ...
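The NodePressure step reads each node's capacity; the log shows 17734596Ki of ephemeral storage and 2 CPUs per node. An illustrative client-go sketch that lists the nodes and prints those two capacity fields (standard kubeconfig assumed, not minikube's code):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // Capacity is a ResourceList keyed by resource name.
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
        }
    }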
	I1030 18:41:12.506763  400041 start.go:241] waiting for startup goroutines ...
	I1030 18:41:12.506788  400041 start.go:255] writing updated cluster config ...
	I1030 18:41:12.509015  400041 out.go:201] 
	I1030 18:41:12.510595  400041 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:41:12.510702  400041 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/config.json ...
	I1030 18:41:12.512413  400041 out.go:177] * Starting "ha-174833-m03" control-plane node in "ha-174833" cluster
	I1030 18:41:12.513538  400041 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 18:41:12.513560  400041 cache.go:56] Caching tarball of preloaded images
	I1030 18:41:12.513661  400041 preload.go:172] Found /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1030 18:41:12.513676  400041 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1030 18:41:12.513774  400041 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/config.json ...
	I1030 18:41:12.513991  400041 start.go:360] acquireMachinesLock for ha-174833-m03: {Name:mk35e25f53fa8cfadb39ca0ecdccfc2b3fbe845b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 18:41:12.514046  400041 start.go:364] duration metric: took 32.901µs to acquireMachinesLock for "ha-174833-m03"
	I1030 18:41:12.514072  400041 start.go:93] Provisioning new machine with config: &{Name:ha-174833 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-174833 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
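
In the config dump above, the new control-plane node enters the node list as {Name:m03 IP: Port:8443 ...} with an empty IP before the VM exists. A simplified sketch of that shape (field names mirror the dump; the types are illustrative, not minikube's real ClusterConfig):

// Simplified sketch of the node list in the config dump above; the types and
// the append step are illustrative, not minikube's actual structures.
package main

import "fmt"

type Node struct {
	Name              string
	IP                string
	Port              int
	KubernetesVersion string
	ContainerRuntime  string
	ControlPlane      bool
	Worker            bool
}

func main() {
	nodes := []Node{
		{IP: "192.168.39.141", Port: 8443, KubernetesVersion: "v1.31.2", ContainerRuntime: "crio", ControlPlane: true, Worker: true},
		{Name: "m02", IP: "192.168.39.67", Port: 8443, KubernetesVersion: "v1.31.2", ContainerRuntime: "crio", ControlPlane: true, Worker: true},
	}
	// The third control-plane node starts with an empty IP; it is filled in
	// once the kvm2 driver has created the VM and obtained a DHCP lease.
	nodes = append(nodes, Node{Name: "m03", Port: 8443, KubernetesVersion: "v1.31.2", ContainerRuntime: "crio", ControlPlane: true, Worker: true})
	for _, n := range nodes {
		fmt.Printf("%+v\n", n)
	}
}
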
	I1030 18:41:12.514208  400041 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1030 18:41:12.515720  400041 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1030 18:41:12.515810  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:41:12.515845  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:41:12.531298  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38659
	I1030 18:41:12.531779  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:41:12.532302  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:41:12.532328  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:41:12.532695  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:41:12.532932  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetMachineName
	I1030 18:41:12.533094  400041 main.go:141] libmachine: (ha-174833-m03) Calling .DriverName
	I1030 18:41:12.533248  400041 start.go:159] libmachine.API.Create for "ha-174833" (driver="kvm2")
	I1030 18:41:12.533281  400041 client.go:168] LocalClient.Create starting
	I1030 18:41:12.533344  400041 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem
	I1030 18:41:12.533389  400041 main.go:141] libmachine: Decoding PEM data...
	I1030 18:41:12.533410  400041 main.go:141] libmachine: Parsing certificate...
	I1030 18:41:12.533483  400041 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem
	I1030 18:41:12.533512  400041 main.go:141] libmachine: Decoding PEM data...
	I1030 18:41:12.533529  400041 main.go:141] libmachine: Parsing certificate...
	I1030 18:41:12.533556  400041 main.go:141] libmachine: Running pre-create checks...
	I1030 18:41:12.533582  400041 main.go:141] libmachine: (ha-174833-m03) Calling .PreCreateCheck
	I1030 18:41:12.533754  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetConfigRaw
	I1030 18:41:12.534141  400041 main.go:141] libmachine: Creating machine...
	I1030 18:41:12.534155  400041 main.go:141] libmachine: (ha-174833-m03) Calling .Create
	I1030 18:41:12.534316  400041 main.go:141] libmachine: (ha-174833-m03) Creating KVM machine...
	I1030 18:41:12.535469  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found existing default KVM network
	I1030 18:41:12.535689  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found existing private KVM network mk-ha-174833
	I1030 18:41:12.535839  400041 main.go:141] libmachine: (ha-174833-m03) Setting up store path in /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03 ...
	I1030 18:41:12.535890  400041 main.go:141] libmachine: (ha-174833-m03) Building disk image from file:///home/jenkins/minikube-integration/19883-381834/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1030 18:41:12.535946  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:12.535806  400817 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 18:41:12.536022  400041 main.go:141] libmachine: (ha-174833-m03) Downloading /home/jenkins/minikube-integration/19883-381834/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19883-381834/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1030 18:41:12.821754  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:12.821614  400817 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/id_rsa...
	I1030 18:41:12.940970  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:12.940841  400817 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/ha-174833-m03.rawdisk...
	I1030 18:41:12.941002  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Writing magic tar header
	I1030 18:41:12.941016  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Writing SSH key tar header
	I1030 18:41:12.941027  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:12.940965  400817 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03 ...
	I1030 18:41:12.941045  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03
	I1030 18:41:12.941128  400041 main.go:141] libmachine: (ha-174833-m03) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03 (perms=drwx------)
	I1030 18:41:12.941149  400041 main.go:141] libmachine: (ha-174833-m03) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube/machines (perms=drwxr-xr-x)
	I1030 18:41:12.941160  400041 main.go:141] libmachine: (ha-174833-m03) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube (perms=drwxr-xr-x)
	I1030 18:41:12.941183  400041 main.go:141] libmachine: (ha-174833-m03) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834 (perms=drwxrwxr-x)
	I1030 18:41:12.941197  400041 main.go:141] libmachine: (ha-174833-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1030 18:41:12.941212  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube/machines
	I1030 18:41:12.941227  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 18:41:12.941239  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834
	I1030 18:41:12.941248  400041 main.go:141] libmachine: (ha-174833-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1030 18:41:12.941259  400041 main.go:141] libmachine: (ha-174833-m03) Creating domain...
	I1030 18:41:12.941276  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1030 18:41:12.941291  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Checking permissions on dir: /home/jenkins
	I1030 18:41:12.941301  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Checking permissions on dir: /home
	I1030 18:41:12.941315  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Skipping /home - not owner
	I1030 18:41:12.942234  400041 main.go:141] libmachine: (ha-174833-m03) define libvirt domain using xml: 
	I1030 18:41:12.942260  400041 main.go:141] libmachine: (ha-174833-m03) <domain type='kvm'>
	I1030 18:41:12.942270  400041 main.go:141] libmachine: (ha-174833-m03)   <name>ha-174833-m03</name>
	I1030 18:41:12.942277  400041 main.go:141] libmachine: (ha-174833-m03)   <memory unit='MiB'>2200</memory>
	I1030 18:41:12.942286  400041 main.go:141] libmachine: (ha-174833-m03)   <vcpu>2</vcpu>
	I1030 18:41:12.942296  400041 main.go:141] libmachine: (ha-174833-m03)   <features>
	I1030 18:41:12.942305  400041 main.go:141] libmachine: (ha-174833-m03)     <acpi/>
	I1030 18:41:12.942315  400041 main.go:141] libmachine: (ha-174833-m03)     <apic/>
	I1030 18:41:12.942326  400041 main.go:141] libmachine: (ha-174833-m03)     <pae/>
	I1030 18:41:12.942335  400041 main.go:141] libmachine: (ha-174833-m03)     
	I1030 18:41:12.942346  400041 main.go:141] libmachine: (ha-174833-m03)   </features>
	I1030 18:41:12.942353  400041 main.go:141] libmachine: (ha-174833-m03)   <cpu mode='host-passthrough'>
	I1030 18:41:12.942387  400041 main.go:141] libmachine: (ha-174833-m03)   
	I1030 18:41:12.942411  400041 main.go:141] libmachine: (ha-174833-m03)   </cpu>
	I1030 18:41:12.942424  400041 main.go:141] libmachine: (ha-174833-m03)   <os>
	I1030 18:41:12.942433  400041 main.go:141] libmachine: (ha-174833-m03)     <type>hvm</type>
	I1030 18:41:12.942446  400041 main.go:141] libmachine: (ha-174833-m03)     <boot dev='cdrom'/>
	I1030 18:41:12.942456  400041 main.go:141] libmachine: (ha-174833-m03)     <boot dev='hd'/>
	I1030 18:41:12.942469  400041 main.go:141] libmachine: (ha-174833-m03)     <bootmenu enable='no'/>
	I1030 18:41:12.942502  400041 main.go:141] libmachine: (ha-174833-m03)   </os>
	I1030 18:41:12.942521  400041 main.go:141] libmachine: (ha-174833-m03)   <devices>
	I1030 18:41:12.942532  400041 main.go:141] libmachine: (ha-174833-m03)     <disk type='file' device='cdrom'>
	I1030 18:41:12.942543  400041 main.go:141] libmachine: (ha-174833-m03)       <source file='/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/boot2docker.iso'/>
	I1030 18:41:12.942552  400041 main.go:141] libmachine: (ha-174833-m03)       <target dev='hdc' bus='scsi'/>
	I1030 18:41:12.942561  400041 main.go:141] libmachine: (ha-174833-m03)       <readonly/>
	I1030 18:41:12.942566  400041 main.go:141] libmachine: (ha-174833-m03)     </disk>
	I1030 18:41:12.942574  400041 main.go:141] libmachine: (ha-174833-m03)     <disk type='file' device='disk'>
	I1030 18:41:12.942581  400041 main.go:141] libmachine: (ha-174833-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1030 18:41:12.942587  400041 main.go:141] libmachine: (ha-174833-m03)       <source file='/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/ha-174833-m03.rawdisk'/>
	I1030 18:41:12.942606  400041 main.go:141] libmachine: (ha-174833-m03)       <target dev='hda' bus='virtio'/>
	I1030 18:41:12.942619  400041 main.go:141] libmachine: (ha-174833-m03)     </disk>
	I1030 18:41:12.942627  400041 main.go:141] libmachine: (ha-174833-m03)     <interface type='network'>
	I1030 18:41:12.942635  400041 main.go:141] libmachine: (ha-174833-m03)       <source network='mk-ha-174833'/>
	I1030 18:41:12.942648  400041 main.go:141] libmachine: (ha-174833-m03)       <model type='virtio'/>
	I1030 18:41:12.942658  400041 main.go:141] libmachine: (ha-174833-m03)     </interface>
	I1030 18:41:12.942670  400041 main.go:141] libmachine: (ha-174833-m03)     <interface type='network'>
	I1030 18:41:12.942697  400041 main.go:141] libmachine: (ha-174833-m03)       <source network='default'/>
	I1030 18:41:12.942736  400041 main.go:141] libmachine: (ha-174833-m03)       <model type='virtio'/>
	I1030 18:41:12.942764  400041 main.go:141] libmachine: (ha-174833-m03)     </interface>
	I1030 18:41:12.942779  400041 main.go:141] libmachine: (ha-174833-m03)     <serial type='pty'>
	I1030 18:41:12.942790  400041 main.go:141] libmachine: (ha-174833-m03)       <target port='0'/>
	I1030 18:41:12.942802  400041 main.go:141] libmachine: (ha-174833-m03)     </serial>
	I1030 18:41:12.942812  400041 main.go:141] libmachine: (ha-174833-m03)     <console type='pty'>
	I1030 18:41:12.942823  400041 main.go:141] libmachine: (ha-174833-m03)       <target type='serial' port='0'/>
	I1030 18:41:12.942832  400041 main.go:141] libmachine: (ha-174833-m03)     </console>
	I1030 18:41:12.942841  400041 main.go:141] libmachine: (ha-174833-m03)     <rng model='virtio'>
	I1030 18:41:12.942852  400041 main.go:141] libmachine: (ha-174833-m03)       <backend model='random'>/dev/random</backend>
	I1030 18:41:12.942885  400041 main.go:141] libmachine: (ha-174833-m03)     </rng>
	I1030 18:41:12.942907  400041 main.go:141] libmachine: (ha-174833-m03)     
	I1030 18:41:12.942929  400041 main.go:141] libmachine: (ha-174833-m03)     
	I1030 18:41:12.942938  400041 main.go:141] libmachine: (ha-174833-m03)   </devices>
	I1030 18:41:12.942946  400041 main.go:141] libmachine: (ha-174833-m03) </domain>
	I1030 18:41:12.942957  400041 main.go:141] libmachine: (ha-174833-m03) 
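
The kvm2 driver builds the domain XML printed above and asks libvirt to define and start it. As a rough illustration only (the driver talks to libvirt through its API, whereas this sketch shells out to virsh, and the XML path is a placeholder):

// Illustrative only: define and start a domain from an XML file via the virsh
// CLI. The real kvm2 driver uses the libvirt API rather than shelling out.
package main

import (
	"log"
	"os/exec"
)

func main() {
	xmlPath := "/tmp/ha-174833-m03.xml" // assumed path to XML like the block logged above

	if out, err := exec.Command("virsh", "define", xmlPath).CombinedOutput(); err != nil {
		log.Fatalf("virsh define failed: %v\n%s", err, out)
	}
	if out, err := exec.Command("virsh", "start", "ha-174833-m03").CombinedOutput(); err != nil {
		log.Fatalf("virsh start failed: %v\n%s", err, out)
	}
	log.Println("domain defined and started")
}
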
	I1030 18:41:12.949898  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:1a:b3:c5 in network default
	I1030 18:41:12.950445  400041 main.go:141] libmachine: (ha-174833-m03) Ensuring networks are active...
	I1030 18:41:12.950469  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:12.951138  400041 main.go:141] libmachine: (ha-174833-m03) Ensuring network default is active
	I1030 18:41:12.951462  400041 main.go:141] libmachine: (ha-174833-m03) Ensuring network mk-ha-174833 is active
	I1030 18:41:12.951841  400041 main.go:141] libmachine: (ha-174833-m03) Getting domain xml...
	I1030 18:41:12.952538  400041 main.go:141] libmachine: (ha-174833-m03) Creating domain...
	I1030 18:41:14.179359  400041 main.go:141] libmachine: (ha-174833-m03) Waiting to get IP...
	I1030 18:41:14.180307  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:14.180744  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:14.180812  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:14.180741  400817 retry.go:31] will retry after 293.822494ms: waiting for machine to come up
	I1030 18:41:14.476270  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:14.476758  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:14.476784  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:14.476703  400817 retry.go:31] will retry after 283.345671ms: waiting for machine to come up
	I1030 18:41:14.761301  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:14.761803  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:14.761833  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:14.761750  400817 retry.go:31] will retry after 299.766753ms: waiting for machine to come up
	I1030 18:41:15.063146  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:15.063613  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:15.063642  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:15.063557  400817 retry.go:31] will retry after 490.461635ms: waiting for machine to come up
	I1030 18:41:15.557014  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:15.557549  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:15.557577  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:15.557492  400817 retry.go:31] will retry after 739.117277ms: waiting for machine to come up
	I1030 18:41:16.298461  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:16.298926  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:16.298956  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:16.298870  400817 retry.go:31] will retry after 666.546188ms: waiting for machine to come up
	I1030 18:41:16.966687  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:16.967172  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:16.967200  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:16.967117  400817 retry.go:31] will retry after 846.088379ms: waiting for machine to come up
	I1030 18:41:17.814898  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:17.815410  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:17.815440  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:17.815362  400817 retry.go:31] will retry after 1.085711576s: waiting for machine to come up
	I1030 18:41:18.902574  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:18.902922  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:18.902952  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:18.902876  400817 retry.go:31] will retry after 1.834126575s: waiting for machine to come up
	I1030 18:41:20.739528  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:20.739890  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:20.739919  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:20.739850  400817 retry.go:31] will retry after 2.105862328s: waiting for machine to come up
	I1030 18:41:22.847426  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:22.847835  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:22.847867  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:22.847766  400817 retry.go:31] will retry after 2.441796021s: waiting for machine to come up
	I1030 18:41:25.291422  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:25.291864  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:25.291888  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:25.291812  400817 retry.go:31] will retry after 2.18908754s: waiting for machine to come up
	I1030 18:41:27.484272  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:27.484720  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:27.484740  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:27.484674  400817 retry.go:31] will retry after 3.249594938s: waiting for machine to come up
	I1030 18:41:30.735386  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:30.735687  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:30.735711  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:30.735669  400817 retry.go:31] will retry after 5.542117345s: waiting for machine to come up
	I1030 18:41:36.279557  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:36.279987  400041 main.go:141] libmachine: (ha-174833-m03) Found IP for machine: 192.168.39.238
	I1030 18:41:36.280005  400041 main.go:141] libmachine: (ha-174833-m03) Reserving static IP address...
	I1030 18:41:36.280019  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has current primary IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:36.280379  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find host DHCP lease matching {name: "ha-174833-m03", mac: "52:54:00:76:9d:ad", ip: "192.168.39.238"} in network mk-ha-174833
	I1030 18:41:36.353555  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Getting to WaitForSSH function...
	I1030 18:41:36.353581  400041 main.go:141] libmachine: (ha-174833-m03) Reserved static IP address: 192.168.39.238
	I1030 18:41:36.353628  400041 main.go:141] libmachine: (ha-174833-m03) Waiting for SSH to be available...
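
The "will retry after ..." lines above are a polling loop with growing, jittered delays until the guest shows up with an IP. A generic sketch of that pattern (the getIP probe is hypothetical; the delays only roughly match the log):

// Generic backoff-and-retry sketch of the "waiting for machine to come up"
// loop above. getIP is a hypothetical probe; delays grow with light jitter.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func waitForIP(getIP func() (string, error), maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := getIP(); err == nil && ip != "" {
			return ip, nil
		}
		// Jitter the delay and grow it, roughly matching the log's pattern.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", errors.New("timed out waiting for an IP address")
}

func main() {
	// Hypothetical probe that would query the DHCP leases of network mk-ha-174833.
	probe := func() (string, error) { return "", errors.New("no lease yet") }
	if ip, err := waitForIP(probe, 5*time.Second); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("got IP:", ip)
	}
}
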
	I1030 18:41:36.356187  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:36.356543  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833
	I1030 18:41:36.356569  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find defined IP address of network mk-ha-174833 interface with MAC address 52:54:00:76:9d:ad
	I1030 18:41:36.356719  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Using SSH client type: external
	I1030 18:41:36.356745  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/id_rsa (-rw-------)
	I1030 18:41:36.356795  400041 main.go:141] libmachine: (ha-174833-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 18:41:36.356814  400041 main.go:141] libmachine: (ha-174833-m03) DBG | About to run SSH command:
	I1030 18:41:36.356847  400041 main.go:141] libmachine: (ha-174833-m03) DBG | exit 0
	I1030 18:41:36.360778  400041 main.go:141] libmachine: (ha-174833-m03) DBG | SSH cmd err, output: exit status 255: 
	I1030 18:41:36.360804  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1030 18:41:36.360814  400041 main.go:141] libmachine: (ha-174833-m03) DBG | command : exit 0
	I1030 18:41:36.360821  400041 main.go:141] libmachine: (ha-174833-m03) DBG | err     : exit status 255
	I1030 18:41:36.360832  400041 main.go:141] libmachine: (ha-174833-m03) DBG | output  : 
	I1030 18:41:39.361300  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Getting to WaitForSSH function...
	I1030 18:41:39.363671  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.364021  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:39.364051  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.364131  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Using SSH client type: external
	I1030 18:41:39.364170  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/id_rsa (-rw-------)
	I1030 18:41:39.364209  400041 main.go:141] libmachine: (ha-174833-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 18:41:39.364227  400041 main.go:141] libmachine: (ha-174833-m03) DBG | About to run SSH command:
	I1030 18:41:39.364236  400041 main.go:141] libmachine: (ha-174833-m03) DBG | exit 0
	I1030 18:41:39.498991  400041 main.go:141] libmachine: (ha-174833-m03) DBG | SSH cmd err, output: <nil>: 
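
The SSH readiness probe above runs the external ssh client with the exact option list shown in the log and treats a clean `exit 0` as success. A sketch assembling the same argv with os/exec (key path and address copied from the log; not minikube's actual code):

// Sketch of invoking the external ssh client with the option set logged above.
package main

import (
	"log"
	"os/exec"
)

func main() {
	keyPath := "/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/id_rsa"
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "ControlMaster=no",
		"-o", "ControlPath=none",
		"-o", "LogLevel=quiet",
		"-o", "PasswordAuthentication=no",
		"-o", "ServerAliveInterval=60",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"docker@192.168.39.238",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"exit 0", // the readiness probe run in the log
	}
	if out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput(); err != nil {
		log.Fatalf("ssh probe failed: %v\n%s", err, out)
	}
	log.Println("SSH is available")
}
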
	I1030 18:41:39.499302  400041 main.go:141] libmachine: (ha-174833-m03) KVM machine creation complete!
	I1030 18:41:39.499653  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetConfigRaw
	I1030 18:41:39.500359  400041 main.go:141] libmachine: (ha-174833-m03) Calling .DriverName
	I1030 18:41:39.500567  400041 main.go:141] libmachine: (ha-174833-m03) Calling .DriverName
	I1030 18:41:39.500834  400041 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1030 18:41:39.500852  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetState
	I1030 18:41:39.502063  400041 main.go:141] libmachine: Detecting operating system of created instance...
	I1030 18:41:39.502076  400041 main.go:141] libmachine: Waiting for SSH to be available...
	I1030 18:41:39.502081  400041 main.go:141] libmachine: Getting to WaitForSSH function...
	I1030 18:41:39.502086  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHHostname
	I1030 18:41:39.504584  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.504838  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:39.504860  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.505021  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHPort
	I1030 18:41:39.505207  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:39.505352  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:39.505493  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHUsername
	I1030 18:41:39.505642  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:41:39.505855  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1030 18:41:39.505867  400041 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1030 18:41:39.613705  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 18:41:39.613730  400041 main.go:141] libmachine: Detecting the provisioner...
	I1030 18:41:39.613737  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHHostname
	I1030 18:41:39.616442  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.616787  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:39.616812  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.616966  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHPort
	I1030 18:41:39.617171  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:39.617381  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:39.617494  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHUsername
	I1030 18:41:39.617635  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:41:39.617821  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1030 18:41:39.617831  400041 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1030 18:41:39.731009  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1030 18:41:39.731096  400041 main.go:141] libmachine: found compatible host: buildroot
	I1030 18:41:39.731110  400041 main.go:141] libmachine: Provisioning with buildroot...
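
The provisioner is chosen by reading /etc/os-release over SSH and matching the ID field, which is why `ID=buildroot` above selects the buildroot provisioner. A small local sketch of that parse (not the libmachine implementation):

// Sketch: parse an os-release blob like the one above and pick a provisioner name.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func detectProvisioner(osRelease string) string {
	sc := bufio.NewScanner(strings.NewReader(osRelease))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "ID=") {
			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
		}
	}
	return "unknown"
}

func main() {
	sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	fmt.Println(detectProvisioner(sample)) // buildroot
}
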
	I1030 18:41:39.731120  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetMachineName
	I1030 18:41:39.731355  400041 buildroot.go:166] provisioning hostname "ha-174833-m03"
	I1030 18:41:39.731385  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetMachineName
	I1030 18:41:39.731563  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHHostname
	I1030 18:41:39.734727  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.735195  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:39.735225  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.735395  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHPort
	I1030 18:41:39.735599  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:39.735773  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:39.735975  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHUsername
	I1030 18:41:39.736185  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:41:39.736419  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1030 18:41:39.736443  400041 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-174833-m03 && echo "ha-174833-m03" | sudo tee /etc/hostname
	I1030 18:41:39.865251  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-174833-m03
	
	I1030 18:41:39.865295  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHHostname
	I1030 18:41:39.868277  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.868776  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:39.868811  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.868979  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHPort
	I1030 18:41:39.869210  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:39.869426  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:39.869574  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHUsername
	I1030 18:41:39.869780  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:41:39.870007  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1030 18:41:39.870023  400041 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-174833-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-174833-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-174833-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 18:41:39.993047  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
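
The hostname is provisioned by sending the compound shell commands above over SSH: set the kernel hostname, write /etc/hostname, then patch /etc/hosts if needed. A tiny sketch of how the first command string can be built from a node name (string construction only; not minikube's code):

// Sketch: build the hostname-provisioning command shown in the log from a node name.
package main

import "fmt"

func hostnameCmd(name string) string {
	return fmt.Sprintf("sudo hostname %[1]s && echo %[1]q | sudo tee /etc/hostname", name)
}

func main() {
	fmt.Println(hostnameCmd("ha-174833-m03"))
}
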
	I1030 18:41:39.993077  400041 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 18:41:39.993099  400041 buildroot.go:174] setting up certificates
	I1030 18:41:39.993114  400041 provision.go:84] configureAuth start
	I1030 18:41:39.993127  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetMachineName
	I1030 18:41:39.993439  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetIP
	I1030 18:41:39.996433  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.996840  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:39.996869  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.997060  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHHostname
	I1030 18:41:40.000005  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.000422  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:40.000450  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.000565  400041 provision.go:143] copyHostCerts
	I1030 18:41:40.000594  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 18:41:40.000629  400041 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem, removing ...
	I1030 18:41:40.000638  400041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 18:41:40.000698  400041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 18:41:40.000806  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 18:41:40.000825  400041 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem, removing ...
	I1030 18:41:40.000831  400041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 18:41:40.000854  400041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 18:41:40.000910  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 18:41:40.000926  400041 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem, removing ...
	I1030 18:41:40.000932  400041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 18:41:40.000953  400041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 18:41:40.001003  400041 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.ha-174833-m03 san=[127.0.0.1 192.168.39.238 ha-174833-m03 localhost minikube]
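
The server certificate above is issued from the profile's CA with SANs covering 127.0.0.1, the node IP, the hostname, localhost, and minikube. A minimal sketch of that technique with crypto/x509; it creates a throwaway CA in memory instead of loading ca.pem/ca-key.pem, so it is only an illustration, not minikube's provisioning code:

// Minimal sketch of generating a server certificate with the SANs listed above,
// signed by a throwaway in-memory CA (assumption: RSA keys, 24h validity).
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA key pair and self-signed CA certificate.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server key and certificate with the SANs from the log:
	// 127.0.0.1 192.168.39.238 ha-174833-m03 localhost minikube
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-174833-m03"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.238")},
		DNSNames:     []string{"ha-174833-m03", "localhost", "minikube"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
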
	I1030 18:41:40.389110  400041 provision.go:177] copyRemoteCerts
	I1030 18:41:40.389174  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 18:41:40.389201  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHHostname
	I1030 18:41:40.391720  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.392157  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:40.392191  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.392466  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHPort
	I1030 18:41:40.392672  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:40.392854  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHUsername
	I1030 18:41:40.393003  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/id_rsa Username:docker}
	I1030 18:41:40.485464  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1030 18:41:40.485543  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 18:41:40.513241  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1030 18:41:40.513314  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1030 18:41:40.537145  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1030 18:41:40.537239  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1030 18:41:40.562099  400041 provision.go:87] duration metric: took 568.966283ms to configureAuth
	I1030 18:41:40.562136  400041 buildroot.go:189] setting minikube options for container-runtime
	I1030 18:41:40.562357  400041 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:41:40.562450  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHHostname
	I1030 18:41:40.565158  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.565531  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:40.565563  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.565700  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHPort
	I1030 18:41:40.565906  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:40.566083  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:40.566192  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHUsername
	I1030 18:41:40.566349  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:41:40.566539  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1030 18:41:40.566554  400041 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 18:41:40.803791  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 18:41:40.803826  400041 main.go:141] libmachine: Checking connection to Docker...
	I1030 18:41:40.803835  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetURL
	I1030 18:41:40.805073  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Using libvirt version 6000000
	I1030 18:41:40.807111  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.807563  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:40.807592  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.807738  400041 main.go:141] libmachine: Docker is up and running!
	I1030 18:41:40.807756  400041 main.go:141] libmachine: Reticulating splines...
	I1030 18:41:40.807765  400041 client.go:171] duration metric: took 28.27447273s to LocalClient.Create
	I1030 18:41:40.807794  400041 start.go:167] duration metric: took 28.274545509s to libmachine.API.Create "ha-174833"
	I1030 18:41:40.807813  400041 start.go:293] postStartSetup for "ha-174833-m03" (driver="kvm2")
	I1030 18:41:40.807829  400041 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 18:41:40.807854  400041 main.go:141] libmachine: (ha-174833-m03) Calling .DriverName
	I1030 18:41:40.808083  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 18:41:40.808112  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHHostname
	I1030 18:41:40.810446  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.810781  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:40.810810  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.810951  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHPort
	I1030 18:41:40.811117  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:40.811251  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHUsername
	I1030 18:41:40.811374  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/id_rsa Username:docker}
	I1030 18:41:40.898250  400041 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 18:41:40.902639  400041 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 18:41:40.902670  400041 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 18:41:40.902762  400041 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 18:41:40.902838  400041 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 18:41:40.902848  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> /etc/ssl/certs/3891442.pem
	I1030 18:41:40.902930  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 18:41:40.911988  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 18:41:40.936666  400041 start.go:296] duration metric: took 128.83333ms for postStartSetup
	I1030 18:41:40.936732  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetConfigRaw
	I1030 18:41:40.937356  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetIP
	I1030 18:41:40.939940  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.940379  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:40.940406  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.940740  400041 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/config.json ...
	I1030 18:41:40.940959  400041 start.go:128] duration metric: took 28.426739922s to createHost
	I1030 18:41:40.940996  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHHostname
	I1030 18:41:40.943340  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.943659  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:40.943683  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.943787  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHPort
	I1030 18:41:40.943992  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:40.944157  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:40.944299  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHUsername
	I1030 18:41:40.944469  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:41:40.944647  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1030 18:41:40.944657  400041 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 18:41:41.054995  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730313701.035748365
	
	I1030 18:41:41.055025  400041 fix.go:216] guest clock: 1730313701.035748365
	I1030 18:41:41.055036  400041 fix.go:229] Guest: 2024-10-30 18:41:41.035748365 +0000 UTC Remote: 2024-10-30 18:41:40.940974319 +0000 UTC m=+147.695761890 (delta=94.774046ms)
	I1030 18:41:41.055058  400041 fix.go:200] guest clock delta is within tolerance: 94.774046ms
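
The clock check above runs `date +%s.%N` on the guest and compares it against the host-side timestamp taken just before. A tiny sketch of that delta/tolerance comparison, using the two timestamps from the log (the 2s tolerance is an assumption, not minikube's documented value):

// Sketch of the guest-vs-host clock comparison above; the 2s tolerance is assumed.
package main

import (
	"fmt"
	"time"
)

func main() {
	guest, _ := time.Parse(time.RFC3339Nano, "2024-10-30T18:41:41.035748365Z")
	host, _ := time.Parse(time.RFC3339Nano, "2024-10-30T18:41:40.940974319Z")

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= tolerance) // 94.774046ms true
}
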
	I1030 18:41:41.055065  400041 start.go:83] releasing machines lock for "ha-174833-m03", held for 28.541005951s
	I1030 18:41:41.055090  400041 main.go:141] libmachine: (ha-174833-m03) Calling .DriverName
	I1030 18:41:41.055377  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetIP
	I1030 18:41:41.057920  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:41.058257  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:41.058278  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:41.060653  400041 out.go:177] * Found network options:
	I1030 18:41:41.062139  400041 out.go:177]   - NO_PROXY=192.168.39.141,192.168.39.67
	W1030 18:41:41.063472  400041 proxy.go:119] fail to check proxy env: Error ip not in block
	W1030 18:41:41.063496  400041 proxy.go:119] fail to check proxy env: Error ip not in block
	I1030 18:41:41.063508  400041 main.go:141] libmachine: (ha-174833-m03) Calling .DriverName
	I1030 18:41:41.064009  400041 main.go:141] libmachine: (ha-174833-m03) Calling .DriverName
	I1030 18:41:41.064221  400041 main.go:141] libmachine: (ha-174833-m03) Calling .DriverName
	I1030 18:41:41.064313  400041 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 18:41:41.064352  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHHostname
	W1030 18:41:41.064451  400041 proxy.go:119] fail to check proxy env: Error ip not in block
	W1030 18:41:41.064473  400041 proxy.go:119] fail to check proxy env: Error ip not in block
	I1030 18:41:41.064552  400041 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 18:41:41.064575  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHHostname
	I1030 18:41:41.066853  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:41.067199  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:41.067222  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:41.067302  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:41.067479  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHPort
	I1030 18:41:41.067664  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:41.067724  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:41.067749  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:41.067830  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHUsername
	I1030 18:41:41.067906  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHPort
	I1030 18:41:41.067978  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/id_rsa Username:docker}
	I1030 18:41:41.068065  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:41.068181  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHUsername
	I1030 18:41:41.068275  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/id_rsa Username:docker}
	I1030 18:41:41.314636  400041 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 18:41:41.321102  400041 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 18:41:41.321173  400041 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 18:41:41.338442  400041 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 18:41:41.338470  400041 start.go:495] detecting cgroup driver to use...
	I1030 18:41:41.338554  400041 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 18:41:41.355526  400041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 18:41:41.369752  400041 docker.go:217] disabling cri-docker service (if available) ...
	I1030 18:41:41.369824  400041 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 18:41:41.384658  400041 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 18:41:41.399117  400041 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 18:41:41.515988  400041 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 18:41:41.659854  400041 docker.go:233] disabling docker service ...
	I1030 18:41:41.659940  400041 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 18:41:41.675386  400041 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 18:41:41.688521  400041 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 18:41:41.830998  400041 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 18:41:41.962743  400041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 18:41:41.976734  400041 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 18:41:41.998554  400041 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1030 18:41:41.998635  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:41:42.010835  400041 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 18:41:42.010904  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:41:42.022771  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:41:42.033993  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:41:42.044518  400041 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 18:41:42.055581  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:41:42.065838  400041 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:41:42.082685  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
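The block of sed edits above pins CRI-O to the registry.k8s.io/pause:3.10 pause image, switches it to the cgroupfs cgroup manager with conmon in the "pod" cgroup, and opens unprivileged ports via default_sysctls. A minimal verification sketch, assuming shell access to the node; paths and values are taken from the log lines above, not from any minikube API:

# run on ha-174833-m03, e.g. over the same SSH session minikube uses
sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
# expected after the edits above:
#   pause_image = "registry.k8s.io/pause:3.10"
#   cgroup_manager = "cgroupfs"
#   conmon_cgroup = "pod"
#   "net.ipv4.ip_unprivileged_port_start=0",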
	I1030 18:41:42.092911  400041 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 18:41:42.102341  400041 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 18:41:42.102398  400041 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 18:41:42.115321  400041 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
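The failed sysctl above is expected on a fresh VM: the bridge-nf sysctls only exist once br_netfilter is loaded, which is why the next step is a modprobe followed by enabling IPv4 forwarding. A hedged sketch of the same prerequisite checks, runnable on the node:

sudo modprobe br_netfilter                        # makes /proc/sys/net/bridge/* appear
sudo sysctl net.bridge.bridge-nf-call-iptables    # typically reports 1 once the module is loaded
sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
cat /proc/sys/net/ipv4/ip_forward                 # expect: 1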
	I1030 18:41:42.125073  400041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 18:41:42.255762  400041 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1030 18:41:42.348340  400041 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 18:41:42.348402  400041 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 18:41:42.353645  400041 start.go:563] Will wait 60s for crictl version
	I1030 18:41:42.353700  400041 ssh_runner.go:195] Run: which crictl
	I1030 18:41:42.357362  400041 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 18:41:42.403194  400041 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1030 18:41:42.403278  400041 ssh_runner.go:195] Run: crio --version
	I1030 18:41:42.433073  400041 ssh_runner.go:195] Run: crio --version
	I1030 18:41:42.461144  400041 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1030 18:41:42.462700  400041 out.go:177]   - env NO_PROXY=192.168.39.141
	I1030 18:41:42.464361  400041 out.go:177]   - env NO_PROXY=192.168.39.141,192.168.39.67
	I1030 18:41:42.465724  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetIP
	I1030 18:41:42.468442  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:42.468785  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:42.468812  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:42.469009  400041 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1030 18:41:42.473316  400041 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 18:41:42.486401  400041 mustload.go:65] Loading cluster: ha-174833
	I1030 18:41:42.486671  400041 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:41:42.487004  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:41:42.487051  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:41:42.503315  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39623
	I1030 18:41:42.503812  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:41:42.504381  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:41:42.504403  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:41:42.504715  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:41:42.504885  400041 main.go:141] libmachine: (ha-174833) Calling .GetState
	I1030 18:41:42.506310  400041 host.go:66] Checking if "ha-174833" exists ...
	I1030 18:41:42.506684  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:41:42.506729  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:41:42.521795  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36777
	I1030 18:41:42.522246  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:41:42.522834  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:41:42.522857  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:41:42.523225  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:41:42.523429  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:41:42.523593  400041 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833 for IP: 192.168.39.238
	I1030 18:41:42.523605  400041 certs.go:194] generating shared ca certs ...
	I1030 18:41:42.523621  400041 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:41:42.523781  400041 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 18:41:42.523832  400041 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 18:41:42.523846  400041 certs.go:256] generating profile certs ...
	I1030 18:41:42.523984  400041 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.key
	I1030 18:41:42.524022  400041 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.48de31c7
	I1030 18:41:42.524044  400041 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.48de31c7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.141 192.168.39.67 192.168.39.238 192.168.39.254]
	I1030 18:41:42.771082  400041 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.48de31c7 ...
	I1030 18:41:42.771143  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.48de31c7: {Name:mkbb8ab8bf6c18d6d6a31970e3b828800b8fd44f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:41:42.771350  400041 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.48de31c7 ...
	I1030 18:41:42.771369  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.48de31c7: {Name:mk93a1175526096093ebe70ea08ba926787709bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:41:42.771474  400041 certs.go:381] copying /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.48de31c7 -> /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt
	I1030 18:41:42.771640  400041 certs.go:385] copying /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.48de31c7 -> /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key
	I1030 18:41:42.771819  400041 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key
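Because a third control-plane node is joining, the shared kube-apiserver serving certificate is regenerated so its SANs cover the new node's IP (192.168.39.238) alongside the existing nodes, the in-cluster service IPs, and the kube-vip address 192.168.39.254. minikube does this in Go (crypto.go); the following is only an illustrative stand-alone openssl sketch of a certificate with the same SAN list, where the /CN=minikube subject and 365-day lifetime are assumptions rather than values from the log:

openssl req -new -newkey rsa:2048 -nodes -keyout apiserver.key -out apiserver.csr -subj "/CN=minikube"
openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out apiserver.crt \
  -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.39.141,IP:192.168.39.67,IP:192.168.39.238,IP:192.168.39.254')
openssl x509 -in apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'   # confirm every IP is present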
	I1030 18:41:42.771839  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1030 18:41:42.771859  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1030 18:41:42.771878  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1030 18:41:42.771897  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1030 18:41:42.771916  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1030 18:41:42.771935  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1030 18:41:42.771953  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1030 18:41:42.786601  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1030 18:41:42.786716  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem (1338 bytes)
	W1030 18:41:42.786768  400041 certs.go:480] ignoring /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144_empty.pem, impossibly tiny 0 bytes
	I1030 18:41:42.786783  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 18:41:42.786818  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 18:41:42.786855  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 18:41:42.786886  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 18:41:42.786944  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 18:41:42.786987  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> /usr/share/ca-certificates/3891442.pem
	I1030 18:41:42.787011  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:41:42.787031  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem -> /usr/share/ca-certificates/389144.pem
	I1030 18:41:42.787082  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:41:42.790022  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:41:42.790433  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:41:42.790463  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:41:42.790635  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:41:42.790863  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:41:42.791005  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:41:42.791117  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:41:42.862993  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1030 18:41:42.869116  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1030 18:41:42.881084  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1030 18:41:42.885608  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1030 18:41:42.896066  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1030 18:41:42.900395  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1030 18:41:42.911415  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1030 18:41:42.915680  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1030 18:41:42.926002  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1030 18:41:42.929978  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1030 18:41:42.939948  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1030 18:41:42.944073  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1030 18:41:42.954991  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 18:41:42.979919  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 18:41:43.004284  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 18:41:43.027671  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 18:41:43.050807  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1030 18:41:43.073405  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1030 18:41:43.097875  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 18:41:43.121491  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1030 18:41:43.145484  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /usr/share/ca-certificates/3891442.pem (1708 bytes)
	I1030 18:41:43.169567  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 18:41:43.194113  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem --> /usr/share/ca-certificates/389144.pem (1338 bytes)
	I1030 18:41:43.217839  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1030 18:41:43.235214  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1030 18:41:43.251678  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1030 18:41:43.267891  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1030 18:41:43.283793  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1030 18:41:43.301477  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1030 18:41:43.319112  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1030 18:41:43.336222  400041 ssh_runner.go:195] Run: openssl version
	I1030 18:41:43.342021  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 18:41:43.353281  400041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:41:43.357881  400041 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:41:43.357947  400041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:41:43.363573  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 18:41:43.375497  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/389144.pem && ln -fs /usr/share/ca-certificates/389144.pem /etc/ssl/certs/389144.pem"
	I1030 18:41:43.389049  400041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/389144.pem
	I1030 18:41:43.393551  400041 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 18:41:43.393616  400041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/389144.pem
	I1030 18:41:43.399295  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/389144.pem /etc/ssl/certs/51391683.0"
	I1030 18:41:43.411090  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3891442.pem && ln -fs /usr/share/ca-certificates/3891442.pem /etc/ssl/certs/3891442.pem"
	I1030 18:41:43.422010  400041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3891442.pem
	I1030 18:41:43.426629  400041 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 18:41:43.426687  400041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3891442.pem
	I1030 18:41:43.432334  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3891442.pem /etc/ssl/certs/3ec20f2e.0"
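The three ln -fs commands above wire the minikube CA and the per-user certs into OpenSSL's hash-based trust directory: the link name is the certificate's subject-name hash plus ".0". A small sketch showing where a name like b5213941.0 comes from (any PEM certificate works):

hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
echo "$hash"                                                                   # b5213941 in the log above
sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
openssl verify -CApath /etc/ssl/certs /var/lib/minikube/certs/apiserver.crt    # should now chain to the minikube CA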
	I1030 18:41:43.443256  400041 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 18:41:43.447278  400041 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1030 18:41:43.447336  400041 kubeadm.go:934] updating node {m03 192.168.39.238 8443 v1.31.2 crio true true} ...
	I1030 18:41:43.447423  400041 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-174833-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-174833 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1030 18:41:43.447453  400041 kube-vip.go:115] generating kube-vip config ...
	I1030 18:41:43.447481  400041 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1030 18:41:43.463867  400041 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1030 18:41:43.463938  400041 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
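The generated manifest above runs kube-vip as a static pod on each control-plane node; whichever instance wins the plndr-cp-lock lease binds the 192.168.39.254 VIP on eth0 and forwards API traffic on port 8443. A quick, hedged check on the current leader (the VIP is only visible on one node at a time):

ip addr show eth0 | grep 192.168.39.254   # the VIP from the manifest above
sudo crictl ps --name kube-vip            # the static pod should be Running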
	I1030 18:41:43.463993  400041 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1030 18:41:43.474999  400041 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1030 18:41:43.475044  400041 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1030 18:41:43.485456  400041 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1030 18:41:43.485479  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1030 18:41:43.485517  400041 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1030 18:41:43.485533  400041 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1030 18:41:43.485545  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1030 18:41:43.485517  400041 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1030 18:41:43.485603  400041 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1030 18:41:43.485621  400041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 18:41:43.504131  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1030 18:41:43.504186  400041 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1030 18:41:43.504223  400041 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1030 18:41:43.504222  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1030 18:41:43.504237  400041 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1030 18:41:43.504267  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1030 18:41:43.522121  400041 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1030 18:41:43.522169  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
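The binaries are fetched through minikube's cache with a ?checksum=file:...sha256 annotation, i.e. each download is verified against the published SHA-256 before being copied into /var/lib/minikube/binaries/v1.31.2. A hand-rolled equivalent for one binary (kubelet; kubeadm and kubectl follow the same pattern):

KVER=v1.31.2; ARCH=amd64
curl -fsSLO "https://dl.k8s.io/release/${KVER}/bin/linux/${ARCH}/kubelet"
curl -fsSL "https://dl.k8s.io/release/${KVER}/bin/linux/${ARCH}/kubelet.sha256" -o kubelet.sha256
echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check      # must print: kubelet: OK
sudo install -m 0755 kubelet "/var/lib/minikube/binaries/${KVER}/kubelet"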
	I1030 18:41:44.375482  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1030 18:41:44.387138  400041 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1030 18:41:44.405486  400041 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 18:41:44.422728  400041 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1030 18:41:44.439060  400041 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1030 18:41:44.443074  400041 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 18:41:44.455364  400041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 18:41:44.570256  400041 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 18:41:44.588522  400041 host.go:66] Checking if "ha-174833" exists ...
	I1030 18:41:44.589080  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:41:44.589146  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:41:44.605625  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37241
	I1030 18:41:44.606088  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:41:44.606626  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:41:44.606648  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:41:44.607023  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:41:44.607225  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:41:44.607369  400041 start.go:317] joinCluster: &{Name:ha-174833 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-174833 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 18:41:44.607505  400041 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1030 18:41:44.607526  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:41:44.610554  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:41:44.611109  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:41:44.611135  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:41:44.611433  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:41:44.611606  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:41:44.611760  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:41:44.611885  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:41:44.773784  400041 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 18:41:44.773850  400041 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token jbea4g.48iwo9ov7hdxpf8l --discovery-token-ca-cert-hash sha256:b2143b9018fc79be6f35b8981f64b262448bd09f7227405c8dafa29481e81491 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-174833-m03 --control-plane --apiserver-advertise-address=192.168.39.238 --apiserver-bind-port=8443"
	I1030 18:42:06.433926  400041 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token jbea4g.48iwo9ov7hdxpf8l --discovery-token-ca-cert-hash sha256:b2143b9018fc79be6f35b8981f64b262448bd09f7227405c8dafa29481e81491 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-174833-m03 --control-plane --apiserver-advertise-address=192.168.39.238 --apiserver-bind-port=8443": (21.660034767s)
	I1030 18:42:06.433968  400041 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1030 18:42:06.995847  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-174833-m03 minikube.k8s.io/updated_at=2024_10_30T18_42_06_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0 minikube.k8s.io/name=ha-174833 minikube.k8s.io/primary=false
	I1030 18:42:07.135527  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-174833-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1030 18:42:07.266435  400041 start.go:319] duration metric: took 22.659060991s to joinCluster
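The join above is the standard way to grow an HA control plane: the primary mints a join command (token plus CA cert hash), the new node runs it with --control-plane, and minikube then labels the node and removes the control-plane NoSchedule taint. No --certificate-key is needed here because the shared CA material was already copied to the node over SSH in the earlier scp steps. A condensed, hedged sketch of the same flow, with placeholders instead of the real token and hash:

# on an existing control-plane node
sudo kubeadm token create --print-join-command --ttl=0
# on the joining node, using the printed token/hash (flags mirror the logged command)
sudo kubeadm join control-plane.minikube.internal:8443 \
  --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
  --control-plane --apiserver-advertise-address=192.168.39.238 --apiserver-bind-port=8443 \
  --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-174833-m03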
	I1030 18:42:07.266542  400041 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 18:42:07.266874  400041 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:42:07.267989  400041 out.go:177] * Verifying Kubernetes components...
	I1030 18:42:07.269832  400041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 18:42:07.538532  400041 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 18:42:07.566640  400041 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 18:42:07.566990  400041 kapi.go:59] client config for ha-174833: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.crt", KeyFile:"/home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.key", CAFile:"/home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439fe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1030 18:42:07.567153  400041 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.141:8443
	I1030 18:42:07.567517  400041 node_ready.go:35] waiting up to 6m0s for node "ha-174833-m03" to be "Ready" ...
	I1030 18:42:07.567636  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:07.567647  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:07.567658  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:07.567663  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:07.571044  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:08.067840  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:08.067866  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:08.067875  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:08.067880  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:08.071548  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:08.568423  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:08.568445  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:08.568456  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:08.568468  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:08.572275  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:09.068213  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:09.068244  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:09.068255  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:09.068261  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:09.072412  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:42:09.568601  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:09.568687  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:09.568704  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:09.568711  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:09.572953  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:42:09.573669  400041 node_ready.go:53] node "ha-174833-m03" has status "Ready":"False"
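The round_trippers lines around here are minikube polling GET /api/v1/nodes/ha-174833-m03 roughly every 500ms until the node's Ready condition turns True (the node stays NotReady until kubelet and the CNI settle). Equivalent one-shot checks from the host, assuming the kubeconfig already points at the ha-174833 cluster:

kubectl get node ha-174833-m03 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
kubectl wait --for=condition=Ready node/ha-174833-m03 --timeout=6m0s   # blocking form of the same loop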
	I1030 18:42:10.068646  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:10.068674  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:10.068686  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:10.068690  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:10.072592  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:10.568186  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:10.568212  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:10.568228  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:10.568234  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:10.571345  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:11.068394  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:11.068419  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:11.068430  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:11.068435  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:11.071353  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:42:11.568540  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:11.568569  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:11.568581  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:11.568586  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:11.571615  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:12.068128  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:12.068184  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:12.068198  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:12.068204  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:12.072054  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:12.072920  400041 node_ready.go:53] node "ha-174833-m03" has status "Ready":"False"
	I1030 18:42:12.568764  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:12.568788  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:12.568799  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:12.568804  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:12.572509  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:13.067810  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:13.067840  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:13.067852  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:13.067858  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:13.072370  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:42:13.568096  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:13.568118  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:13.568127  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:13.568130  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:13.571713  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:14.068692  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:14.068715  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:14.068724  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:14.068728  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:14.072113  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:14.073045  400041 node_ready.go:53] node "ha-174833-m03" has status "Ready":"False"
	I1030 18:42:14.568414  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:14.568441  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:14.568458  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:14.568463  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:14.571979  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:15.067728  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:15.067752  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:15.067760  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:15.067764  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:15.079108  400041 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1030 18:42:15.568483  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:15.568509  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:15.568518  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:15.568523  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:15.571981  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:16.067933  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:16.067953  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:16.067962  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:16.067965  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:16.071179  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:16.568646  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:16.568671  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:16.568684  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:16.568691  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:16.571923  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:16.572720  400041 node_ready.go:53] node "ha-174833-m03" has status "Ready":"False"
	I1030 18:42:17.068520  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:17.068545  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:17.068561  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:17.068566  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:17.072118  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:17.568073  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:17.568108  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:17.568118  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:17.568123  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:17.571265  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:18.068409  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:18.068434  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:18.068442  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:18.068447  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:18.071717  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:18.568497  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:18.568527  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:18.568540  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:18.568546  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:18.571867  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:19.067827  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:19.067850  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:19.067859  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:19.067863  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:19.070951  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:19.071706  400041 node_ready.go:53] node "ha-174833-m03" has status "Ready":"False"
	I1030 18:42:19.568087  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:19.568110  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:19.568119  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:19.568122  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:19.571495  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:20.068028  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:20.068053  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:20.068064  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:20.068071  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:20.071582  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:20.568136  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:20.568161  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:20.568169  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:20.568174  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:20.571551  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:21.068612  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:21.068640  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:21.068652  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:21.068657  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:21.072026  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:21.072659  400041 node_ready.go:53] node "ha-174833-m03" has status "Ready":"False"
	I1030 18:42:21.568033  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:21.568055  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:21.568064  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:21.568069  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:21.571332  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:22.067937  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:22.067961  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:22.067970  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:22.067976  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:22.071718  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:22.568117  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:22.568139  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:22.568147  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:22.568155  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:22.571493  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:23.068511  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:23.068548  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:23.068558  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:23.068562  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:23.071664  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:23.568675  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:23.568699  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:23.568707  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:23.568711  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:23.571937  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:23.572572  400041 node_ready.go:53] node "ha-174833-m03" has status "Ready":"False"
	I1030 18:42:24.067899  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:24.067922  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:24.067931  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:24.067934  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:24.071366  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:24.568317  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:24.568342  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:24.568351  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:24.568355  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:24.571501  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:25.067773  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:25.067796  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:25.067803  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:25.067806  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:25.071344  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:25.568753  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:25.568775  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:25.568783  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:25.568787  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:25.572126  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:25.572899  400041 node_ready.go:53] node "ha-174833-m03" has status "Ready":"False"
	I1030 18:42:26.068223  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:26.068246  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.068257  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.068262  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.072464  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:42:26.073313  400041 node_ready.go:49] node "ha-174833-m03" has status "Ready":"True"
	I1030 18:42:26.073333  400041 node_ready.go:38] duration metric: took 18.505796326s for node "ha-174833-m03" to be "Ready" ...
	I1030 18:42:26.073343  400041 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 18:42:26.073412  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods
	I1030 18:42:26.073421  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.073428  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.073435  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.079519  400041 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1030 18:42:26.085610  400041 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qrkkc" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:26.085695  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-qrkkc
	I1030 18:42:26.085704  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.085711  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.085715  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.088406  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:42:26.089109  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:42:26.089127  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.089137  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.089143  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.091504  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:42:26.092047  400041 pod_ready.go:93] pod "coredns-7c65d6cfc9-qrkkc" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:26.092069  400041 pod_ready.go:82] duration metric: took 6.435195ms for pod "coredns-7c65d6cfc9-qrkkc" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:26.092082  400041 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tnj67" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:26.092150  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-tnj67
	I1030 18:42:26.092160  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.092170  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.092179  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.095058  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:42:26.095704  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:42:26.095720  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.095730  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.095735  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.098085  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:42:26.098596  400041 pod_ready.go:93] pod "coredns-7c65d6cfc9-tnj67" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:26.098614  400041 pod_ready.go:82] duration metric: took 6.524633ms for pod "coredns-7c65d6cfc9-tnj67" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:26.098625  400041 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:26.098689  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174833
	I1030 18:42:26.098701  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.098708  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.098714  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.101151  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:42:26.101737  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:42:26.101752  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.101762  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.101769  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.103823  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:42:26.104381  400041 pod_ready.go:93] pod "etcd-ha-174833" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:26.104404  400041 pod_ready.go:82] duration metric: took 5.771643ms for pod "etcd-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:26.104417  400041 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:26.104487  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174833-m02
	I1030 18:42:26.104498  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.104507  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.104515  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.106840  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:42:26.107295  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:42:26.107308  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.107318  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.107325  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.109492  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:42:26.109917  400041 pod_ready.go:93] pod "etcd-ha-174833-m02" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:26.109932  400041 pod_ready.go:82] duration metric: took 5.508285ms for pod "etcd-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:26.109947  400041 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-174833-m03" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:26.268296  400041 request.go:632] Waited for 158.281409ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174833-m03
	I1030 18:42:26.268393  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174833-m03
	I1030 18:42:26.268404  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.268413  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.268419  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.272054  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:26.469115  400041 request.go:632] Waited for 196.339916ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:26.469175  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:26.469180  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.469190  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.469198  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.472781  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:26.473415  400041 pod_ready.go:93] pod "etcd-ha-174833-m03" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:26.473441  400041 pod_ready.go:82] duration metric: took 363.484662ms for pod "etcd-ha-174833-m03" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:26.473458  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:26.668901  400041 request.go:632] Waited for 195.3359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174833
	I1030 18:42:26.669000  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174833
	I1030 18:42:26.669014  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.669026  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.669034  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.672627  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:26.868738  400041 request.go:632] Waited for 195.360312ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:42:26.868832  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:42:26.868840  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.868851  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.868860  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.872228  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:26.872778  400041 pod_ready.go:93] pod "kube-apiserver-ha-174833" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:26.872812  400041 pod_ready.go:82] duration metric: took 399.338189ms for pod "kube-apiserver-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:26.872828  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:27.068798  400041 request.go:632] Waited for 195.855457ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174833-m02
	I1030 18:42:27.068879  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174833-m02
	I1030 18:42:27.068887  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:27.068898  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:27.068909  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:27.072321  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:27.269235  400041 request.go:632] Waited for 196.216042ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:42:27.269319  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:42:27.269330  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:27.269343  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:27.269353  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:27.272769  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:27.273439  400041 pod_ready.go:93] pod "kube-apiserver-ha-174833-m02" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:27.273459  400041 pod_ready.go:82] duration metric: took 400.623063ms for pod "kube-apiserver-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:27.273469  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-174833-m03" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:27.468256  400041 request.go:632] Waited for 194.693367ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174833-m03
	I1030 18:42:27.468325  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174833-m03
	I1030 18:42:27.468330  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:27.468338  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:27.468347  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:27.471734  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:27.669102  400041 request.go:632] Waited for 196.461533ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:27.669185  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:27.669197  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:27.669208  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:27.669216  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:27.672818  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:27.673832  400041 pod_ready.go:93] pod "kube-apiserver-ha-174833-m03" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:27.673854  400041 pod_ready.go:82] duration metric: took 400.378216ms for pod "kube-apiserver-ha-174833-m03" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:27.673876  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:27.868940  400041 request.go:632] Waited for 194.958773ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174833
	I1030 18:42:27.869030  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174833
	I1030 18:42:27.869042  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:27.869053  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:27.869060  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:27.872180  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:28.068264  400041 request.go:632] Waited for 195.290526ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:42:28.068332  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:42:28.068351  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:28.068362  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:28.068370  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:28.071658  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:28.072242  400041 pod_ready.go:93] pod "kube-controller-manager-ha-174833" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:28.072265  400041 pod_ready.go:82] duration metric: took 398.381976ms for pod "kube-controller-manager-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:28.072276  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:28.268211  400041 request.go:632] Waited for 195.804533ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174833-m02
	I1030 18:42:28.268292  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174833-m02
	I1030 18:42:28.268300  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:28.268311  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:28.268318  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:28.271496  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:28.468870  400041 request.go:632] Waited for 196.361357ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:42:28.468956  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:42:28.468962  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:28.468977  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:28.468987  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:28.472341  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:28.472906  400041 pod_ready.go:93] pod "kube-controller-manager-ha-174833-m02" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:28.472925  400041 pod_ready.go:82] duration metric: took 400.642779ms for pod "kube-controller-manager-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:28.472940  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-174833-m03" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:28.669072  400041 request.go:632] Waited for 196.028852ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174833-m03
	I1030 18:42:28.669156  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174833-m03
	I1030 18:42:28.669168  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:28.669179  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:28.669191  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:28.673097  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:28.868210  400041 request.go:632] Waited for 194.307626ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:28.868287  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:28.868295  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:28.868307  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:28.868338  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:28.871679  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:28.872327  400041 pod_ready.go:93] pod "kube-controller-manager-ha-174833-m03" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:28.872352  400041 pod_ready.go:82] duration metric: took 399.404321ms for pod "kube-controller-manager-ha-174833-m03" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:28.872369  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2qt2n" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:29.068267  400041 request.go:632] Waited for 195.816492ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2qt2n
	I1030 18:42:29.068356  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2qt2n
	I1030 18:42:29.068367  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:29.068376  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:29.068388  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:29.072060  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:29.269102  400041 request.go:632] Waited for 196.354313ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:42:29.269167  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:42:29.269172  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:29.269181  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:29.269186  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:29.273078  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:29.273532  400041 pod_ready.go:93] pod "kube-proxy-2qt2n" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:29.273551  400041 pod_ready.go:82] duration metric: took 401.170636ms for pod "kube-proxy-2qt2n" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:29.273567  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g7l7z" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:29.468616  400041 request.go:632] Waited for 194.925869ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g7l7z
	I1030 18:42:29.468703  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g7l7z
	I1030 18:42:29.468712  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:29.468722  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:29.468730  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:29.472234  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:29.669266  400041 request.go:632] Waited for 196.242195ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:29.669324  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:29.669331  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:29.669341  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:29.669348  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:29.673010  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:29.674076  400041 pod_ready.go:93] pod "kube-proxy-g7l7z" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:29.674097  400041 pod_ready.go:82] duration metric: took 400.523192ms for pod "kube-proxy-g7l7z" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:29.674108  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hg2st" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:29.869286  400041 request.go:632] Waited for 195.064443ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hg2st
	I1030 18:42:29.869374  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hg2st
	I1030 18:42:29.869384  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:29.869393  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:29.869397  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:29.872765  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:30.068849  400041 request.go:632] Waited for 195.380036ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:42:30.068912  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:42:30.068917  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:30.068926  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:30.068930  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:30.073076  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:42:30.073910  400041 pod_ready.go:93] pod "kube-proxy-hg2st" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:30.073931  400041 pod_ready.go:82] duration metric: took 399.816887ms for pod "kube-proxy-hg2st" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:30.073942  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:30.269092  400041 request.go:632] Waited for 195.075688ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174833
	I1030 18:42:30.269158  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174833
	I1030 18:42:30.269163  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:30.269171  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:30.269174  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:30.272728  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:30.468827  400041 request.go:632] Waited for 195.469933ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:42:30.468924  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:42:30.468935  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:30.468944  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:30.468948  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:30.472792  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:30.473256  400041 pod_ready.go:93] pod "kube-scheduler-ha-174833" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:30.473274  400041 pod_ready.go:82] duration metric: took 399.325616ms for pod "kube-scheduler-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:30.473285  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:30.668281  400041 request.go:632] Waited for 194.899722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174833-m02
	I1030 18:42:30.668360  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174833-m02
	I1030 18:42:30.668369  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:30.668378  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:30.668386  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:30.672074  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:30.869270  400041 request.go:632] Waited for 196.355231ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:42:30.869340  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:42:30.869345  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:30.869354  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:30.869361  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:30.873235  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:30.873666  400041 pod_ready.go:93] pod "kube-scheduler-ha-174833-m02" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:30.873686  400041 pod_ready.go:82] duration metric: took 400.39483ms for pod "kube-scheduler-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:30.873697  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-174833-m03" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:31.068802  400041 request.go:632] Waited for 195.002943ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174833-m03
	I1030 18:42:31.068869  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174833-m03
	I1030 18:42:31.068875  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:31.068884  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:31.068901  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:31.072579  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:31.268662  400041 request.go:632] Waited for 195.353177ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:31.268730  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:31.268736  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:31.268743  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:31.268749  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:31.272045  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:31.272702  400041 pod_ready.go:93] pod "kube-scheduler-ha-174833-m03" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:31.272721  400041 pod_ready.go:82] duration metric: took 399.01745ms for pod "kube-scheduler-ha-174833-m03" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:31.272733  400041 pod_ready.go:39] duration metric: took 5.199380679s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 18:42:31.272749  400041 api_server.go:52] waiting for apiserver process to appear ...
	I1030 18:42:31.272802  400041 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 18:42:31.290132  400041 api_server.go:72] duration metric: took 24.023548522s to wait for apiserver process to appear ...
	I1030 18:42:31.290159  400041 api_server.go:88] waiting for apiserver healthz status ...
	I1030 18:42:31.290180  400041 api_server.go:253] Checking apiserver healthz at https://192.168.39.141:8443/healthz ...
	I1030 18:42:31.295173  400041 api_server.go:279] https://192.168.39.141:8443/healthz returned 200:
	ok
	I1030 18:42:31.295236  400041 round_trippers.go:463] GET https://192.168.39.141:8443/version
	I1030 18:42:31.295244  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:31.295252  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:31.295257  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:31.296242  400041 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1030 18:42:31.296313  400041 api_server.go:141] control plane version: v1.31.2
	I1030 18:42:31.296329  400041 api_server.go:131] duration metric: took 6.164986ms to wait for apiserver health ...
	I1030 18:42:31.296336  400041 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 18:42:31.468748  400041 request.go:632] Waited for 172.312716ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods
	I1030 18:42:31.468810  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods
	I1030 18:42:31.468815  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:31.468822  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:31.468826  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:31.475257  400041 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1030 18:42:31.481661  400041 system_pods.go:59] 24 kube-system pods found
	I1030 18:42:31.481688  400041 system_pods.go:61] "coredns-7c65d6cfc9-qrkkc" [3470734c-61ab-4cd9-a026-f07d5ca6a290] Running
	I1030 18:42:31.481693  400041 system_pods.go:61] "coredns-7c65d6cfc9-tnj67" [f869042d-e37b-414d-ab11-422e933e2952] Running
	I1030 18:42:31.481699  400041 system_pods.go:61] "etcd-ha-174833" [af22e3f8-298d-4ab8-ac7d-cf6b915858ce] Running
	I1030 18:42:31.481705  400041 system_pods.go:61] "etcd-ha-174833-m02" [d5fdc5b9-47c3-4308-8b03-e01254a915cd] Running
	I1030 18:42:31.481710  400041 system_pods.go:61] "etcd-ha-174833-m03" [0978241d-3129-4232-a77d-1c363be4759d] Running
	I1030 18:42:31.481715  400041 system_pods.go:61] "kindnet-b76pd" [5267869f-1e5c-414a-9adc-8cdb3a645222] Running
	I1030 18:42:31.481720  400041 system_pods.go:61] "kindnet-pm48g" [7c2c95da-7659-43c4-aae2-3af6fbe9515c] Running
	I1030 18:42:31.481728  400041 system_pods.go:61] "kindnet-rlzbn" [74a207e7-cdd5-4c43-a668-9a6445d0bacf] Running
	I1030 18:42:31.481733  400041 system_pods.go:61] "kube-apiserver-ha-174833" [e1f4e473-d722-4cbc-a8ae-e80612823895] Running
	I1030 18:42:31.481740  400041 system_pods.go:61] "kube-apiserver-ha-174833-m02" [d2f5f227-a891-4678-b8e8-a51051bc80ac] Running
	I1030 18:42:31.481749  400041 system_pods.go:61] "kube-apiserver-ha-174833-m03" [a0d096d1-760f-48eb-b8b4-a8d2bfa98602] Running
	I1030 18:42:31.481754  400041 system_pods.go:61] "kube-controller-manager-ha-174833" [b95baef4-2d97-4714-903f-8a63c8f1f647] Running
	I1030 18:42:31.481762  400041 system_pods.go:61] "kube-controller-manager-ha-174833-m02" [5293b489-8a8c-4475-b549-d10e1a631aa6] Running
	I1030 18:42:31.481768  400041 system_pods.go:61] "kube-controller-manager-ha-174833-m03" [9427a2c0-a305-49d6-8bfd-3829dfc7c9e9] Running
	I1030 18:42:31.481776  400041 system_pods.go:61] "kube-proxy-2qt2n" [fcc90b13-926f-4a7e-aa12-3e274c0777f6] Running
	I1030 18:42:31.481781  400041 system_pods.go:61] "kube-proxy-g7l7z" [7779db09-05ba-46f2-9e0e-09f6bcd57537] Running
	I1030 18:42:31.481789  400041 system_pods.go:61] "kube-proxy-hg2st" [99ac908a-6d13-4471-a54b-bede7bde77b7] Running
	I1030 18:42:31.481794  400041 system_pods.go:61] "kube-scheduler-ha-174833" [d0d4afaa-39ee-4b4d-afdb-3a41effecd4c] Running
	I1030 18:42:31.481802  400041 system_pods.go:61] "kube-scheduler-ha-174833-m02" [f0cd9106-8c0d-4c16-a83f-3d5e84d7d813] Running
	I1030 18:42:31.481807  400041 system_pods.go:61] "kube-scheduler-ha-174833-m03" [9e19320f-40e5-4953-a8f9-21e8251bddbf] Running
	I1030 18:42:31.481814  400041 system_pods.go:61] "kube-vip-ha-174833" [c4903552-8a15-4dc2-ba98-c3d10b8f3288] Running
	I1030 18:42:31.481819  400041 system_pods.go:61] "kube-vip-ha-174833-m02" [9d43181d-55d3-482c-bd30-df3059f68edd] Running
	I1030 18:42:31.481826  400041 system_pods.go:61] "kube-vip-ha-174833-m03" [c016e01f-6a40-419e-acf5-a5e595c117cd] Running
	I1030 18:42:31.481832  400041 system_pods.go:61] "storage-provisioner" [8e6d1d5e-1944-4c77-8018-42bc526984c2] Running
	I1030 18:42:31.481843  400041 system_pods.go:74] duration metric: took 185.498428ms to wait for pod list to return data ...
	I1030 18:42:31.481856  400041 default_sa.go:34] waiting for default service account to be created ...
	I1030 18:42:31.668606  400041 request.go:632] Waited for 186.6491ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/default/serviceaccounts
	I1030 18:42:31.668666  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/default/serviceaccounts
	I1030 18:42:31.668671  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:31.668679  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:31.668682  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:31.672056  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:31.672194  400041 default_sa.go:45] found service account: "default"
	I1030 18:42:31.672209  400041 default_sa.go:55] duration metric: took 190.344386ms for default service account to be created ...
	I1030 18:42:31.672218  400041 system_pods.go:116] waiting for k8s-apps to be running ...
	I1030 18:42:31.868735  400041 request.go:632] Waited for 196.405115ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods
	I1030 18:42:31.868808  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods
	I1030 18:42:31.868814  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:31.868822  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:31.868830  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:31.874347  400041 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1030 18:42:31.881436  400041 system_pods.go:86] 24 kube-system pods found
	I1030 18:42:31.881470  400041 system_pods.go:89] "coredns-7c65d6cfc9-qrkkc" [3470734c-61ab-4cd9-a026-f07d5ca6a290] Running
	I1030 18:42:31.881477  400041 system_pods.go:89] "coredns-7c65d6cfc9-tnj67" [f869042d-e37b-414d-ab11-422e933e2952] Running
	I1030 18:42:31.881483  400041 system_pods.go:89] "etcd-ha-174833" [af22e3f8-298d-4ab8-ac7d-cf6b915858ce] Running
	I1030 18:42:31.881487  400041 system_pods.go:89] "etcd-ha-174833-m02" [d5fdc5b9-47c3-4308-8b03-e01254a915cd] Running
	I1030 18:42:31.881490  400041 system_pods.go:89] "etcd-ha-174833-m03" [0978241d-3129-4232-a77d-1c363be4759d] Running
	I1030 18:42:31.881496  400041 system_pods.go:89] "kindnet-b76pd" [5267869f-1e5c-414a-9adc-8cdb3a645222] Running
	I1030 18:42:31.881501  400041 system_pods.go:89] "kindnet-pm48g" [7c2c95da-7659-43c4-aae2-3af6fbe9515c] Running
	I1030 18:42:31.881507  400041 system_pods.go:89] "kindnet-rlzbn" [74a207e7-cdd5-4c43-a668-9a6445d0bacf] Running
	I1030 18:42:31.881516  400041 system_pods.go:89] "kube-apiserver-ha-174833" [e1f4e473-d722-4cbc-a8ae-e80612823895] Running
	I1030 18:42:31.881521  400041 system_pods.go:89] "kube-apiserver-ha-174833-m02" [d2f5f227-a891-4678-b8e8-a51051bc80ac] Running
	I1030 18:42:31.881529  400041 system_pods.go:89] "kube-apiserver-ha-174833-m03" [a0d096d1-760f-48eb-b8b4-a8d2bfa98602] Running
	I1030 18:42:31.881538  400041 system_pods.go:89] "kube-controller-manager-ha-174833" [b95baef4-2d97-4714-903f-8a63c8f1f647] Running
	I1030 18:42:31.881547  400041 system_pods.go:89] "kube-controller-manager-ha-174833-m02" [5293b489-8a8c-4475-b549-d10e1a631aa6] Running
	I1030 18:42:31.881551  400041 system_pods.go:89] "kube-controller-manager-ha-174833-m03" [9427a2c0-a305-49d6-8bfd-3829dfc7c9e9] Running
	I1030 18:42:31.881555  400041 system_pods.go:89] "kube-proxy-2qt2n" [fcc90b13-926f-4a7e-aa12-3e274c0777f6] Running
	I1030 18:42:31.881559  400041 system_pods.go:89] "kube-proxy-g7l7z" [7779db09-05ba-46f2-9e0e-09f6bcd57537] Running
	I1030 18:42:31.881563  400041 system_pods.go:89] "kube-proxy-hg2st" [99ac908a-6d13-4471-a54b-bede7bde77b7] Running
	I1030 18:42:31.881568  400041 system_pods.go:89] "kube-scheduler-ha-174833" [d0d4afaa-39ee-4b4d-afdb-3a41effecd4c] Running
	I1030 18:42:31.881574  400041 system_pods.go:89] "kube-scheduler-ha-174833-m02" [f0cd9106-8c0d-4c16-a83f-3d5e84d7d813] Running
	I1030 18:42:31.881580  400041 system_pods.go:89] "kube-scheduler-ha-174833-m03" [9e19320f-40e5-4953-a8f9-21e8251bddbf] Running
	I1030 18:42:31.881585  400041 system_pods.go:89] "kube-vip-ha-174833" [c4903552-8a15-4dc2-ba98-c3d10b8f3288] Running
	I1030 18:42:31.881589  400041 system_pods.go:89] "kube-vip-ha-174833-m02" [9d43181d-55d3-482c-bd30-df3059f68edd] Running
	I1030 18:42:31.881595  400041 system_pods.go:89] "kube-vip-ha-174833-m03" [c016e01f-6a40-419e-acf5-a5e595c117cd] Running
	I1030 18:42:31.881600  400041 system_pods.go:89] "storage-provisioner" [8e6d1d5e-1944-4c77-8018-42bc526984c2] Running
	I1030 18:42:31.881612  400041 system_pods.go:126] duration metric: took 209.387873ms to wait for k8s-apps to be running ...
	I1030 18:42:31.881626  400041 system_svc.go:44] waiting for kubelet service to be running ....
	I1030 18:42:31.881679  400041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 18:42:31.897108  400041 system_svc.go:56] duration metric: took 15.46981ms WaitForService to wait for kubelet
	I1030 18:42:31.897150  400041 kubeadm.go:582] duration metric: took 24.630565695s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 18:42:31.897179  400041 node_conditions.go:102] verifying NodePressure condition ...
	I1030 18:42:32.068632  400041 request.go:632] Waited for 171.354733ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes
	I1030 18:42:32.068703  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes
	I1030 18:42:32.068708  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:32.068716  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:32.068721  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:32.073422  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:42:32.074348  400041 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 18:42:32.074387  400041 node_conditions.go:123] node cpu capacity is 2
	I1030 18:42:32.074400  400041 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 18:42:32.074404  400041 node_conditions.go:123] node cpu capacity is 2
	I1030 18:42:32.074408  400041 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 18:42:32.074412  400041 node_conditions.go:123] node cpu capacity is 2
	I1030 18:42:32.074421  400041 node_conditions.go:105] duration metric: took 177.235852ms to run NodePressure ...
	I1030 18:42:32.074439  400041 start.go:241] waiting for startup goroutines ...
	I1030 18:42:32.074466  400041 start.go:255] writing updated cluster config ...
	I1030 18:42:32.074805  400041 ssh_runner.go:195] Run: rm -f paused
	I1030 18:42:32.127386  400041 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1030 18:42:32.129289  400041 out.go:177] * Done! kubectl is now configured to use "ha-174833" cluster and "default" namespace by default
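The wait loop recorded above (node_ready.go, pod_ready.go, api_server.go) polls the node object roughly every 500ms until its Ready condition turns True, then checks the system-critical pods and probes the apiserver's /healthz endpoint before declaring the cluster usable. The following is a minimal, hypothetical sketch of an equivalent check written against client-go; it is not minikube's own code, and the kubeconfig path, timeout, and poll interval are illustrative assumptions (the node name ha-174833-m03 is taken from the log).

// readiness_sketch.go - hypothetical sketch, not minikube source: poll a node
// until Ready, then probe the apiserver /healthz, mirroring the log above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig minikube writes; the default ~/.kube/config path is an assumption.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The 6-minute budget mirrors the "waiting up to 6m0s" messages in the log.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	// Poll the node object until its Ready condition is True, as node_ready.go does.
	for {
		node, err := client.CoreV1().Nodes().Get(ctx, "ha-174833-m03", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		ready := false
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if ready {
			break
		}
		time.Sleep(500 * time.Millisecond)
	}

	// Probe the apiserver health endpoint, as api_server.go does with /healthz.
	body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").Do(ctx).Raw()
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)
}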
	
	
	==> CRI-O <==
	Oct 30 18:46:28 ha-174833 crio[664]: time="2024-10-30 18:46:28.783772544Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313988783692030,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2896de6b-1c6b-4c63-93d5-252d0a0791ca name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 18:46:28 ha-174833 crio[664]: time="2024-10-30 18:46:28.784558267Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=02d5f0d7-cc23-4c05-81a7-2ab802d55c3b name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:46:28 ha-174833 crio[664]: time="2024-10-30 18:46:28.784605658Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=02d5f0d7-cc23-4c05-81a7-2ab802d55c3b name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:46:28 ha-174833 crio[664]: time="2024-10-30 18:46:28.784807451Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b50f8293a0eac5d428cb2cdd59140c816100bc3a83890343a8ecc1a0e0699009,PodSandboxId:4b32508187feda9fb89e9db472edcc9269380d89c7fabdce7ea42c6bd040cd72,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730313615217360825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tnj67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f869042d-e37b-414d-ab11-422e933e2952,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6694cd6bc9e3c41a78c9a5de80c4b95e2545fc061eac062221b2fe3ef4aadb3,PodSandboxId:e4daca50f6e1cba7757e2725bf95ff4bf83227ca7fa99966ead6cd77711dcace,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730313615138289233,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 8e6d1d5e-1944-4c77-8018-42bc526984c2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80919506252b4e3df3db12ac62352ffcfc65a784996bbef7adcefc782480a86f,PodSandboxId:80f0d2bac7bdbf4e7081698c44c0100f1844741b0105b2c4afc4373e50786bd2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730313615087765259,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qrkkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3470734c-6
1ab-4cd9-a026-f07d5ca6a290,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46301d1401a148dee81e2e167582dcebc8e9ce533e3150308e7eb51799e4f1ef,PodSandboxId:4a4a82673e78f30d1d9716c24cfa484d7ad4d27f53c2f74c27f2b30910b16f1f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:C
ONTAINER_RUNNING,CreatedAt:1730313603325015951,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pm48g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c2c95da-7659-43c4-aae2-3af6fbe9515c,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:634060e657ba25894ea35d4e8b8429b57de139396a52f53af88876e683de5740,PodSandboxId:5d414abeb9a8ee9776eede410c7e6863d30e3e79eead9ef5eaf48e3f823ff1eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:173031359
7433571078,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2qt2n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc90b13-926f-4a7e-aa12-3e274c0777f6,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8b9126272c4605fcadd37917b2874bb833efc9c16584c83ac4d1307f59c62a,PodSandboxId:635aa65f78ff8602bbbd26a8ca9fa00e6475baebf133de26e2e88e93e579621b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:17303135896
66156759,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aea5be4e46ad37a31e0d88b2c9c0158c,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f0fb508f1f86d4bc039adab40689367f34f5141a90043b092e43663b1f959a6,PodSandboxId:2a80897d4d6989c628408425633aea48eaf1f123b026cb7a29325fbfb4eb2bba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730313585751314744,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca112c63e30eb433bb700ecb5b5894a5,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:381be95e92ca6d096c838a11caffeef70b5bdf29633660086db3a591af3efa3c,PodSandboxId:aa574b692710d467b2e8775397cfa8db07704e00fb33c11512bf39c82e680973,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730313585689189484,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 229cc2e30ddb1f711193989fd13f4f67,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db863ebdc17e0d81937b5738a5fe433285fd91846fa54c761cf0cd4cd4987b73,PodSandboxId:bc13396acc704797f09483572e0822fe63133214ccd6fcb11477cb808dbc282f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730313585724076197,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72bc339147b21477e295945da9cb0b0e,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:661ed7108dbf576219f0d0850f1ff6e5884d78718a480882365b8f283ab57acb,PodSandboxId:a4e686c5a4e0593c382829bac8d2451984a6e41381d31fb1620855986434050a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730313585675039957,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66c6d80d0e594b993bec02307d522c1a,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=02d5f0d7-cc23-4c05-81a7-2ab802d55c3b name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:46:28 ha-174833 crio[664]: time="2024-10-30 18:46:28.824777598Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=79dbbb91-8cbd-420a-9f02-405567496d1b name=/runtime.v1.RuntimeService/Version
	Oct 30 18:46:28 ha-174833 crio[664]: time="2024-10-30 18:46:28.824845984Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=79dbbb91-8cbd-420a-9f02-405567496d1b name=/runtime.v1.RuntimeService/Version
	Oct 30 18:46:28 ha-174833 crio[664]: time="2024-10-30 18:46:28.826761325Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3753f379-03f5-4f77-bae7-bc35d94d00fa name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 18:46:28 ha-174833 crio[664]: time="2024-10-30 18:46:28.827168400Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313988827148144,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3753f379-03f5-4f77-bae7-bc35d94d00fa name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 18:46:28 ha-174833 crio[664]: time="2024-10-30 18:46:28.827884581Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9d7e3c7d-5526-43ae-a92b-1b1bde715619 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:46:28 ha-174833 crio[664]: time="2024-10-30 18:46:28.827934207Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9d7e3c7d-5526-43ae-a92b-1b1bde715619 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:46:28 ha-174833 crio[664]: time="2024-10-30 18:46:28.828143678Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b50f8293a0eac5d428cb2cdd59140c816100bc3a83890343a8ecc1a0e0699009,PodSandboxId:4b32508187feda9fb89e9db472edcc9269380d89c7fabdce7ea42c6bd040cd72,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730313615217360825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tnj67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f869042d-e37b-414d-ab11-422e933e2952,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6694cd6bc9e3c41a78c9a5de80c4b95e2545fc061eac062221b2fe3ef4aadb3,PodSandboxId:e4daca50f6e1cba7757e2725bf95ff4bf83227ca7fa99966ead6cd77711dcace,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730313615138289233,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 8e6d1d5e-1944-4c77-8018-42bc526984c2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80919506252b4e3df3db12ac62352ffcfc65a784996bbef7adcefc782480a86f,PodSandboxId:80f0d2bac7bdbf4e7081698c44c0100f1844741b0105b2c4afc4373e50786bd2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730313615087765259,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qrkkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3470734c-6
1ab-4cd9-a026-f07d5ca6a290,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46301d1401a148dee81e2e167582dcebc8e9ce533e3150308e7eb51799e4f1ef,PodSandboxId:4a4a82673e78f30d1d9716c24cfa484d7ad4d27f53c2f74c27f2b30910b16f1f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:C
ONTAINER_RUNNING,CreatedAt:1730313603325015951,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pm48g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c2c95da-7659-43c4-aae2-3af6fbe9515c,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:634060e657ba25894ea35d4e8b8429b57de139396a52f53af88876e683de5740,PodSandboxId:5d414abeb9a8ee9776eede410c7e6863d30e3e79eead9ef5eaf48e3f823ff1eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:173031359
7433571078,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2qt2n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc90b13-926f-4a7e-aa12-3e274c0777f6,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8b9126272c4605fcadd37917b2874bb833efc9c16584c83ac4d1307f59c62a,PodSandboxId:635aa65f78ff8602bbbd26a8ca9fa00e6475baebf133de26e2e88e93e579621b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:17303135896
66156759,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aea5be4e46ad37a31e0d88b2c9c0158c,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f0fb508f1f86d4bc039adab40689367f34f5141a90043b092e43663b1f959a6,PodSandboxId:2a80897d4d6989c628408425633aea48eaf1f123b026cb7a29325fbfb4eb2bba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730313585751314744,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca112c63e30eb433bb700ecb5b5894a5,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:381be95e92ca6d096c838a11caffeef70b5bdf29633660086db3a591af3efa3c,PodSandboxId:aa574b692710d467b2e8775397cfa8db07704e00fb33c11512bf39c82e680973,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730313585689189484,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 229cc2e30ddb1f711193989fd13f4f67,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db863ebdc17e0d81937b5738a5fe433285fd91846fa54c761cf0cd4cd4987b73,PodSandboxId:bc13396acc704797f09483572e0822fe63133214ccd6fcb11477cb808dbc282f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730313585724076197,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72bc339147b21477e295945da9cb0b0e,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:661ed7108dbf576219f0d0850f1ff6e5884d78718a480882365b8f283ab57acb,PodSandboxId:a4e686c5a4e0593c382829bac8d2451984a6e41381d31fb1620855986434050a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730313585675039957,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66c6d80d0e594b993bec02307d522c1a,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9d7e3c7d-5526-43ae-a92b-1b1bde715619 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:46:28 ha-174833 crio[664]: time="2024-10-30 18:46:28.870045999Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5047adb9-cf72-40fc-9575-54ba1aff84ad name=/runtime.v1.RuntimeService/Version
	Oct 30 18:46:28 ha-174833 crio[664]: time="2024-10-30 18:46:28.870119751Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5047adb9-cf72-40fc-9575-54ba1aff84ad name=/runtime.v1.RuntimeService/Version
	Oct 30 18:46:28 ha-174833 crio[664]: time="2024-10-30 18:46:28.871420262Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2ba55c9e-2c97-42ed-ad7b-d495cc6967ce name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 18:46:28 ha-174833 crio[664]: time="2024-10-30 18:46:28.872464692Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313988872386592,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2ba55c9e-2c97-42ed-ad7b-d495cc6967ce name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 18:46:28 ha-174833 crio[664]: time="2024-10-30 18:46:28.873698955Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cdfff595-a861-42f6-a11b-ce47592e185c name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:46:28 ha-174833 crio[664]: time="2024-10-30 18:46:28.873777052Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cdfff595-a861-42f6-a11b-ce47592e185c name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:46:28 ha-174833 crio[664]: time="2024-10-30 18:46:28.874493198Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b50f8293a0eac5d428cb2cdd59140c816100bc3a83890343a8ecc1a0e0699009,PodSandboxId:4b32508187feda9fb89e9db472edcc9269380d89c7fabdce7ea42c6bd040cd72,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730313615217360825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tnj67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f869042d-e37b-414d-ab11-422e933e2952,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6694cd6bc9e3c41a78c9a5de80c4b95e2545fc061eac062221b2fe3ef4aadb3,PodSandboxId:e4daca50f6e1cba7757e2725bf95ff4bf83227ca7fa99966ead6cd77711dcace,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730313615138289233,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 8e6d1d5e-1944-4c77-8018-42bc526984c2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80919506252b4e3df3db12ac62352ffcfc65a784996bbef7adcefc782480a86f,PodSandboxId:80f0d2bac7bdbf4e7081698c44c0100f1844741b0105b2c4afc4373e50786bd2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730313615087765259,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qrkkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3470734c-6
1ab-4cd9-a026-f07d5ca6a290,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46301d1401a148dee81e2e167582dcebc8e9ce533e3150308e7eb51799e4f1ef,PodSandboxId:4a4a82673e78f30d1d9716c24cfa484d7ad4d27f53c2f74c27f2b30910b16f1f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:C
ONTAINER_RUNNING,CreatedAt:1730313603325015951,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pm48g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c2c95da-7659-43c4-aae2-3af6fbe9515c,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:634060e657ba25894ea35d4e8b8429b57de139396a52f53af88876e683de5740,PodSandboxId:5d414abeb9a8ee9776eede410c7e6863d30e3e79eead9ef5eaf48e3f823ff1eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:173031359
7433571078,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2qt2n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc90b13-926f-4a7e-aa12-3e274c0777f6,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8b9126272c4605fcadd37917b2874bb833efc9c16584c83ac4d1307f59c62a,PodSandboxId:635aa65f78ff8602bbbd26a8ca9fa00e6475baebf133de26e2e88e93e579621b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:17303135896
66156759,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aea5be4e46ad37a31e0d88b2c9c0158c,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f0fb508f1f86d4bc039adab40689367f34f5141a90043b092e43663b1f959a6,PodSandboxId:2a80897d4d6989c628408425633aea48eaf1f123b026cb7a29325fbfb4eb2bba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730313585751314744,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca112c63e30eb433bb700ecb5b5894a5,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:381be95e92ca6d096c838a11caffeef70b5bdf29633660086db3a591af3efa3c,PodSandboxId:aa574b692710d467b2e8775397cfa8db07704e00fb33c11512bf39c82e680973,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730313585689189484,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 229cc2e30ddb1f711193989fd13f4f67,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db863ebdc17e0d81937b5738a5fe433285fd91846fa54c761cf0cd4cd4987b73,PodSandboxId:bc13396acc704797f09483572e0822fe63133214ccd6fcb11477cb808dbc282f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730313585724076197,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72bc339147b21477e295945da9cb0b0e,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:661ed7108dbf576219f0d0850f1ff6e5884d78718a480882365b8f283ab57acb,PodSandboxId:a4e686c5a4e0593c382829bac8d2451984a6e41381d31fb1620855986434050a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730313585675039957,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66c6d80d0e594b993bec02307d522c1a,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cdfff595-a861-42f6-a11b-ce47592e185c name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:46:28 ha-174833 crio[664]: time="2024-10-30 18:46:28.919535195Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=73d79e8d-e434-493e-b9b8-4820bda0500b name=/runtime.v1.RuntimeService/Version
	Oct 30 18:46:28 ha-174833 crio[664]: time="2024-10-30 18:46:28.919610041Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=73d79e8d-e434-493e-b9b8-4820bda0500b name=/runtime.v1.RuntimeService/Version
	Oct 30 18:46:28 ha-174833 crio[664]: time="2024-10-30 18:46:28.920504098Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d851a7eb-dfa4-4489-9339-65c05b962108 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 18:46:28 ha-174833 crio[664]: time="2024-10-30 18:46:28.920898839Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313988920879334,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d851a7eb-dfa4-4489-9339-65c05b962108 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 18:46:28 ha-174833 crio[664]: time="2024-10-30 18:46:28.921417908Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=39cb378e-6a43-47f3-bf8e-8c124564ea1e name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:46:28 ha-174833 crio[664]: time="2024-10-30 18:46:28.921469326Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=39cb378e-6a43-47f3-bf8e-8c124564ea1e name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:46:28 ha-174833 crio[664]: time="2024-10-30 18:46:28.921666979Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b50f8293a0eac5d428cb2cdd59140c816100bc3a83890343a8ecc1a0e0699009,PodSandboxId:4b32508187feda9fb89e9db472edcc9269380d89c7fabdce7ea42c6bd040cd72,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730313615217360825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tnj67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f869042d-e37b-414d-ab11-422e933e2952,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6694cd6bc9e3c41a78c9a5de80c4b95e2545fc061eac062221b2fe3ef4aadb3,PodSandboxId:e4daca50f6e1cba7757e2725bf95ff4bf83227ca7fa99966ead6cd77711dcace,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730313615138289233,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 8e6d1d5e-1944-4c77-8018-42bc526984c2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80919506252b4e3df3db12ac62352ffcfc65a784996bbef7adcefc782480a86f,PodSandboxId:80f0d2bac7bdbf4e7081698c44c0100f1844741b0105b2c4afc4373e50786bd2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730313615087765259,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qrkkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3470734c-6
1ab-4cd9-a026-f07d5ca6a290,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46301d1401a148dee81e2e167582dcebc8e9ce533e3150308e7eb51799e4f1ef,PodSandboxId:4a4a82673e78f30d1d9716c24cfa484d7ad4d27f53c2f74c27f2b30910b16f1f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:C
ONTAINER_RUNNING,CreatedAt:1730313603325015951,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pm48g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c2c95da-7659-43c4-aae2-3af6fbe9515c,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:634060e657ba25894ea35d4e8b8429b57de139396a52f53af88876e683de5740,PodSandboxId:5d414abeb9a8ee9776eede410c7e6863d30e3e79eead9ef5eaf48e3f823ff1eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:173031359
7433571078,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2qt2n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc90b13-926f-4a7e-aa12-3e274c0777f6,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8b9126272c4605fcadd37917b2874bb833efc9c16584c83ac4d1307f59c62a,PodSandboxId:635aa65f78ff8602bbbd26a8ca9fa00e6475baebf133de26e2e88e93e579621b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:17303135896
66156759,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aea5be4e46ad37a31e0d88b2c9c0158c,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f0fb508f1f86d4bc039adab40689367f34f5141a90043b092e43663b1f959a6,PodSandboxId:2a80897d4d6989c628408425633aea48eaf1f123b026cb7a29325fbfb4eb2bba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730313585751314744,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca112c63e30eb433bb700ecb5b5894a5,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:381be95e92ca6d096c838a11caffeef70b5bdf29633660086db3a591af3efa3c,PodSandboxId:aa574b692710d467b2e8775397cfa8db07704e00fb33c11512bf39c82e680973,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730313585689189484,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 229cc2e30ddb1f711193989fd13f4f67,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db863ebdc17e0d81937b5738a5fe433285fd91846fa54c761cf0cd4cd4987b73,PodSandboxId:bc13396acc704797f09483572e0822fe63133214ccd6fcb11477cb808dbc282f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730313585724076197,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72bc339147b21477e295945da9cb0b0e,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:661ed7108dbf576219f0d0850f1ff6e5884d78718a480882365b8f283ab57acb,PodSandboxId:a4e686c5a4e0593c382829bac8d2451984a6e41381d31fb1620855986434050a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730313585675039957,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66c6d80d0e594b993bec02307d522c1a,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=39cb378e-6a43-47f3-bf8e-8c124564ea1e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b50f8293a0eac       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                     6 minutes ago       Running             coredns                   0                   4b32508187fed       coredns-7c65d6cfc9-tnj67
	b6694cd6bc9e3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                     6 minutes ago       Running             storage-provisioner       0                   e4daca50f6e1c       storage-provisioner
	80919506252b4       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                     6 minutes ago       Running             coredns                   0                   80f0d2bac7bdb       coredns-7c65d6cfc9-qrkkc
	46301d1401a14       docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16   6 minutes ago       Running             kindnet-cni               0                   4a4a82673e78f       kindnet-pm48g
	634060e657ba2       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                     6 minutes ago       Running             kube-proxy                0                   5d414abeb9a8e       kube-proxy-2qt2n
	da8b9126272c4       ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215    6 minutes ago       Running             kube-vip                  0                   635aa65f78ff8       kube-vip-ha-174833
	6f0fb508f1f86       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                     6 minutes ago       Running             kube-scheduler            0                   2a80897d4d698       kube-scheduler-ha-174833
	db863ebdc17e0       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                     6 minutes ago       Running             kube-controller-manager   0                   bc13396acc704       kube-controller-manager-ha-174833
	381be95e92ca6       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                     6 minutes ago       Running             etcd                      0                   aa574b692710d       etcd-ha-174833
	661ed7108dbf5       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                     6 minutes ago       Running             kube-apiserver            0                   a4e686c5a4e05       kube-apiserver-ha-174833
	
	
	==> coredns [80919506252b4e3df3db12ac62352ffcfc65a784996bbef7adcefc782480a86f] <==
	[INFO] 10.244.2.2:49872 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000260615s
	[INFO] 10.244.2.2:45985 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000215389s
	[INFO] 10.244.1.3:58699 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184263s
	[INFO] 10.244.1.3:36745 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000223993s
	[INFO] 10.244.1.3:52696 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000197445s
	[INFO] 10.244.1.3:51136 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.008496656s
	[INFO] 10.244.1.3:37326 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000170193s
	[INFO] 10.244.2.2:41356 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001504514s
	[INFO] 10.244.2.2:58448 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000121598s
	[INFO] 10.244.2.2:57683 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115706s
	[INFO] 10.244.1.2:44356 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001773314s
	[INFO] 10.244.1.2:53338 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000092182s
	[INFO] 10.244.1.2:36505 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000123936s
	[INFO] 10.244.1.2:50770 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000129391s
	[INFO] 10.244.1.3:45376 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119608s
	[INFO] 10.244.1.3:38056 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104793s
	[INFO] 10.244.2.2:56050 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.001014289s
	[INFO] 10.244.2.2:46354 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000094957s
	[INFO] 10.244.1.2:43247 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140652s
	[INFO] 10.244.1.3:59260 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000286102s
	[INFO] 10.244.1.3:42613 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000177355s
	[INFO] 10.244.2.2:38778 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139553s
	[INFO] 10.244.2.2:55445 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000162449s
	[INFO] 10.244.1.2:49123 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000103971s
	[INFO] 10.244.1.2:36025 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000103655s
	
	
	==> coredns [b50f8293a0eac5d428cb2cdd59140c816100bc3a83890343a8ecc1a0e0699009] <==
	[INFO] 10.244.1.3:35936 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.006730126s
	[INFO] 10.244.1.3:52049 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000164529s
	[INFO] 10.244.1.3:41429 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000145894s
	[INFO] 10.244.2.2:38865 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015631s
	[INFO] 10.244.2.2:35468 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001359248s
	[INFO] 10.244.2.2:39539 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154504s
	[INFO] 10.244.2.2:40996 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00012336s
	[INFO] 10.244.2.2:36394 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103847s
	[INFO] 10.244.1.2:36748 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157155s
	[INFO] 10.244.1.2:57168 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000183772s
	[INFO] 10.244.1.2:44765 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001208743s
	[INFO] 10.244.1.2:51648 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000094986s
	[INFO] 10.244.1.3:35468 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117052s
	[INFO] 10.244.1.3:41666 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093918s
	[INFO] 10.244.2.2:40566 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000179128s
	[INFO] 10.244.2.2:35306 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086624s
	[INFO] 10.244.1.2:54037 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000136664s
	[INFO] 10.244.1.2:39370 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000109182s
	[INFO] 10.244.1.2:41814 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123818s
	[INFO] 10.244.1.3:44728 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000170139s
	[INFO] 10.244.1.3:56805 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000142203s
	[INFO] 10.244.2.2:36863 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000187523s
	[INFO] 10.244.2.2:41661 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000120093s
	[INFO] 10.244.1.2:52634 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137066s
	[INFO] 10.244.1.2:35418 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000120994s
	
	
	==> describe nodes <==
	Name:               ha-174833
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-174833
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0
	                    minikube.k8s.io/name=ha-174833
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_30T18_39_53_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Oct 2024 18:39:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-174833
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Oct 2024 18:46:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Oct 2024 18:45:28 +0000   Wed, 30 Oct 2024 18:39:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Oct 2024 18:45:28 +0000   Wed, 30 Oct 2024 18:39:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Oct 2024 18:45:28 +0000   Wed, 30 Oct 2024 18:39:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Oct 2024 18:45:28 +0000   Wed, 30 Oct 2024 18:40:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.141
	  Hostname:    ha-174833
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7ccc5c9f42c54438b6652723644bbeef
	  System UUID:                7ccc5c9f-42c5-4438-b665-2723644bbeef
	  Boot ID:                    83dbe7e6-9d54-44c7-aa42-e17dc8d9a1a0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-qrkkc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m33s
	  kube-system                 coredns-7c65d6cfc9-tnj67             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m33s
	  kube-system                 etcd-ha-174833                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m37s
	  kube-system                 kindnet-pm48g                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m33s
	  kube-system                 kube-apiserver-ha-174833             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 kube-controller-manager-ha-174833    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 kube-proxy-2qt2n                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 kube-scheduler-ha-174833             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 kube-vip-ha-174833                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m38s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m31s                  kube-proxy       
	  Normal  NodeHasSufficientPID     6m44s (x7 over 6m44s)  kubelet          Node ha-174833 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m44s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m44s (x8 over 6m44s)  kubelet          Node ha-174833 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m44s (x8 over 6m44s)  kubelet          Node ha-174833 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m38s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m37s                  kubelet          Node ha-174833 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m37s                  kubelet          Node ha-174833 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m37s                  kubelet          Node ha-174833 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m34s                  node-controller  Node ha-174833 event: Registered Node ha-174833 in Controller
	  Normal  NodeReady                6m15s                  kubelet          Node ha-174833 status is now: NodeReady
	  Normal  RegisteredNode           5m37s                  node-controller  Node ha-174833 event: Registered Node ha-174833 in Controller
	  Normal  RegisteredNode           4m17s                  node-controller  Node ha-174833 event: Registered Node ha-174833 in Controller
	
	
	Name:               ha-174833-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-174833-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0
	                    minikube.k8s.io/name=ha-174833
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_30T18_40_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Oct 2024 18:40:44 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-174833-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Oct 2024 18:43:48 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 30 Oct 2024 18:42:45 +0000   Wed, 30 Oct 2024 18:44:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 30 Oct 2024 18:42:45 +0000   Wed, 30 Oct 2024 18:44:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 30 Oct 2024 18:42:45 +0000   Wed, 30 Oct 2024 18:44:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 30 Oct 2024 18:42:45 +0000   Wed, 30 Oct 2024 18:44:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.67
	  Hostname:    ha-174833-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 44df5dbbd2d444bb8a426278602ee677
	  System UUID:                44df5dbb-d2d4-44bb-8a42-6278602ee677
	  Boot ID:                    360af464-681d-4348-b7f8-dd08e7d88924
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-mm586                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  default                     busybox-7dff88458-v6kn9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 etcd-ha-174833-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m43s
	  kube-system                 kindnet-rlzbn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m45s
	  kube-system                 kube-apiserver-ha-174833-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m43s
	  kube-system                 kube-controller-manager-ha-174833-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m42s
	  kube-system                 kube-proxy-hg2st                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m45s
	  kube-system                 kube-scheduler-ha-174833-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m37s
	  kube-system                 kube-vip-ha-174833-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m41s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m45s (x8 over 5m45s)  kubelet          Node ha-174833-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m45s (x8 over 5m45s)  kubelet          Node ha-174833-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m45s (x7 over 5m45s)  kubelet          Node ha-174833-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m44s                  node-controller  Node ha-174833-m02 event: Registered Node ha-174833-m02 in Controller
	  Normal  RegisteredNode           5m37s                  node-controller  Node ha-174833-m02 event: Registered Node ha-174833-m02 in Controller
	  Normal  RegisteredNode           4m17s                  node-controller  Node ha-174833-m02 event: Registered Node ha-174833-m02 in Controller
	  Normal  NodeNotReady             119s                   node-controller  Node ha-174833-m02 status is now: NodeNotReady
	
	
	Name:               ha-174833-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-174833-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0
	                    minikube.k8s.io/name=ha-174833
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_30T18_42_06_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Oct 2024 18:42:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-174833-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Oct 2024 18:46:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Oct 2024 18:43:04 +0000   Wed, 30 Oct 2024 18:42:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Oct 2024 18:43:04 +0000   Wed, 30 Oct 2024 18:42:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Oct 2024 18:43:04 +0000   Wed, 30 Oct 2024 18:42:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Oct 2024 18:43:04 +0000   Wed, 30 Oct 2024 18:42:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.238
	  Hostname:    ha-174833-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8a25aeed7bbc4bd4a357771ce914b28b
	  System UUID:                8a25aeed-7bbc-4bd4-a357-771ce914b28b
	  Boot ID:                    3552b03e-4535-4240-8adc-99b111c48f7c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rzbbm                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 etcd-ha-174833-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m25s
	  kube-system                 kindnet-b76pd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m26s
	  kube-system                 kube-apiserver-ha-174833-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 kube-controller-manager-ha-174833-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 kube-proxy-g7l7z                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 kube-scheduler-ha-174833-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 kube-vip-ha-174833-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m21s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m26s (x8 over 4m26s)  kubelet          Node ha-174833-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m26s (x8 over 4m26s)  kubelet          Node ha-174833-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m26s (x7 over 4m26s)  kubelet          Node ha-174833-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m24s                  node-controller  Node ha-174833-m03 event: Registered Node ha-174833-m03 in Controller
	  Normal  RegisteredNode           4m22s                  node-controller  Node ha-174833-m03 event: Registered Node ha-174833-m03 in Controller
	  Normal  RegisteredNode           4m17s                  node-controller  Node ha-174833-m03 event: Registered Node ha-174833-m03 in Controller
	
	
	Name:               ha-174833-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-174833-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0
	                    minikube.k8s.io/name=ha-174833
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_30T18_43_15_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Oct 2024 18:43:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-174833-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Oct 2024 18:46:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Oct 2024 18:43:45 +0000   Wed, 30 Oct 2024 18:43:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Oct 2024 18:43:45 +0000   Wed, 30 Oct 2024 18:43:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Oct 2024 18:43:45 +0000   Wed, 30 Oct 2024 18:43:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Oct 2024 18:43:45 +0000   Wed, 30 Oct 2024 18:43:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.123
	  Hostname:    ha-174833-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 65b27c1ce02d45b78ed3fcddd1aae236
	  System UUID:                65b27c1c-e02d-45b7-8ed3-fcddd1aae236
	  Boot ID:                    25699951-947c-4e74-aa23-b7f7f9d75023
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2dhq5       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m15s
	  kube-system                 kube-proxy-nzl42    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m10s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     3m15s                  cidrAllocator    Node ha-174833-m04 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  3m15s (x2 over 3m15s)  kubelet          Node ha-174833-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m15s (x2 over 3m15s)  kubelet          Node ha-174833-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m15s (x2 over 3m15s)  kubelet          Node ha-174833-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m14s                  node-controller  Node ha-174833-m04 event: Registered Node ha-174833-m04 in Controller
	  Normal  RegisteredNode           3m12s                  node-controller  Node ha-174833-m04 event: Registered Node ha-174833-m04 in Controller
	  Normal  RegisteredNode           3m12s                  node-controller  Node ha-174833-m04 event: Registered Node ha-174833-m04 in Controller
	  Normal  NodeReady                2m54s                  kubelet          Node ha-174833-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct30 18:39] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050141] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040202] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.858254] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.508080] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.580074] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.619811] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.059036] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050086] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.189200] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.106863] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.256172] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +3.944359] systemd-fstab-generator[750]: Ignoring "noauto" option for root device
	[  +4.089078] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.056939] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.232740] kauditd_printk_skb: 74 callbacks suppressed
	[  +1.917340] systemd-fstab-generator[1295]: Ignoring "noauto" option for root device
	[  +5.757118] kauditd_printk_skb: 23 callbacks suppressed
	[Oct30 18:40] kauditd_printk_skb: 32 callbacks suppressed
	[ +47.325044] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [381be95e92ca6d096c838a11caffeef70b5bdf29633660086db3a591af3efa3c] <==
	{"level":"warn","ts":"2024-10-30T18:46:29.153417Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:29.172846Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:29.178468Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:29.187309Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:29.191689Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:29.198945Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:29.205359Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:29.211569Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:29.214827Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:29.217967Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:29.228649Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:29.234280Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:29.239596Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:29.243364Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:29.246329Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:29.251082Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:29.256362Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:29.261784Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:29.262806Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:29.265685Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:29.268273Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:29.271350Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:29.271964Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:29.277111Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:29.282702Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:46:29 up 7 min,  0 users,  load average: 0.20, 0.34, 0.20
	Linux ha-174833 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [46301d1401a148dee81e2e167582dcebc8e9ce533e3150308e7eb51799e4f1ef] <==
	I1030 18:45:54.322683       1 main.go:324] Node ha-174833-m04 has CIDR [10.244.3.0/24] 
	I1030 18:46:04.313396       1 main.go:297] Handling node with IPs: map[192.168.39.141:{}]
	I1030 18:46:04.313498       1 main.go:301] handling current node
	I1030 18:46:04.313526       1 main.go:297] Handling node with IPs: map[192.168.39.67:{}]
	I1030 18:46:04.313545       1 main.go:324] Node ha-174833-m02 has CIDR [10.244.1.0/24] 
	I1030 18:46:04.313781       1 main.go:297] Handling node with IPs: map[192.168.39.238:{}]
	I1030 18:46:04.313810       1 main.go:324] Node ha-174833-m03 has CIDR [10.244.2.0/24] 
	I1030 18:46:04.313989       1 main.go:297] Handling node with IPs: map[192.168.39.123:{}]
	I1030 18:46:04.314019       1 main.go:324] Node ha-174833-m04 has CIDR [10.244.3.0/24] 
	I1030 18:46:14.313413       1 main.go:297] Handling node with IPs: map[192.168.39.141:{}]
	I1030 18:46:14.313476       1 main.go:301] handling current node
	I1030 18:46:14.313504       1 main.go:297] Handling node with IPs: map[192.168.39.67:{}]
	I1030 18:46:14.313513       1 main.go:324] Node ha-174833-m02 has CIDR [10.244.1.0/24] 
	I1030 18:46:14.313806       1 main.go:297] Handling node with IPs: map[192.168.39.238:{}]
	I1030 18:46:14.313832       1 main.go:324] Node ha-174833-m03 has CIDR [10.244.2.0/24] 
	I1030 18:46:14.314013       1 main.go:297] Handling node with IPs: map[192.168.39.123:{}]
	I1030 18:46:14.314036       1 main.go:324] Node ha-174833-m04 has CIDR [10.244.3.0/24] 
	I1030 18:46:24.319131       1 main.go:297] Handling node with IPs: map[192.168.39.141:{}]
	I1030 18:46:24.319165       1 main.go:301] handling current node
	I1030 18:46:24.319180       1 main.go:297] Handling node with IPs: map[192.168.39.67:{}]
	I1030 18:46:24.319184       1 main.go:324] Node ha-174833-m02 has CIDR [10.244.1.0/24] 
	I1030 18:46:24.319509       1 main.go:297] Handling node with IPs: map[192.168.39.238:{}]
	I1030 18:46:24.319534       1 main.go:324] Node ha-174833-m03 has CIDR [10.244.2.0/24] 
	I1030 18:46:24.319684       1 main.go:297] Handling node with IPs: map[192.168.39.123:{}]
	I1030 18:46:24.319708       1 main.go:324] Node ha-174833-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [661ed7108dbf576219f0d0850f1ff6e5884d78718a480882365b8f283ab57acb] <==
	I1030 18:39:50.264612       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1030 18:39:50.401162       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1030 18:39:50.407669       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.141]
	I1030 18:39:50.408487       1 controller.go:615] quota admission added evaluator for: endpoints
	I1030 18:39:50.417171       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1030 18:39:50.434785       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1030 18:39:51.992504       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1030 18:39:52.038007       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1030 18:39:52.050097       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1030 18:39:55.887886       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1030 18:39:56.039666       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1030 18:42:42.298130       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41446: use of closed network connection
	E1030 18:42:42.500141       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41460: use of closed network connection
	E1030 18:42:42.681190       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41478: use of closed network connection
	E1030 18:42:42.876163       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41496: use of closed network connection
	E1030 18:42:43.053880       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41524: use of closed network connection
	E1030 18:42:43.422726       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41570: use of closed network connection
	E1030 18:42:43.605703       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41578: use of closed network connection
	E1030 18:42:43.785641       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41594: use of closed network connection
	E1030 18:42:44.079143       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41622: use of closed network connection
	E1030 18:42:44.278108       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41630: use of closed network connection
	E1030 18:42:44.464009       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41654: use of closed network connection
	E1030 18:42:44.647039       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41670: use of closed network connection
	E1030 18:42:44.825565       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41686: use of closed network connection
	E1030 18:42:45.007583       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41704: use of closed network connection
	
	
	==> kube-controller-manager [db863ebdc17e0d81937b5738a5fe433285fd91846fa54c761cf0cd4cd4987b73] <==
	I1030 18:43:14.768963       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:14.886660       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:15.225099       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-174833-m04"
	I1030 18:43:15.270413       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:15.350905       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:17.242429       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:17.306242       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:17.754966       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:17.845608       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:24.906507       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:35.742819       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:35.743714       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-174833-m04"
	I1030 18:43:35.758129       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:37.268796       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:45.220918       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:44:30.252088       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m02"
	I1030 18:44:30.252535       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-174833-m04"
	I1030 18:44:30.280327       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m02"
	I1030 18:44:30.294546       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="25.947854ms"
	I1030 18:44:30.294861       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="105.928µs"
	I1030 18:44:30.441730       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="22.437828ms"
	I1030 18:44:30.442971       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="183.461µs"
	I1030 18:44:32.399995       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m02"
	I1030 18:44:35.500584       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m02"
	I1030 18:45:28.632096       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833"
	
	
	==> kube-proxy [634060e657ba25894ea35d4e8b8429b57de139396a52f53af88876e683de5740] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1030 18:39:57.657528       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1030 18:39:57.672099       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.141"]
	E1030 18:39:57.672270       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1030 18:39:57.707431       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1030 18:39:57.707476       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1030 18:39:57.707498       1 server_linux.go:169] "Using iptables Proxier"
	I1030 18:39:57.710062       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1030 18:39:57.710384       1 server.go:483] "Version info" version="v1.31.2"
	I1030 18:39:57.710412       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1030 18:39:57.711719       1 config.go:199] "Starting service config controller"
	I1030 18:39:57.711756       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1030 18:39:57.711783       1 config.go:105] "Starting endpoint slice config controller"
	I1030 18:39:57.711787       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1030 18:39:57.712612       1 config.go:328] "Starting node config controller"
	I1030 18:39:57.712701       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1030 18:39:57.812186       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1030 18:39:57.812427       1 shared_informer.go:320] Caches are synced for service config
	I1030 18:39:57.813054       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6f0fb508f1f86d4bc039adab40689367f34f5141a90043b092e43663b1f959a6] <==
	W1030 18:39:49.816172       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1030 18:39:49.816268       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1030 18:39:49.949917       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1030 18:39:49.949971       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1030 18:39:49.991072       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1030 18:39:49.991150       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1030 18:39:52.691806       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1030 18:42:33.022088       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-mm586\": pod busybox-7dff88458-mm586 is already assigned to node \"ha-174833-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-mm586" node="ha-174833-m03"
	E1030 18:42:33.022366       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-mm586\": pod busybox-7dff88458-mm586 is already assigned to node \"ha-174833-m02\"" pod="default/busybox-7dff88458-mm586"
	E1030 18:43:14.801891       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-nzl42\": pod kube-proxy-nzl42 is already assigned to node \"ha-174833-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-nzl42" node="ha-174833-m04"
	E1030 18:43:14.807808       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 3291acf1-7798-4998-95fd-5094835e017f(kube-system/kube-proxy-nzl42) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-nzl42"
	E1030 18:43:14.807930       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-nzl42\": pod kube-proxy-nzl42 is already assigned to node \"ha-174833-m04\"" pod="kube-system/kube-proxy-nzl42"
	I1030 18:43:14.809848       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-nzl42" node="ha-174833-m04"
	E1030 18:43:14.810858       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-ptwbp\": pod kindnet-ptwbp is already assigned to node \"ha-174833-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-ptwbp" node="ha-174833-m04"
	E1030 18:43:14.814494       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 3144d47c-0cef-414b-b657-6a3c10ada751(kube-system/kindnet-ptwbp) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-ptwbp"
	E1030 18:43:14.814760       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-ptwbp\": pod kindnet-ptwbp is already assigned to node \"ha-174833-m04\"" pod="kube-system/kindnet-ptwbp"
	I1030 18:43:14.814869       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-ptwbp" node="ha-174833-m04"
	E1030 18:43:14.859158       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-vp4bf\": pod kube-proxy-vp4bf is already assigned to node \"ha-174833-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-vp4bf" node="ha-174833-m04"
	E1030 18:43:14.859832       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 51293c2a-e424-4d2b-a692-1d8df3e4eb88(kube-system/kube-proxy-vp4bf) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-vp4bf"
	E1030 18:43:14.860153       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-vp4bf\": pod kube-proxy-vp4bf is already assigned to node \"ha-174833-m04\"" pod="kube-system/kube-proxy-vp4bf"
	I1030 18:43:14.860458       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-vp4bf" node="ha-174833-m04"
	E1030 18:43:14.864834       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-dsxh6\": pod kindnet-dsxh6 is already assigned to node \"ha-174833-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-dsxh6" node="ha-174833-m04"
	E1030 18:43:14.866342       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 3cf9c20d-84c1-4bd6-8f34-453bee8cc673(kube-system/kindnet-dsxh6) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-dsxh6"
	E1030 18:43:14.866529       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-dsxh6\": pod kindnet-dsxh6 is already assigned to node \"ha-174833-m04\"" pod="kube-system/kindnet-dsxh6"
	I1030 18:43:14.866552       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-dsxh6" node="ha-174833-m04"
	
	
	==> kubelet <==
	Oct 30 18:44:52 ha-174833 kubelet[1302]: E1030 18:44:52.044104    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313892043714010,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:44:52 ha-174833 kubelet[1302]: E1030 18:44:52.044143    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313892043714010,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:02 ha-174833 kubelet[1302]: E1030 18:45:02.047183    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313902046655317,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:02 ha-174833 kubelet[1302]: E1030 18:45:02.047499    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313902046655317,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:12 ha-174833 kubelet[1302]: E1030 18:45:12.048946    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313912048650625,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:12 ha-174833 kubelet[1302]: E1030 18:45:12.049303    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313912048650625,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:22 ha-174833 kubelet[1302]: E1030 18:45:22.050794    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313922050546484,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:22 ha-174833 kubelet[1302]: E1030 18:45:22.050834    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313922050546484,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:32 ha-174833 kubelet[1302]: E1030 18:45:32.053552    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313932053080303,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:32 ha-174833 kubelet[1302]: E1030 18:45:32.053658    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313932053080303,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:42 ha-174833 kubelet[1302]: E1030 18:45:42.055784    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313942055446197,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:42 ha-174833 kubelet[1302]: E1030 18:45:42.056077    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313942055446197,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:51 ha-174833 kubelet[1302]: E1030 18:45:51.922951    1302 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 30 18:45:51 ha-174833 kubelet[1302]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 30 18:45:51 ha-174833 kubelet[1302]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 30 18:45:51 ha-174833 kubelet[1302]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 30 18:45:51 ha-174833 kubelet[1302]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 30 18:45:52 ha-174833 kubelet[1302]: E1030 18:45:52.058449    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313952057983888,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:52 ha-174833 kubelet[1302]: E1030 18:45:52.058518    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313952057983888,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:46:02 ha-174833 kubelet[1302]: E1030 18:46:02.060855    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313962060627661,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:46:02 ha-174833 kubelet[1302]: E1030 18:46:02.060895    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313962060627661,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:46:12 ha-174833 kubelet[1302]: E1030 18:46:12.062294    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313972061933470,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:46:12 ha-174833 kubelet[1302]: E1030 18:46:12.062632    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313972061933470,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:46:22 ha-174833 kubelet[1302]: E1030 18:46:22.064946    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313982064558351,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:46:22 ha-174833 kubelet[1302]: E1030 18:46:22.064979    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313982064558351,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-174833 -n ha-174833
helpers_test.go:261: (dbg) Run:  kubectl --context ha-174833 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (6.29s)
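The kubelet excerpt above repeatedly logs the ip6tables "canary" failure ("can't initialize ip6tables table `nat'"). A minimal sketch, not part of the test suite, of reproducing that check by hand, assuming the ha-174833 profile is still running and reachable over SSH:

    out/minikube-linux-amd64 -p ha-174833 ssh -- sudo ip6tables -t nat -L
    # Should print the same "Table does not exist (do you need to insmod?)"
    # error if the ip6tables nat module is not loaded in the guest (the
    # module name is typically ip6table_nat for the legacy backend, but may
    # differ in the minikube guest image).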

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.065672054s)
ha_test.go:309: expected profile "ha-174833" in json of 'profile list' to have "HAppy" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-174833\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-174833\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\
"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-174833\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.141\",\"Port\":8443,\"Kuberne
tesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.67\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.238\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.123\",\"Port\":0,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\
":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\
"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-174833 -n ha-174833
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-174833 logs -n 25: (1.37390499s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-174833 cp ha-174833-m03:/home/docker/cp-test.txt                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833:/home/docker/cp-test_ha-174833-m03_ha-174833.txt                       |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n ha-174833 sudo cat                                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | /home/docker/cp-test_ha-174833-m03_ha-174833.txt                                 |           |         |         |                     |                     |
	| cp      | ha-174833 cp ha-174833-m03:/home/docker/cp-test.txt                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m02:/home/docker/cp-test_ha-174833-m03_ha-174833-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n ha-174833-m02 sudo cat                                          | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | /home/docker/cp-test_ha-174833-m03_ha-174833-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-174833 cp ha-174833-m03:/home/docker/cp-test.txt                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m04:/home/docker/cp-test_ha-174833-m03_ha-174833-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n ha-174833-m04 sudo cat                                          | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | /home/docker/cp-test_ha-174833-m03_ha-174833-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-174833 cp testdata/cp-test.txt                                                | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-174833 cp ha-174833-m04:/home/docker/cp-test.txt                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1983504479/001/cp-test_ha-174833-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-174833 cp ha-174833-m04:/home/docker/cp-test.txt                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833:/home/docker/cp-test_ha-174833-m04_ha-174833.txt                       |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n ha-174833 sudo cat                                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | /home/docker/cp-test_ha-174833-m04_ha-174833.txt                                 |           |         |         |                     |                     |
	| cp      | ha-174833 cp ha-174833-m04:/home/docker/cp-test.txt                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m02:/home/docker/cp-test_ha-174833-m04_ha-174833-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n ha-174833-m02 sudo cat                                          | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | /home/docker/cp-test_ha-174833-m04_ha-174833-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-174833 cp ha-174833-m04:/home/docker/cp-test.txt                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m03:/home/docker/cp-test_ha-174833-m04_ha-174833-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n ha-174833-m03 sudo cat                                          | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | /home/docker/cp-test_ha-174833-m04_ha-174833-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-174833 node stop m02 -v=7                                                     | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-174833 node start m02 -v=7                                                    | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:46 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/30 18:39:13
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1030 18:39:13.284465  400041 out.go:345] Setting OutFile to fd 1 ...
	I1030 18:39:13.284583  400041 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 18:39:13.284591  400041 out.go:358] Setting ErrFile to fd 2...
	I1030 18:39:13.284596  400041 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 18:39:13.284767  400041 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19883-381834/.minikube/bin
	I1030 18:39:13.285341  400041 out.go:352] Setting JSON to false
	I1030 18:39:13.286279  400041 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":8496,"bootTime":1730305057,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1030 18:39:13.286383  400041 start.go:139] virtualization: kvm guest
	I1030 18:39:13.288640  400041 out.go:177] * [ha-174833] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1030 18:39:13.290653  400041 notify.go:220] Checking for updates...
	I1030 18:39:13.290717  400041 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 18:39:13.292349  400041 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 18:39:13.293858  400041 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 18:39:13.295309  400041 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 18:39:13.296710  400041 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1030 18:39:13.298107  400041 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 18:39:13.299548  400041 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 18:39:13.333903  400041 out.go:177] * Using the kvm2 driver based on user configuration
	I1030 18:39:13.335174  400041 start.go:297] selected driver: kvm2
	I1030 18:39:13.335194  400041 start.go:901] validating driver "kvm2" against <nil>
	I1030 18:39:13.335206  400041 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 18:39:13.335896  400041 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 18:39:13.336007  400041 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19883-381834/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1030 18:39:13.350868  400041 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1030 18:39:13.350946  400041 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1030 18:39:13.351232  400041 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 18:39:13.351271  400041 cni.go:84] Creating CNI manager for ""
	I1030 18:39:13.351324  400041 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1030 18:39:13.351332  400041 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1030 18:39:13.351398  400041 start.go:340] cluster config:
	{Name:ha-174833 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-174833 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1030 18:39:13.351547  400041 iso.go:125] acquiring lock: {Name:mk4a5115b605a7a5e9069193daa664d721189792 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 18:39:13.353340  400041 out.go:177] * Starting "ha-174833" primary control-plane node in "ha-174833" cluster
	I1030 18:39:13.354531  400041 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 18:39:13.354568  400041 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1030 18:39:13.354580  400041 cache.go:56] Caching tarball of preloaded images
	I1030 18:39:13.354663  400041 preload.go:172] Found /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1030 18:39:13.354676  400041 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1030 18:39:13.355016  400041 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/config.json ...
	I1030 18:39:13.355043  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/config.json: {Name:mkc5b46cd8e85bcdd2d75c56d8807d384c7babe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:39:13.355179  400041 start.go:360] acquireMachinesLock for ha-174833: {Name:mk35e25f53fa8cfadb39ca0ecdccfc2b3fbe845b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 18:39:13.355220  400041 start.go:364] duration metric: took 25.55µs to acquireMachinesLock for "ha-174833"
	I1030 18:39:13.355242  400041 start.go:93] Provisioning new machine with config: &{Name:ha-174833 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:ha-174833 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 18:39:13.355302  400041 start.go:125] createHost starting for "" (driver="kvm2")
	I1030 18:39:13.356866  400041 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1030 18:39:13.357003  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:39:13.357058  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:39:13.371132  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37311
	I1030 18:39:13.371590  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:39:13.372159  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:39:13.372180  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:39:13.372504  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:39:13.372689  400041 main.go:141] libmachine: (ha-174833) Calling .GetMachineName
	I1030 18:39:13.372808  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:39:13.372956  400041 start.go:159] libmachine.API.Create for "ha-174833" (driver="kvm2")
	I1030 18:39:13.372989  400041 client.go:168] LocalClient.Create starting
	I1030 18:39:13.373021  400041 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem
	I1030 18:39:13.373056  400041 main.go:141] libmachine: Decoding PEM data...
	I1030 18:39:13.373078  400041 main.go:141] libmachine: Parsing certificate...
	I1030 18:39:13.373144  400041 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem
	I1030 18:39:13.373168  400041 main.go:141] libmachine: Decoding PEM data...
	I1030 18:39:13.373183  400041 main.go:141] libmachine: Parsing certificate...
	I1030 18:39:13.373207  400041 main.go:141] libmachine: Running pre-create checks...
	I1030 18:39:13.373219  400041 main.go:141] libmachine: (ha-174833) Calling .PreCreateCheck
	I1030 18:39:13.373569  400041 main.go:141] libmachine: (ha-174833) Calling .GetConfigRaw
	I1030 18:39:13.373996  400041 main.go:141] libmachine: Creating machine...
	I1030 18:39:13.374012  400041 main.go:141] libmachine: (ha-174833) Calling .Create
	I1030 18:39:13.374145  400041 main.go:141] libmachine: (ha-174833) Creating KVM machine...
	I1030 18:39:13.375320  400041 main.go:141] libmachine: (ha-174833) DBG | found existing default KVM network
	I1030 18:39:13.375998  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:13.375838  400064 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002111f0}
	I1030 18:39:13.376021  400041 main.go:141] libmachine: (ha-174833) DBG | created network xml: 
	I1030 18:39:13.376034  400041 main.go:141] libmachine: (ha-174833) DBG | <network>
	I1030 18:39:13.376048  400041 main.go:141] libmachine: (ha-174833) DBG |   <name>mk-ha-174833</name>
	I1030 18:39:13.376057  400041 main.go:141] libmachine: (ha-174833) DBG |   <dns enable='no'/>
	I1030 18:39:13.376066  400041 main.go:141] libmachine: (ha-174833) DBG |   
	I1030 18:39:13.376076  400041 main.go:141] libmachine: (ha-174833) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1030 18:39:13.376085  400041 main.go:141] libmachine: (ha-174833) DBG |     <dhcp>
	I1030 18:39:13.376097  400041 main.go:141] libmachine: (ha-174833) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1030 18:39:13.376112  400041 main.go:141] libmachine: (ha-174833) DBG |     </dhcp>
	I1030 18:39:13.376121  400041 main.go:141] libmachine: (ha-174833) DBG |   </ip>
	I1030 18:39:13.376134  400041 main.go:141] libmachine: (ha-174833) DBG |   
	I1030 18:39:13.376145  400041 main.go:141] libmachine: (ha-174833) DBG | </network>
	I1030 18:39:13.376153  400041 main.go:141] libmachine: (ha-174833) DBG | 
	I1030 18:39:13.380994  400041 main.go:141] libmachine: (ha-174833) DBG | trying to create private KVM network mk-ha-174833 192.168.39.0/24...
	I1030 18:39:13.444397  400041 main.go:141] libmachine: (ha-174833) DBG | private KVM network mk-ha-174833 192.168.39.0/24 created
	I1030 18:39:13.444439  400041 main.go:141] libmachine: (ha-174833) Setting up store path in /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833 ...
	I1030 18:39:13.444454  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:13.444367  400064 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 18:39:13.444474  400041 main.go:141] libmachine: (ha-174833) Building disk image from file:///home/jenkins/minikube-integration/19883-381834/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1030 18:39:13.444565  400041 main.go:141] libmachine: (ha-174833) Downloading /home/jenkins/minikube-integration/19883-381834/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19883-381834/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1030 18:39:13.725521  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:13.725350  400064 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa...
	I1030 18:39:13.832228  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:13.832066  400064 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/ha-174833.rawdisk...
	I1030 18:39:13.832262  400041 main.go:141] libmachine: (ha-174833) DBG | Writing magic tar header
	I1030 18:39:13.832279  400041 main.go:141] libmachine: (ha-174833) DBG | Writing SSH key tar header
	I1030 18:39:13.832291  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:13.832203  400064 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833 ...
	I1030 18:39:13.832302  400041 main.go:141] libmachine: (ha-174833) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833
	I1030 18:39:13.832373  400041 main.go:141] libmachine: (ha-174833) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833 (perms=drwx------)
	I1030 18:39:13.832401  400041 main.go:141] libmachine: (ha-174833) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube/machines (perms=drwxr-xr-x)
	I1030 18:39:13.832414  400041 main.go:141] libmachine: (ha-174833) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube/machines
	I1030 18:39:13.832431  400041 main.go:141] libmachine: (ha-174833) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 18:39:13.832442  400041 main.go:141] libmachine: (ha-174833) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834
	I1030 18:39:13.832452  400041 main.go:141] libmachine: (ha-174833) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1030 18:39:13.832462  400041 main.go:141] libmachine: (ha-174833) DBG | Checking permissions on dir: /home/jenkins
	I1030 18:39:13.832473  400041 main.go:141] libmachine: (ha-174833) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube (perms=drwxr-xr-x)
	I1030 18:39:13.832490  400041 main.go:141] libmachine: (ha-174833) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834 (perms=drwxrwxr-x)
	I1030 18:39:13.832506  400041 main.go:141] libmachine: (ha-174833) DBG | Checking permissions on dir: /home
	I1030 18:39:13.832517  400041 main.go:141] libmachine: (ha-174833) DBG | Skipping /home - not owner
	I1030 18:39:13.832528  400041 main.go:141] libmachine: (ha-174833) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1030 18:39:13.832538  400041 main.go:141] libmachine: (ha-174833) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1030 18:39:13.832550  400041 main.go:141] libmachine: (ha-174833) Creating domain...
	I1030 18:39:13.833717  400041 main.go:141] libmachine: (ha-174833) define libvirt domain using xml: 
	I1030 18:39:13.833738  400041 main.go:141] libmachine: (ha-174833) <domain type='kvm'>
	I1030 18:39:13.833744  400041 main.go:141] libmachine: (ha-174833)   <name>ha-174833</name>
	I1030 18:39:13.833752  400041 main.go:141] libmachine: (ha-174833)   <memory unit='MiB'>2200</memory>
	I1030 18:39:13.833758  400041 main.go:141] libmachine: (ha-174833)   <vcpu>2</vcpu>
	I1030 18:39:13.833762  400041 main.go:141] libmachine: (ha-174833)   <features>
	I1030 18:39:13.833766  400041 main.go:141] libmachine: (ha-174833)     <acpi/>
	I1030 18:39:13.833770  400041 main.go:141] libmachine: (ha-174833)     <apic/>
	I1030 18:39:13.833774  400041 main.go:141] libmachine: (ha-174833)     <pae/>
	I1030 18:39:13.833794  400041 main.go:141] libmachine: (ha-174833)     
	I1030 18:39:13.833807  400041 main.go:141] libmachine: (ha-174833)   </features>
	I1030 18:39:13.833814  400041 main.go:141] libmachine: (ha-174833)   <cpu mode='host-passthrough'>
	I1030 18:39:13.833838  400041 main.go:141] libmachine: (ha-174833)   
	I1030 18:39:13.833857  400041 main.go:141] libmachine: (ha-174833)   </cpu>
	I1030 18:39:13.833863  400041 main.go:141] libmachine: (ha-174833)   <os>
	I1030 18:39:13.833868  400041 main.go:141] libmachine: (ha-174833)     <type>hvm</type>
	I1030 18:39:13.833884  400041 main.go:141] libmachine: (ha-174833)     <boot dev='cdrom'/>
	I1030 18:39:13.833888  400041 main.go:141] libmachine: (ha-174833)     <boot dev='hd'/>
	I1030 18:39:13.833904  400041 main.go:141] libmachine: (ha-174833)     <bootmenu enable='no'/>
	I1030 18:39:13.833912  400041 main.go:141] libmachine: (ha-174833)   </os>
	I1030 18:39:13.833917  400041 main.go:141] libmachine: (ha-174833)   <devices>
	I1030 18:39:13.833922  400041 main.go:141] libmachine: (ha-174833)     <disk type='file' device='cdrom'>
	I1030 18:39:13.834007  400041 main.go:141] libmachine: (ha-174833)       <source file='/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/boot2docker.iso'/>
	I1030 18:39:13.834043  400041 main.go:141] libmachine: (ha-174833)       <target dev='hdc' bus='scsi'/>
	I1030 18:39:13.834066  400041 main.go:141] libmachine: (ha-174833)       <readonly/>
	I1030 18:39:13.834080  400041 main.go:141] libmachine: (ha-174833)     </disk>
	I1030 18:39:13.834092  400041 main.go:141] libmachine: (ha-174833)     <disk type='file' device='disk'>
	I1030 18:39:13.834107  400041 main.go:141] libmachine: (ha-174833)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1030 18:39:13.834134  400041 main.go:141] libmachine: (ha-174833)       <source file='/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/ha-174833.rawdisk'/>
	I1030 18:39:13.834146  400041 main.go:141] libmachine: (ha-174833)       <target dev='hda' bus='virtio'/>
	I1030 18:39:13.834163  400041 main.go:141] libmachine: (ha-174833)     </disk>
	I1030 18:39:13.834179  400041 main.go:141] libmachine: (ha-174833)     <interface type='network'>
	I1030 18:39:13.834191  400041 main.go:141] libmachine: (ha-174833)       <source network='mk-ha-174833'/>
	I1030 18:39:13.834199  400041 main.go:141] libmachine: (ha-174833)       <model type='virtio'/>
	I1030 18:39:13.834204  400041 main.go:141] libmachine: (ha-174833)     </interface>
	I1030 18:39:13.834213  400041 main.go:141] libmachine: (ha-174833)     <interface type='network'>
	I1030 18:39:13.834219  400041 main.go:141] libmachine: (ha-174833)       <source network='default'/>
	I1030 18:39:13.834228  400041 main.go:141] libmachine: (ha-174833)       <model type='virtio'/>
	I1030 18:39:13.834233  400041 main.go:141] libmachine: (ha-174833)     </interface>
	I1030 18:39:13.834244  400041 main.go:141] libmachine: (ha-174833)     <serial type='pty'>
	I1030 18:39:13.834261  400041 main.go:141] libmachine: (ha-174833)       <target port='0'/>
	I1030 18:39:13.834275  400041 main.go:141] libmachine: (ha-174833)     </serial>
	I1030 18:39:13.834287  400041 main.go:141] libmachine: (ha-174833)     <console type='pty'>
	I1030 18:39:13.834295  400041 main.go:141] libmachine: (ha-174833)       <target type='serial' port='0'/>
	I1030 18:39:13.834310  400041 main.go:141] libmachine: (ha-174833)     </console>
	I1030 18:39:13.834320  400041 main.go:141] libmachine: (ha-174833)     <rng model='virtio'>
	I1030 18:39:13.834333  400041 main.go:141] libmachine: (ha-174833)       <backend model='random'>/dev/random</backend>
	I1030 18:39:13.834342  400041 main.go:141] libmachine: (ha-174833)     </rng>
	I1030 18:39:13.834351  400041 main.go:141] libmachine: (ha-174833)     
	I1030 18:39:13.834359  400041 main.go:141] libmachine: (ha-174833)     
	I1030 18:39:13.834368  400041 main.go:141] libmachine: (ha-174833)   </devices>
	I1030 18:39:13.834377  400041 main.go:141] libmachine: (ha-174833) </domain>
	I1030 18:39:13.834388  400041 main.go:141] libmachine: (ha-174833) 
	I1030 18:39:13.838852  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:67:40:5d in network default
	I1030 18:39:13.839421  400041 main.go:141] libmachine: (ha-174833) Ensuring networks are active...
	I1030 18:39:13.839441  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:13.840041  400041 main.go:141] libmachine: (ha-174833) Ensuring network default is active
	I1030 18:39:13.840342  400041 main.go:141] libmachine: (ha-174833) Ensuring network mk-ha-174833 is active
	I1030 18:39:13.840783  400041 main.go:141] libmachine: (ha-174833) Getting domain xml...
	I1030 18:39:13.841490  400041 main.go:141] libmachine: (ha-174833) Creating domain...
	I1030 18:39:15.028258  400041 main.go:141] libmachine: (ha-174833) Waiting to get IP...
	I1030 18:39:15.029201  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:15.029564  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:15.029614  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:15.029561  400064 retry.go:31] will retry after 241.896456ms: waiting for machine to come up
	I1030 18:39:15.272995  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:15.273461  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:15.273488  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:15.273413  400064 retry.go:31] will retry after 260.838664ms: waiting for machine to come up
	I1030 18:39:15.535845  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:15.536295  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:15.536316  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:15.536255  400064 retry.go:31] will retry after 479.733534ms: waiting for machine to come up
	I1030 18:39:16.017897  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:16.018269  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:16.018294  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:16.018228  400064 retry.go:31] will retry after 392.371571ms: waiting for machine to come up
	I1030 18:39:16.412626  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:16.413050  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:16.413080  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:16.412991  400064 retry.go:31] will retry after 692.689396ms: waiting for machine to come up
	I1030 18:39:17.106954  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:17.107478  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:17.107955  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:17.107422  400064 retry.go:31] will retry after 832.987361ms: waiting for machine to come up
	I1030 18:39:17.942300  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:17.942709  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:17.942756  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:17.942670  400064 retry.go:31] will retry after 1.191938703s: waiting for machine to come up
	I1030 18:39:19.135752  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:19.136105  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:19.136132  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:19.136082  400064 retry.go:31] will retry after 978.475739ms: waiting for machine to come up
	I1030 18:39:20.116239  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:20.116734  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:20.116762  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:20.116673  400064 retry.go:31] will retry after 1.671512667s: waiting for machine to come up
	I1030 18:39:21.790628  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:21.791129  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:21.791157  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:21.791069  400064 retry.go:31] will retry after 2.145808112s: waiting for machine to come up
	I1030 18:39:23.938308  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:23.938724  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:23.938750  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:23.938677  400064 retry.go:31] will retry after 2.206607406s: waiting for machine to come up
	I1030 18:39:26.148104  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:26.148464  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:26.148498  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:26.148437  400064 retry.go:31] will retry after 3.57155807s: waiting for machine to come up
	I1030 18:39:29.721895  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:29.722283  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find current IP address of domain ha-174833 in network mk-ha-174833
	I1030 18:39:29.722306  400041 main.go:141] libmachine: (ha-174833) DBG | I1030 18:39:29.722235  400064 retry.go:31] will retry after 4.087469223s: waiting for machine to come up
	I1030 18:39:33.811039  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:33.811489  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has current primary IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:33.811515  400041 main.go:141] libmachine: (ha-174833) Found IP for machine: 192.168.39.141
	I1030 18:39:33.811524  400041 main.go:141] libmachine: (ha-174833) Reserving static IP address...
	I1030 18:39:33.811821  400041 main.go:141] libmachine: (ha-174833) DBG | unable to find host DHCP lease matching {name: "ha-174833", mac: "52:54:00:fd:5e:ca", ip: "192.168.39.141"} in network mk-ha-174833
	I1030 18:39:33.884143  400041 main.go:141] libmachine: (ha-174833) Reserved static IP address: 192.168.39.141
	I1030 18:39:33.884173  400041 main.go:141] libmachine: (ha-174833) DBG | Getting to WaitForSSH function...
	I1030 18:39:33.884180  400041 main.go:141] libmachine: (ha-174833) Waiting for SSH to be available...
	I1030 18:39:33.886594  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:33.886971  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:33.886995  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:33.887140  400041 main.go:141] libmachine: (ha-174833) DBG | Using SSH client type: external
	I1030 18:39:33.887229  400041 main.go:141] libmachine: (ha-174833) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa (-rw-------)
	I1030 18:39:33.887264  400041 main.go:141] libmachine: (ha-174833) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.141 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 18:39:33.887276  400041 main.go:141] libmachine: (ha-174833) DBG | About to run SSH command:
	I1030 18:39:33.887284  400041 main.go:141] libmachine: (ha-174833) DBG | exit 0
	I1030 18:39:34.010284  400041 main.go:141] libmachine: (ha-174833) DBG | SSH cmd err, output: <nil>: 
	I1030 18:39:34.010612  400041 main.go:141] libmachine: (ha-174833) KVM machine creation complete!
	I1030 18:39:34.010940  400041 main.go:141] libmachine: (ha-174833) Calling .GetConfigRaw
	I1030 18:39:34.011543  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:39:34.011721  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:39:34.011891  400041 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1030 18:39:34.011905  400041 main.go:141] libmachine: (ha-174833) Calling .GetState
	I1030 18:39:34.013168  400041 main.go:141] libmachine: Detecting operating system of created instance...
	I1030 18:39:34.013181  400041 main.go:141] libmachine: Waiting for SSH to be available...
	I1030 18:39:34.013186  400041 main.go:141] libmachine: Getting to WaitForSSH function...
	I1030 18:39:34.013192  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:34.015485  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.015842  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:34.015874  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.015997  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:34.016168  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:34.016323  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:34.016452  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:34.016738  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:39:34.016961  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1030 18:39:34.016974  400041 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1030 18:39:34.117708  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 18:39:34.117732  400041 main.go:141] libmachine: Detecting the provisioner...
	I1030 18:39:34.117739  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:34.120384  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.120816  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:34.120860  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.120990  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:34.121177  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:34.121322  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:34.121422  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:34.121534  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:39:34.121721  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1030 18:39:34.121734  400041 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1030 18:39:34.222936  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1030 18:39:34.223027  400041 main.go:141] libmachine: found compatible host: buildroot
	I1030 18:39:34.223040  400041 main.go:141] libmachine: Provisioning with buildroot...
	I1030 18:39:34.223052  400041 main.go:141] libmachine: (ha-174833) Calling .GetMachineName
	I1030 18:39:34.223321  400041 buildroot.go:166] provisioning hostname "ha-174833"
	I1030 18:39:34.223356  400041 main.go:141] libmachine: (ha-174833) Calling .GetMachineName
	I1030 18:39:34.223546  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:34.225998  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.226300  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:34.226323  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.226503  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:34.226662  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:34.226803  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:34.226914  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:34.227040  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:39:34.227266  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1030 18:39:34.227279  400041 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-174833 && echo "ha-174833" | sudo tee /etc/hostname
	I1030 18:39:34.340995  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-174833
	
	I1030 18:39:34.341029  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:34.343841  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.344138  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:34.344167  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.344368  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:34.344558  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:34.344679  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:34.344790  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:34.344900  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:39:34.345070  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1030 18:39:34.345090  400041 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-174833' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-174833/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-174833' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 18:39:34.455073  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 18:39:34.455103  400041 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 18:39:34.455126  400041 buildroot.go:174] setting up certificates
	I1030 18:39:34.455146  400041 provision.go:84] configureAuth start
	I1030 18:39:34.455156  400041 main.go:141] libmachine: (ha-174833) Calling .GetMachineName
	I1030 18:39:34.455453  400041 main.go:141] libmachine: (ha-174833) Calling .GetIP
	I1030 18:39:34.458160  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.458507  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:34.458546  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.458737  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:34.461111  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.461454  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:34.461482  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.461548  400041 provision.go:143] copyHostCerts
	I1030 18:39:34.461581  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 18:39:34.461633  400041 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem, removing ...
	I1030 18:39:34.461648  400041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 18:39:34.461713  400041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 18:39:34.461793  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 18:39:34.461811  400041 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem, removing ...
	I1030 18:39:34.461816  400041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 18:39:34.461840  400041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 18:39:34.461880  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 18:39:34.461896  400041 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem, removing ...
	I1030 18:39:34.461902  400041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 18:39:34.461922  400041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 18:39:34.461968  400041 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.ha-174833 san=[127.0.0.1 192.168.39.141 ha-174833 localhost minikube]
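The provision step above generates a server certificate whose SANs cover the VM's IP (192.168.39.141), the hostname ha-174833 and the usual localhost/minikube names, signed against the ca.pem/ca-key.pem listed on the same line. As a rough standalone sketch of what "generating server cert ... san=[...]" amounts to, the Go program below builds a certificate template with those same SANs; it self-signs for brevity (minikube signs with its CA instead), and every identifier here is illustrative rather than minikube's actual code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key pair for the server certificate.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-174833"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list in the log line above.
		DNSNames:    []string{"ha-174833", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.141")},
	}

	// Self-signed here for brevity; minikube signs with ca.pem/ca-key.pem.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}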
	I1030 18:39:34.715502  400041 provision.go:177] copyRemoteCerts
	I1030 18:39:34.715567  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 18:39:34.715593  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:34.718337  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.718724  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:34.718750  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.718905  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:34.719124  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:34.719316  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:34.719438  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:39:34.802134  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1030 18:39:34.802247  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1030 18:39:34.830405  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1030 18:39:34.830495  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 18:39:34.853312  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1030 18:39:34.853400  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1030 18:39:34.876622  400041 provision.go:87] duration metric: took 421.460858ms to configureAuth
	I1030 18:39:34.876654  400041 buildroot.go:189] setting minikube options for container-runtime
	I1030 18:39:34.876860  400041 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:39:34.876973  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:34.879465  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.879875  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:34.879918  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:34.880033  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:34.880249  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:34.880401  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:34.880547  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:34.880711  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:39:34.880901  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1030 18:39:34.880922  400041 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 18:39:35.107739  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 18:39:35.107767  400041 main.go:141] libmachine: Checking connection to Docker...
	I1030 18:39:35.107789  400041 main.go:141] libmachine: (ha-174833) Calling .GetURL
	I1030 18:39:35.109044  400041 main.go:141] libmachine: (ha-174833) DBG | Using libvirt version 6000000
	I1030 18:39:35.111179  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.111531  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:35.111555  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.111678  400041 main.go:141] libmachine: Docker is up and running!
	I1030 18:39:35.111690  400041 main.go:141] libmachine: Reticulating splines...
	I1030 18:39:35.111698  400041 client.go:171] duration metric: took 21.738698891s to LocalClient.Create
	I1030 18:39:35.111719  400041 start.go:167] duration metric: took 21.738765345s to libmachine.API.Create "ha-174833"
	I1030 18:39:35.111730  400041 start.go:293] postStartSetup for "ha-174833" (driver="kvm2")
	I1030 18:39:35.111740  400041 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 18:39:35.111756  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:39:35.111994  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 18:39:35.112024  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:35.114247  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.114535  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:35.114564  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.114645  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:35.114802  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:35.114905  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:35.115037  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:39:35.197105  400041 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 18:39:35.201419  400041 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 18:39:35.201446  400041 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 18:39:35.201521  400041 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 18:39:35.201638  400041 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 18:39:35.201653  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> /etc/ssl/certs/3891442.pem
	I1030 18:39:35.201776  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 18:39:35.211530  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 18:39:35.234121  400041 start.go:296] duration metric: took 122.377861ms for postStartSetup
	I1030 18:39:35.234182  400041 main.go:141] libmachine: (ha-174833) Calling .GetConfigRaw
	I1030 18:39:35.234814  400041 main.go:141] libmachine: (ha-174833) Calling .GetIP
	I1030 18:39:35.237333  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.237649  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:35.237675  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.237930  400041 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/config.json ...
	I1030 18:39:35.238105  400041 start.go:128] duration metric: took 21.882791468s to createHost
	I1030 18:39:35.238129  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:35.240449  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.240793  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:35.240819  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.240925  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:35.241105  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:35.241241  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:35.241360  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:35.241504  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:39:35.241675  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1030 18:39:35.241684  400041 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 18:39:35.343143  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730313575.316321849
	
	I1030 18:39:35.343172  400041 fix.go:216] guest clock: 1730313575.316321849
	I1030 18:39:35.343179  400041 fix.go:229] Guest: 2024-10-30 18:39:35.316321849 +0000 UTC Remote: 2024-10-30 18:39:35.238116722 +0000 UTC m=+21.992904276 (delta=78.205127ms)
	I1030 18:39:35.343224  400041 fix.go:200] guest clock delta is within tolerance: 78.205127ms
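The three fix.go lines above parse the guest's `date +%s.%N` output, compare it against the host-side timestamp, and accept the ~78ms skew. A minimal Go sketch of that comparison, using the exact values from the log and a hypothetical 1-second tolerance (the real threshold lives in minikube's fix.go, not here):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Guest clock, i.e. the `date +%s.%N` output captured in the log above.
	out := "1730313575.316321849"
	parts := strings.SplitN(out, ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64) // assumes a full 9-digit fraction
	guest := time.Unix(sec, nsec)

	// Host-side timestamp from the same log line ("Remote: 2024-10-30 18:39:35.238116722 +0000 UTC").
	remote := time.Date(2024, 10, 30, 18, 39, 35, 238116722, time.UTC)

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	tolerance := time.Second // hypothetical threshold, not minikube's actual value
	if delta < tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
	}
}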
	I1030 18:39:35.343236  400041 start.go:83] releasing machines lock for "ha-174833", held for 21.988006549s
	I1030 18:39:35.343264  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:39:35.343537  400041 main.go:141] libmachine: (ha-174833) Calling .GetIP
	I1030 18:39:35.345918  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.346202  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:35.346227  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.346384  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:39:35.346845  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:39:35.347029  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:39:35.347110  400041 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 18:39:35.347154  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:35.347263  400041 ssh_runner.go:195] Run: cat /version.json
	I1030 18:39:35.347290  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:35.349953  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.350154  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.350349  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:35.350372  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.350476  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:35.350518  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:35.350532  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:35.350712  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:35.350796  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:35.350983  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:35.351005  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:35.351121  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:39:35.351158  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:35.351287  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:39:35.446752  400041 ssh_runner.go:195] Run: systemctl --version
	I1030 18:39:35.452799  400041 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 18:39:35.607404  400041 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 18:39:35.613689  400041 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 18:39:35.613765  400041 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 18:39:35.629322  400041 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 18:39:35.629356  400041 start.go:495] detecting cgroup driver to use...
	I1030 18:39:35.629426  400041 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 18:39:35.645369  400041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 18:39:35.659484  400041 docker.go:217] disabling cri-docker service (if available) ...
	I1030 18:39:35.659560  400041 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 18:39:35.673617  400041 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 18:39:35.686829  400041 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 18:39:35.798982  400041 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 18:39:35.961093  400041 docker.go:233] disabling docker service ...
	I1030 18:39:35.961203  400041 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 18:39:35.975451  400041 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 18:39:35.987814  400041 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 18:39:36.096019  400041 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 18:39:36.200364  400041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 18:39:36.213767  400041 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 18:39:36.231649  400041 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1030 18:39:36.231720  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:39:36.241504  400041 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 18:39:36.241612  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:39:36.251200  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:39:36.260995  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:39:36.270677  400041 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 18:39:36.280585  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:39:36.290337  400041 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:39:36.306289  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
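Taken together, the sed edits above rewrite CRI-O's drop-in config in place. Reconstructed from those commands (not captured from the VM), the relevant keys in /etc/crio/crio.conf.d/02-crio.conf end up looking roughly like this, sitting under the file's usual [crio.image] and [crio.runtime] sections; the `systemctl restart crio` a few lines below picks the new values up.

pause_image = "registry.k8s.io/pause:3.10"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]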
	I1030 18:39:36.316095  400041 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 18:39:36.325059  400041 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 18:39:36.325116  400041 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 18:39:36.338276  400041 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 18:39:36.347428  400041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 18:39:36.458431  400041 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1030 18:39:36.549399  400041 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 18:39:36.549481  400041 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 18:39:36.554177  400041 start.go:563] Will wait 60s for crictl version
	I1030 18:39:36.554235  400041 ssh_runner.go:195] Run: which crictl
	I1030 18:39:36.557819  400041 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 18:39:36.597751  400041 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1030 18:39:36.597863  400041 ssh_runner.go:195] Run: crio --version
	I1030 18:39:36.625326  400041 ssh_runner.go:195] Run: crio --version
	I1030 18:39:36.656926  400041 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1030 18:39:36.658453  400041 main.go:141] libmachine: (ha-174833) Calling .GetIP
	I1030 18:39:36.661076  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:36.661520  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:36.661551  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:36.661753  400041 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1030 18:39:36.665623  400041 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 18:39:36.678283  400041 kubeadm.go:883] updating cluster {Name:ha-174833 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-174833 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1030 18:39:36.678415  400041 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 18:39:36.678476  400041 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 18:39:36.710390  400041 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1030 18:39:36.710476  400041 ssh_runner.go:195] Run: which lz4
	I1030 18:39:36.714335  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1030 18:39:36.714421  400041 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1030 18:39:36.718401  400041 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1030 18:39:36.718426  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
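The pair of ssh_runner lines above shows the copy pattern: stat the remote path with `stat -c "%s %y"`, treat a non-zero exit (file missing) as "needs copy", then scp the preload tarball. A hedged Go sketch of that decision, where the run callback is a stand-in for minikube's SSH runner rather than its real API:

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

// needsCopy mirrors the existence check in the log: run `stat -c "%s %y"` on the
// remote path and copy when the command fails (file missing) or the size differs.
func needsCopy(run func(args ...string) (string, error), remotePath string, localSize int64) bool {
	out, err := run("stat", "-c", "%s %y", remotePath)
	if err != nil {
		return true // e.g. "No such file or directory", as in the log above
	}
	fields := strings.Fields(out)
	if len(fields) == 0 {
		return true
	}
	size, err := strconv.ParseInt(fields[0], 10, 64)
	return err != nil || size != localSize
}

func main() {
	// Local execution used purely to demo the helper; over SSH the command is the same.
	run := func(args ...string) (string, error) {
		out, err := exec.Command(args[0], args[1:]...).Output()
		return string(out), err
	}
	fmt.Println(needsCopy(run, "/preloaded.tar.lz4", 392059347))
}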
	I1030 18:39:37.991420  400041 crio.go:462] duration metric: took 1.277020496s to copy over tarball
	I1030 18:39:37.991500  400041 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1030 18:39:40.058678  400041 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.067148582s)
	I1030 18:39:40.058707  400041 crio.go:469] duration metric: took 2.067258506s to extract the tarball
	I1030 18:39:40.058717  400041 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1030 18:39:40.095680  400041 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 18:39:40.139024  400041 crio.go:514] all images are preloaded for cri-o runtime.
	I1030 18:39:40.139051  400041 cache_images.go:84] Images are preloaded, skipping loading
	I1030 18:39:40.139060  400041 kubeadm.go:934] updating node { 192.168.39.141 8443 v1.31.2 crio true true} ...
	I1030 18:39:40.139194  400041 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-174833 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.141
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-174833 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1030 18:39:40.139268  400041 ssh_runner.go:195] Run: crio config
	I1030 18:39:40.182736  400041 cni.go:84] Creating CNI manager for ""
	I1030 18:39:40.182762  400041 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1030 18:39:40.182776  400041 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1030 18:39:40.182809  400041 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.141 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-174833 NodeName:ha-174833 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.141"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.141 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1030 18:39:40.182965  400041 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.141
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-174833"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.141"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.141"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
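The kubeadm.go:189 and kubeadm.go:195 lines above turn minikube's option struct into the YAML just printed. As a rough sketch of that templating step, the Go program below renders a fragment of such a config from the same values via text/template; the struct fields and template text are invented for illustration and are not minikube's actual bsutil templates.

package main

import (
	"os"
	"text/template"
)

// opts is a tiny stand-in for minikube's kubeadm options struct.
type opts struct {
	AdvertiseAddress string
	NodeName         string
	PodSubnet        string
	ServiceCIDR      string
	K8sVersion       string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: 8443
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	// Values taken from the log above.
	t.Execute(os.Stdout, opts{
		AdvertiseAddress: "192.168.39.141",
		NodeName:         "ha-174833",
		PodSubnet:        "10.244.0.0/16",
		ServiceCIDR:      "10.96.0.0/12",
		K8sVersion:       "v1.31.2",
	})
}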
	
	I1030 18:39:40.182991  400041 kube-vip.go:115] generating kube-vip config ...
	I1030 18:39:40.183041  400041 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1030 18:39:40.198922  400041 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1030 18:39:40.199067  400041 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1030 18:39:40.199141  400041 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1030 18:39:40.208739  400041 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 18:39:40.208814  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1030 18:39:40.217747  400041 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1030 18:39:40.233431  400041 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 18:39:40.249487  400041 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1030 18:39:40.265703  400041 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1030 18:39:40.282041  400041 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1030 18:39:40.285892  400041 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 18:39:40.297652  400041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 18:39:40.407338  400041 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 18:39:40.424747  400041 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833 for IP: 192.168.39.141
	I1030 18:39:40.424777  400041 certs.go:194] generating shared ca certs ...
	I1030 18:39:40.424817  400041 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:39:40.425024  400041 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 18:39:40.425082  400041 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 18:39:40.425095  400041 certs.go:256] generating profile certs ...
	I1030 18:39:40.425172  400041 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.key
	I1030 18:39:40.425193  400041 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.crt with IP's: []
	I1030 18:39:40.472361  400041 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.crt ...
	I1030 18:39:40.472390  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.crt: {Name:mkc5230ad33247edd4a8c72c6c48a87fa9cedd3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:39:40.472564  400041 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.key ...
	I1030 18:39:40.472575  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.key: {Name:mk2476b29598bb2a9232a00c23240eb0f41fcc47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:39:40.472659  400041 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.8a55aae0
	I1030 18:39:40.472675  400041 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.8a55aae0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.141 192.168.39.254]
	I1030 18:39:40.623668  400041 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.8a55aae0 ...
	I1030 18:39:40.623703  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.8a55aae0: {Name:mk527af1a36a41edb105de0ac73afcba6a07951e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:39:40.623865  400041 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.8a55aae0 ...
	I1030 18:39:40.623878  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.8a55aae0: {Name:mk9d3db1edca5a6647774a57300dfc12ee759cac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:39:40.623943  400041 certs.go:381] copying /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.8a55aae0 -> /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt
	I1030 18:39:40.624014  400041 certs.go:385] copying /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.8a55aae0 -> /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key
	I1030 18:39:40.624064  400041 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key
	I1030 18:39:40.624080  400041 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.crt with IP's: []
	I1030 18:39:40.681800  400041 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.crt ...
	I1030 18:39:40.681833  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.crt: {Name:mke6c9a4a487817027f382c9db962d8a5023b692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:39:40.681991  400041 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key ...
	I1030 18:39:40.682001  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key: {Name:mkcef517ac3b25f9738ab0dc212031ff215f0337 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:39:40.682069  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1030 18:39:40.682086  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1030 18:39:40.682097  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1030 18:39:40.682118  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1030 18:39:40.682131  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1030 18:39:40.682142  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1030 18:39:40.682154  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1030 18:39:40.682166  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1030 18:39:40.682213  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem (1338 bytes)
	W1030 18:39:40.682246  400041 certs.go:480] ignoring /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144_empty.pem, impossibly tiny 0 bytes
	I1030 18:39:40.682256  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 18:39:40.682279  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 18:39:40.682301  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 18:39:40.682325  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 18:39:40.682365  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 18:39:40.682398  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> /usr/share/ca-certificates/3891442.pem
	I1030 18:39:40.682412  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:39:40.682432  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem -> /usr/share/ca-certificates/389144.pem
	I1030 18:39:40.683028  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 18:39:40.708651  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 18:39:40.731313  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 18:39:40.753734  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 18:39:40.776131  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1030 18:39:40.799436  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1030 18:39:40.822746  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 18:39:40.845786  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1030 18:39:40.869789  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /usr/share/ca-certificates/3891442.pem (1708 bytes)
	I1030 18:39:40.893594  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 18:39:40.916381  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem --> /usr/share/ca-certificates/389144.pem (1338 bytes)
	I1030 18:39:40.939683  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1030 18:39:40.956310  400041 ssh_runner.go:195] Run: openssl version
	I1030 18:39:40.962024  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3891442.pem && ln -fs /usr/share/ca-certificates/3891442.pem /etc/ssl/certs/3891442.pem"
	I1030 18:39:40.972261  400041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3891442.pem
	I1030 18:39:40.976598  400041 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 18:39:40.976650  400041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3891442.pem
	I1030 18:39:40.982403  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3891442.pem /etc/ssl/certs/3ec20f2e.0"
	I1030 18:39:40.992755  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 18:39:41.003221  400041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:39:41.007653  400041 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:39:41.007709  400041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:39:41.013218  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 18:39:41.023594  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/389144.pem && ln -fs /usr/share/ca-certificates/389144.pem /etc/ssl/certs/389144.pem"
	I1030 18:39:41.033911  400041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/389144.pem
	I1030 18:39:41.038607  400041 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 18:39:41.038673  400041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/389144.pem
	I1030 18:39:41.044095  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/389144.pem /etc/ssl/certs/51391683.0"
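Each of the three certificate blocks above ends with the same two commands: `openssl x509 -hash -noout` to compute the subject hash, then a `<hash>.0` symlink in /etc/ssl/certs so OpenSSL-based clients can locate the CA (hence names like 3ec20f2e.0, b5213941.0 and 51391683.0). A small Go sketch wrapping those exact commands; the helper name is made up, and writing into /etc/ssl/certs requires root.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hashLink mirrors the commands in the log: compute the OpenSSL subject hash of a
// CA file and symlink it as /etc/ssl/certs/<hash>.0.
func hashLink(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	return exec.Command("ln", "-fs", certPath, link).Run()
}

func main() {
	if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("hash link failed:", err)
	}
}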
	I1030 18:39:41.054143  400041 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 18:39:41.058096  400041 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1030 18:39:41.058161  400041 kubeadm.go:392] StartCluster: {Name:ha-174833 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-174833 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 18:39:41.058251  400041 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1030 18:39:41.058301  400041 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 18:39:41.095584  400041 cri.go:89] found id: ""
	I1030 18:39:41.095649  400041 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1030 18:39:41.105071  400041 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 18:39:41.114164  400041 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 18:39:41.122895  400041 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 18:39:41.122908  400041 kubeadm.go:157] found existing configuration files:
	
	I1030 18:39:41.122941  400041 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 18:39:41.131529  400041 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 18:39:41.131566  400041 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 18:39:41.140275  400041 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 18:39:41.148757  400041 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 18:39:41.148813  400041 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 18:39:41.160794  400041 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 18:39:41.184302  400041 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 18:39:41.184383  400041 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 18:39:41.207263  400041 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 18:39:41.228026  400041 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 18:39:41.228102  400041 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
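The grep/rm sequence above is minikube's stale-kubeconfig cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443 and is removed otherwise so kubeadm can regenerate it (here every grep fails simply because the files do not exist yet on a first start). A rough Go equivalent of that loop, using the same paths and endpoint:

package main

import (
	"bytes"
	"os"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, path := range confs {
		data, err := os.ReadFile(path)
		if err == nil && bytes.Contains(data, []byte(endpoint)) {
			continue // already points at the expected endpoint, keep it
		}
		_ = os.Remove(path) // equivalent of `sudo rm -f <path>`; a missing file is fine
	}
}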
	I1030 18:39:41.237111  400041 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1030 18:39:41.445375  400041 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1030 18:39:52.585541  400041 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1030 18:39:52.585616  400041 kubeadm.go:310] [preflight] Running pre-flight checks
	I1030 18:39:52.585710  400041 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1030 18:39:52.585832  400041 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1030 18:39:52.585956  400041 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1030 18:39:52.586025  400041 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1030 18:39:52.587620  400041 out.go:235]   - Generating certificates and keys ...
	I1030 18:39:52.587688  400041 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1030 18:39:52.587761  400041 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1030 18:39:52.587836  400041 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1030 18:39:52.587896  400041 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1030 18:39:52.587987  400041 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1030 18:39:52.588061  400041 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1030 18:39:52.588139  400041 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1030 18:39:52.588270  400041 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-174833 localhost] and IPs [192.168.39.141 127.0.0.1 ::1]
	I1030 18:39:52.588347  400041 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1030 18:39:52.588511  400041 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-174833 localhost] and IPs [192.168.39.141 127.0.0.1 ::1]
	I1030 18:39:52.588616  400041 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1030 18:39:52.588707  400041 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1030 18:39:52.588773  400041 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1030 18:39:52.588839  400041 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1030 18:39:52.588887  400041 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1030 18:39:52.588932  400041 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1030 18:39:52.589004  400041 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1030 18:39:52.589094  400041 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1030 18:39:52.589146  400041 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1030 18:39:52.589229  400041 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1030 18:39:52.589332  400041 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1030 18:39:52.590758  400041 out.go:235]   - Booting up control plane ...
	I1030 18:39:52.590844  400041 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1030 18:39:52.590916  400041 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1030 18:39:52.590968  400041 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1030 18:39:52.591065  400041 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1030 18:39:52.591191  400041 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1030 18:39:52.591253  400041 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1030 18:39:52.591410  400041 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1030 18:39:52.591536  400041 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1030 18:39:52.591616  400041 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.003124871s
	I1030 18:39:52.591709  400041 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1030 18:39:52.591794  400041 kubeadm.go:310] [api-check] The API server is healthy after 5.662047877s
	I1030 18:39:52.591944  400041 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1030 18:39:52.592125  400041 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1030 18:39:52.592192  400041 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1030 18:39:52.592401  400041 kubeadm.go:310] [mark-control-plane] Marking the node ha-174833 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1030 18:39:52.592456  400041 kubeadm.go:310] [bootstrap-token] Using token: g2rz2p.8nzvncljb4xmvqws
	I1030 18:39:52.593760  400041 out.go:235]   - Configuring RBAC rules ...
	I1030 18:39:52.593856  400041 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1030 18:39:52.593940  400041 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1030 18:39:52.594118  400041 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1030 18:39:52.594304  400041 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1030 18:39:52.594473  400041 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1030 18:39:52.594624  400041 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1030 18:39:52.594785  400041 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1030 18:39:52.594849  400041 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1030 18:39:52.594921  400041 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1030 18:39:52.594940  400041 kubeadm.go:310] 
	I1030 18:39:52.594996  400041 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1030 18:39:52.595002  400041 kubeadm.go:310] 
	I1030 18:39:52.595066  400041 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1030 18:39:52.595072  400041 kubeadm.go:310] 
	I1030 18:39:52.595106  400041 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1030 18:39:52.595167  400041 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1030 18:39:52.595211  400041 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1030 18:39:52.595217  400041 kubeadm.go:310] 
	I1030 18:39:52.595262  400041 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1030 18:39:52.595268  400041 kubeadm.go:310] 
	I1030 18:39:52.595323  400041 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1030 18:39:52.595331  400041 kubeadm.go:310] 
	I1030 18:39:52.595374  400041 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1030 18:39:52.595436  400041 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1030 18:39:52.595501  400041 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1030 18:39:52.595508  400041 kubeadm.go:310] 
	I1030 18:39:52.595599  400041 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1030 18:39:52.595699  400041 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1030 18:39:52.595708  400041 kubeadm.go:310] 
	I1030 18:39:52.595831  400041 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token g2rz2p.8nzvncljb4xmvqws \
	I1030 18:39:52.595945  400041 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b2143b9018fc79be6f35b8981f64b262448bd09f7227405c8dafa29481e81491 \
	I1030 18:39:52.595970  400041 kubeadm.go:310] 	--control-plane 
	I1030 18:39:52.595975  400041 kubeadm.go:310] 
	I1030 18:39:52.596043  400041 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1030 18:39:52.596049  400041 kubeadm.go:310] 
	I1030 18:39:52.596119  400041 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token g2rz2p.8nzvncljb4xmvqws \
	I1030 18:39:52.596231  400041 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b2143b9018fc79be6f35b8981f64b262448bd09f7227405c8dafa29481e81491 
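The --discovery-token-ca-cert-hash printed with the join commands is kubeadm's public-key pin: the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A short Go sketch that recomputes it (the ca.crt location under the certificateDir used in this run is an assumption):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // assumed CA path under the certificateDir
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm pins the CA by hashing the DER-encoded SubjectPublicKeyInfo.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}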
	I1030 18:39:52.596243  400041 cni.go:84] Creating CNI manager for ""
	I1030 18:39:52.596250  400041 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1030 18:39:52.597696  400041 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1030 18:39:52.598955  400041 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1030 18:39:52.605469  400041 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1030 18:39:52.605483  400041 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1030 18:39:52.624363  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1030 18:39:53.005173  400041 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1030 18:39:53.005262  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:39:53.005262  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-174833 minikube.k8s.io/updated_at=2024_10_30T18_39_53_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0 minikube.k8s.io/name=ha-174833 minikube.k8s.io/primary=true
	I1030 18:39:53.173403  400041 ops.go:34] apiserver oom_adj: -16
	I1030 18:39:53.173409  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:39:53.674475  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:39:54.173792  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:39:54.673541  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:39:55.174225  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:39:55.674171  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 18:39:55.765485  400041 kubeadm.go:1113] duration metric: took 2.760286908s to wait for elevateKubeSystemPrivileges
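The repeated `kubectl get sa default` calls above are a simple poll: minikube retries roughly every half second until the default service account exists, which is what the 2.76s elevateKubeSystemPrivileges metric measures. A sketch of that loop using the same command line as the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.31.2/kubectl",
			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
		if cmd.Run() == nil {
			fmt.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~0.5s spacing of the retries above
	}
	fmt.Println("timed out waiting for the default service account")
}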
	I1030 18:39:55.765536  400041 kubeadm.go:394] duration metric: took 14.707379512s to StartCluster
	I1030 18:39:55.765560  400041 settings.go:142] acquiring lock: {Name:mk3778f95b8c1775512fd8244c173f81fde2f8a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:39:55.765652  400041 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 18:39:55.766341  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/kubeconfig: {Name:mkad9ac9beb9742fc1a420e6d4412db8b65962e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:39:55.766618  400041 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1030 18:39:55.766613  400041 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 18:39:55.766643  400041 start.go:241] waiting for startup goroutines ...
	I1030 18:39:55.766652  400041 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1030 18:39:55.766742  400041 addons.go:69] Setting storage-provisioner=true in profile "ha-174833"
	I1030 18:39:55.766762  400041 addons.go:234] Setting addon storage-provisioner=true in "ha-174833"
	I1030 18:39:55.766765  400041 addons.go:69] Setting default-storageclass=true in profile "ha-174833"
	I1030 18:39:55.766787  400041 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-174833"
	I1030 18:39:55.766793  400041 host.go:66] Checking if "ha-174833" exists ...
	I1030 18:39:55.766837  400041 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:39:55.767201  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:39:55.767204  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:39:55.767229  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:39:55.767233  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:39:55.782451  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33151
	I1030 18:39:55.783028  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:39:55.783605  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:39:55.783632  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:39:55.783733  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40119
	I1030 18:39:55.784013  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:39:55.784063  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:39:55.784233  400041 main.go:141] libmachine: (ha-174833) Calling .GetState
	I1030 18:39:55.784551  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:39:55.784576  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:39:55.784948  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:39:55.785512  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:39:55.785543  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:39:55.786284  400041 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 18:39:55.786639  400041 kapi.go:59] client config for ha-174833: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.crt", KeyFile:"/home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.key", CAFile:"/home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439fe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1030 18:39:55.787187  400041 cert_rotation.go:140] Starting client certificate rotation controller
	I1030 18:39:55.787507  400041 addons.go:234] Setting addon default-storageclass=true in "ha-174833"
	I1030 18:39:55.787549  400041 host.go:66] Checking if "ha-174833" exists ...
	I1030 18:39:55.787801  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:39:55.787828  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:39:55.801215  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37805
	I1030 18:39:55.801753  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:39:55.802347  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:39:55.802374  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:39:55.802582  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34105
	I1030 18:39:55.802754  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:39:55.802945  400041 main.go:141] libmachine: (ha-174833) Calling .GetState
	I1030 18:39:55.802995  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:39:55.803462  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:39:55.803485  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:39:55.803870  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:39:55.804468  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:39:55.804521  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:39:55.804806  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:39:55.807396  400041 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 18:39:55.808701  400041 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 18:39:55.808721  400041 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1030 18:39:55.808736  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:55.812067  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:55.812493  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:55.812517  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:55.812683  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:55.812860  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:55.813040  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:55.813181  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:39:55.820594  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34503
	I1030 18:39:55.821053  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:39:55.821596  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:39:55.821614  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:39:55.821907  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:39:55.822100  400041 main.go:141] libmachine: (ha-174833) Calling .GetState
	I1030 18:39:55.823784  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:39:55.824021  400041 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1030 18:39:55.824035  400041 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1030 18:39:55.824050  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:39:55.826783  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:55.827199  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:39:55.827215  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:39:55.827366  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:39:55.827540  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:39:55.827698  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:39:55.827825  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:39:55.887739  400041 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1030 18:39:55.976821  400041 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1030 18:39:55.987770  400041 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 18:39:56.358196  400041 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1030 18:39:56.358229  400041 main.go:141] libmachine: Making call to close driver server
	I1030 18:39:56.358248  400041 main.go:141] libmachine: (ha-174833) Calling .Close
	I1030 18:39:56.358534  400041 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:39:56.358554  400041 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:39:56.358563  400041 main.go:141] libmachine: Making call to close driver server
	I1030 18:39:56.358570  400041 main.go:141] libmachine: (ha-174833) Calling .Close
	I1030 18:39:56.358835  400041 main.go:141] libmachine: (ha-174833) DBG | Closing plugin on server side
	I1030 18:39:56.358837  400041 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:39:56.358856  400041 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:39:56.358917  400041 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1030 18:39:56.358934  400041 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1030 18:39:56.359097  400041 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1030 18:39:56.359111  400041 round_trippers.go:469] Request Headers:
	I1030 18:39:56.359120  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:39:56.359128  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:39:56.431588  400041 round_trippers.go:574] Response Status: 200 OK in 72 milliseconds
	I1030 18:39:56.432175  400041 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1030 18:39:56.432191  400041 round_trippers.go:469] Request Headers:
	I1030 18:39:56.432198  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:39:56.432202  400041 round_trippers.go:473]     Content-Type: application/json
	I1030 18:39:56.432205  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:39:56.436115  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
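The GET followed by a PUT on storageclasses/standard is the default-storageclass addon making sure the "standard" StorageClass is marked as the cluster default. A hedged client-go sketch of what that round trip amounts to; the is-default-class annotation is the standard Kubernetes mechanism, the kubeconfig path is the host-side one from this log, and this is an illustration rather than minikube's actual code:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19883-381834/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	sc, err := client.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	if _, err := client.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("standard StorageClass marked as default")
}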
	I1030 18:39:56.436287  400041 main.go:141] libmachine: Making call to close driver server
	I1030 18:39:56.436303  400041 main.go:141] libmachine: (ha-174833) Calling .Close
	I1030 18:39:56.436618  400041 main.go:141] libmachine: (ha-174833) DBG | Closing plugin on server side
	I1030 18:39:56.436664  400041 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:39:56.436672  400041 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:39:56.590846  400041 main.go:141] libmachine: Making call to close driver server
	I1030 18:39:56.590868  400041 main.go:141] libmachine: (ha-174833) Calling .Close
	I1030 18:39:56.591203  400041 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:39:56.591227  400041 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:39:56.591236  400041 main.go:141] libmachine: Making call to close driver server
	I1030 18:39:56.591244  400041 main.go:141] libmachine: (ha-174833) Calling .Close
	I1030 18:39:56.591478  400041 main.go:141] libmachine: (ha-174833) DBG | Closing plugin on server side
	I1030 18:39:56.591507  400041 main.go:141] libmachine: Successfully made call to close driver server
	I1030 18:39:56.591514  400041 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 18:39:56.593000  400041 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1030 18:39:56.594031  400041 addons.go:510] duration metric: took 827.372801ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1030 18:39:56.594084  400041 start.go:246] waiting for cluster config update ...
	I1030 18:39:56.594100  400041 start.go:255] writing updated cluster config ...
	I1030 18:39:56.595822  400041 out.go:201] 
	I1030 18:39:56.597023  400041 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:39:56.597115  400041 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/config.json ...
	I1030 18:39:56.598537  400041 out.go:177] * Starting "ha-174833-m02" control-plane node in "ha-174833" cluster
	I1030 18:39:56.599471  400041 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 18:39:56.599502  400041 cache.go:56] Caching tarball of preloaded images
	I1030 18:39:56.599603  400041 preload.go:172] Found /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1030 18:39:56.599621  400041 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1030 18:39:56.599722  400041 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/config.json ...
	I1030 18:39:56.599927  400041 start.go:360] acquireMachinesLock for ha-174833-m02: {Name:mk35e25f53fa8cfadb39ca0ecdccfc2b3fbe845b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 18:39:56.599988  400041 start.go:364] duration metric: took 32.769µs to acquireMachinesLock for "ha-174833-m02"
	I1030 18:39:56.600025  400041 start.go:93] Provisioning new machine with config: &{Name:ha-174833 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-174833 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 18:39:56.600106  400041 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1030 18:39:56.601604  400041 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1030 18:39:56.601698  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:39:56.601732  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:39:56.616291  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39929
	I1030 18:39:56.616777  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:39:56.617304  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:39:56.617323  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:39:56.617636  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:39:56.617791  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetMachineName
	I1030 18:39:56.617923  400041 main.go:141] libmachine: (ha-174833-m02) Calling .DriverName
	I1030 18:39:56.618073  400041 start.go:159] libmachine.API.Create for "ha-174833" (driver="kvm2")
	I1030 18:39:56.618098  400041 client.go:168] LocalClient.Create starting
	I1030 18:39:56.618131  400041 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem
	I1030 18:39:56.618179  400041 main.go:141] libmachine: Decoding PEM data...
	I1030 18:39:56.618201  400041 main.go:141] libmachine: Parsing certificate...
	I1030 18:39:56.618275  400041 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem
	I1030 18:39:56.618304  400041 main.go:141] libmachine: Decoding PEM data...
	I1030 18:39:56.618320  400041 main.go:141] libmachine: Parsing certificate...
	I1030 18:39:56.618344  400041 main.go:141] libmachine: Running pre-create checks...
	I1030 18:39:56.618355  400041 main.go:141] libmachine: (ha-174833-m02) Calling .PreCreateCheck
	I1030 18:39:56.618511  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetConfigRaw
	I1030 18:39:56.618831  400041 main.go:141] libmachine: Creating machine...
	I1030 18:39:56.618844  400041 main.go:141] libmachine: (ha-174833-m02) Calling .Create
	I1030 18:39:56.618962  400041 main.go:141] libmachine: (ha-174833-m02) Creating KVM machine...
	I1030 18:39:56.620046  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found existing default KVM network
	I1030 18:39:56.620129  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found existing private KVM network mk-ha-174833
	I1030 18:39:56.620269  400041 main.go:141] libmachine: (ha-174833-m02) Setting up store path in /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02 ...
	I1030 18:39:56.620295  400041 main.go:141] libmachine: (ha-174833-m02) Building disk image from file:///home/jenkins/minikube-integration/19883-381834/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1030 18:39:56.620361  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:39:56.620250  400406 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 18:39:56.620446  400041 main.go:141] libmachine: (ha-174833-m02) Downloading /home/jenkins/minikube-integration/19883-381834/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19883-381834/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1030 18:39:56.895932  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:39:56.895765  400406 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02/id_rsa...
	I1030 18:39:57.037260  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:39:57.037116  400406 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02/ha-174833-m02.rawdisk...
	I1030 18:39:57.037293  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Writing magic tar header
	I1030 18:39:57.037303  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Writing SSH key tar header
	I1030 18:39:57.037311  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:39:57.037233  400406 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02 ...
	I1030 18:39:57.037321  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02
	I1030 18:39:57.037404  400041 main.go:141] libmachine: (ha-174833-m02) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02 (perms=drwx------)
	I1030 18:39:57.037429  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube/machines
	I1030 18:39:57.037440  400041 main.go:141] libmachine: (ha-174833-m02) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube/machines (perms=drwxr-xr-x)
	I1030 18:39:57.037453  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 18:39:57.037469  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834
	I1030 18:39:57.037479  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1030 18:39:57.037487  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Checking permissions on dir: /home/jenkins
	I1030 18:39:57.037494  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Checking permissions on dir: /home
	I1030 18:39:57.037515  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Skipping /home - not owner
	I1030 18:39:57.037531  400041 main.go:141] libmachine: (ha-174833-m02) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube (perms=drwxr-xr-x)
	I1030 18:39:57.037546  400041 main.go:141] libmachine: (ha-174833-m02) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834 (perms=drwxrwxr-x)
	I1030 18:39:57.037559  400041 main.go:141] libmachine: (ha-174833-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1030 18:39:57.037569  400041 main.go:141] libmachine: (ha-174833-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1030 18:39:57.037577  400041 main.go:141] libmachine: (ha-174833-m02) Creating domain...
	I1030 18:39:57.038511  400041 main.go:141] libmachine: (ha-174833-m02) define libvirt domain using xml: 
	I1030 18:39:57.038531  400041 main.go:141] libmachine: (ha-174833-m02) <domain type='kvm'>
	I1030 18:39:57.038538  400041 main.go:141] libmachine: (ha-174833-m02)   <name>ha-174833-m02</name>
	I1030 18:39:57.038542  400041 main.go:141] libmachine: (ha-174833-m02)   <memory unit='MiB'>2200</memory>
	I1030 18:39:57.038549  400041 main.go:141] libmachine: (ha-174833-m02)   <vcpu>2</vcpu>
	I1030 18:39:57.038556  400041 main.go:141] libmachine: (ha-174833-m02)   <features>
	I1030 18:39:57.038563  400041 main.go:141] libmachine: (ha-174833-m02)     <acpi/>
	I1030 18:39:57.038569  400041 main.go:141] libmachine: (ha-174833-m02)     <apic/>
	I1030 18:39:57.038577  400041 main.go:141] libmachine: (ha-174833-m02)     <pae/>
	I1030 18:39:57.038587  400041 main.go:141] libmachine: (ha-174833-m02)     
	I1030 18:39:57.038594  400041 main.go:141] libmachine: (ha-174833-m02)   </features>
	I1030 18:39:57.038601  400041 main.go:141] libmachine: (ha-174833-m02)   <cpu mode='host-passthrough'>
	I1030 18:39:57.038605  400041 main.go:141] libmachine: (ha-174833-m02)   
	I1030 18:39:57.038610  400041 main.go:141] libmachine: (ha-174833-m02)   </cpu>
	I1030 18:39:57.038636  400041 main.go:141] libmachine: (ha-174833-m02)   <os>
	I1030 18:39:57.038660  400041 main.go:141] libmachine: (ha-174833-m02)     <type>hvm</type>
	I1030 18:39:57.038683  400041 main.go:141] libmachine: (ha-174833-m02)     <boot dev='cdrom'/>
	I1030 18:39:57.038700  400041 main.go:141] libmachine: (ha-174833-m02)     <boot dev='hd'/>
	I1030 18:39:57.038708  400041 main.go:141] libmachine: (ha-174833-m02)     <bootmenu enable='no'/>
	I1030 18:39:57.038712  400041 main.go:141] libmachine: (ha-174833-m02)   </os>
	I1030 18:39:57.038717  400041 main.go:141] libmachine: (ha-174833-m02)   <devices>
	I1030 18:39:57.038725  400041 main.go:141] libmachine: (ha-174833-m02)     <disk type='file' device='cdrom'>
	I1030 18:39:57.038734  400041 main.go:141] libmachine: (ha-174833-m02)       <source file='/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02/boot2docker.iso'/>
	I1030 18:39:57.038744  400041 main.go:141] libmachine: (ha-174833-m02)       <target dev='hdc' bus='scsi'/>
	I1030 18:39:57.038752  400041 main.go:141] libmachine: (ha-174833-m02)       <readonly/>
	I1030 18:39:57.038764  400041 main.go:141] libmachine: (ha-174833-m02)     </disk>
	I1030 18:39:57.038780  400041 main.go:141] libmachine: (ha-174833-m02)     <disk type='file' device='disk'>
	I1030 18:39:57.038790  400041 main.go:141] libmachine: (ha-174833-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1030 18:39:57.038805  400041 main.go:141] libmachine: (ha-174833-m02)       <source file='/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02/ha-174833-m02.rawdisk'/>
	I1030 18:39:57.038815  400041 main.go:141] libmachine: (ha-174833-m02)       <target dev='hda' bus='virtio'/>
	I1030 18:39:57.038825  400041 main.go:141] libmachine: (ha-174833-m02)     </disk>
	I1030 18:39:57.038832  400041 main.go:141] libmachine: (ha-174833-m02)     <interface type='network'>
	I1030 18:39:57.038844  400041 main.go:141] libmachine: (ha-174833-m02)       <source network='mk-ha-174833'/>
	I1030 18:39:57.038858  400041 main.go:141] libmachine: (ha-174833-m02)       <model type='virtio'/>
	I1030 18:39:57.038874  400041 main.go:141] libmachine: (ha-174833-m02)     </interface>
	I1030 18:39:57.038892  400041 main.go:141] libmachine: (ha-174833-m02)     <interface type='network'>
	I1030 18:39:57.038901  400041 main.go:141] libmachine: (ha-174833-m02)       <source network='default'/>
	I1030 18:39:57.038911  400041 main.go:141] libmachine: (ha-174833-m02)       <model type='virtio'/>
	I1030 18:39:57.038922  400041 main.go:141] libmachine: (ha-174833-m02)     </interface>
	I1030 18:39:57.038931  400041 main.go:141] libmachine: (ha-174833-m02)     <serial type='pty'>
	I1030 18:39:57.038937  400041 main.go:141] libmachine: (ha-174833-m02)       <target port='0'/>
	I1030 18:39:57.038943  400041 main.go:141] libmachine: (ha-174833-m02)     </serial>
	I1030 18:39:57.038948  400041 main.go:141] libmachine: (ha-174833-m02)     <console type='pty'>
	I1030 18:39:57.038955  400041 main.go:141] libmachine: (ha-174833-m02)       <target type='serial' port='0'/>
	I1030 18:39:57.038981  400041 main.go:141] libmachine: (ha-174833-m02)     </console>
	I1030 18:39:57.039004  400041 main.go:141] libmachine: (ha-174833-m02)     <rng model='virtio'>
	I1030 18:39:57.039017  400041 main.go:141] libmachine: (ha-174833-m02)       <backend model='random'>/dev/random</backend>
	I1030 18:39:57.039026  400041 main.go:141] libmachine: (ha-174833-m02)     </rng>
	I1030 18:39:57.039033  400041 main.go:141] libmachine: (ha-174833-m02)     
	I1030 18:39:57.039042  400041 main.go:141] libmachine: (ha-174833-m02)     
	I1030 18:39:57.039050  400041 main.go:141] libmachine: (ha-174833-m02)   </devices>
	I1030 18:39:57.039059  400041 main.go:141] libmachine: (ha-174833-m02) </domain>
	I1030 18:39:57.039073  400041 main.go:141] libmachine: (ha-174833-m02) 
	I1030 18:39:57.045751  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:a3:4c:dc in network default
	I1030 18:39:57.046326  400041 main.go:141] libmachine: (ha-174833-m02) Ensuring networks are active...
	I1030 18:39:57.046349  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:39:57.047038  400041 main.go:141] libmachine: (ha-174833-m02) Ensuring network default is active
	I1030 18:39:57.047398  400041 main.go:141] libmachine: (ha-174833-m02) Ensuring network mk-ha-174833 is active
	I1030 18:39:57.047750  400041 main.go:141] libmachine: (ha-174833-m02) Getting domain xml...
	I1030 18:39:57.048296  400041 main.go:141] libmachine: (ha-174833-m02) Creating domain...
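"define libvirt domain using xml" followed by "Creating domain..." corresponds to defining and then starting the VM through libvirt. A sketch using the libvirt Go bindings that the kvm2 driver builds on; the XML file name is hypothetical and stands in for the domain XML printed above:

package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	xmlBytes, err := os.ReadFile("ha-174833-m02.xml") // hypothetical file holding the XML shown above
	if err != nil {
		log.Fatal(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the cluster config
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(xmlBytes)) // "define libvirt domain using xml"
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // starts the defined domain ("Creating domain...")
		log.Fatal(err)
	}
	log.Println("domain defined and started")
}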
	I1030 18:39:58.272260  400041 main.go:141] libmachine: (ha-174833-m02) Waiting to get IP...
	I1030 18:39:58.273021  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:39:58.273425  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:39:58.273496  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:39:58.273425  400406 retry.go:31] will retry after 283.659874ms: waiting for machine to come up
	I1030 18:39:58.559077  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:39:58.559572  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:39:58.559595  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:39:58.559530  400406 retry.go:31] will retry after 285.421922ms: waiting for machine to come up
	I1030 18:39:58.847321  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:39:58.847766  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:39:58.847795  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:39:58.847719  400406 retry.go:31] will retry after 459.416019ms: waiting for machine to come up
	I1030 18:39:59.308465  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:39:59.308944  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:39:59.309003  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:39:59.308931  400406 retry.go:31] will retry after 572.494843ms: waiting for machine to come up
	I1030 18:39:59.882664  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:39:59.883063  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:39:59.883097  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:39:59.883044  400406 retry.go:31] will retry after 513.18543ms: waiting for machine to come up
	I1030 18:40:00.397389  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:00.397747  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:40:00.397783  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:40:00.397729  400406 retry.go:31] will retry after 755.433082ms: waiting for machine to come up
	I1030 18:40:01.155395  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:01.155948  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:40:01.155979  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:40:01.155903  400406 retry.go:31] will retry after 1.038364995s: waiting for machine to come up
	I1030 18:40:02.195482  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:02.195950  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:40:02.195980  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:40:02.195911  400406 retry.go:31] will retry after 1.004508468s: waiting for machine to come up
	I1030 18:40:03.201825  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:03.202261  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:40:03.202291  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:40:03.202205  400406 retry.go:31] will retry after 1.786384374s: waiting for machine to come up
	I1030 18:40:04.989943  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:04.990350  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:40:04.990371  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:40:04.990297  400406 retry.go:31] will retry after 1.895963981s: waiting for machine to come up
	I1030 18:40:06.888049  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:06.888464  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:40:06.888488  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:40:06.888417  400406 retry.go:31] will retry after 1.948037202s: waiting for machine to come up
	I1030 18:40:08.839488  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:08.839847  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:40:08.839869  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:40:08.839824  400406 retry.go:31] will retry after 3.202281785s: waiting for machine to come up
	I1030 18:40:12.043324  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:12.043675  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:40:12.043695  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:40:12.043618  400406 retry.go:31] will retry after 3.877667252s: waiting for machine to come up
	I1030 18:40:15.924012  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:15.924431  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find current IP address of domain ha-174833-m02 in network mk-ha-174833
	I1030 18:40:15.924456  400041 main.go:141] libmachine: (ha-174833-m02) DBG | I1030 18:40:15.924364  400406 retry.go:31] will retry after 3.471906375s: waiting for machine to come up
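The retry.go lines above show the driver polling for the new VM's DHCP lease with growing, randomized delays until an IP appears. A sketch of that wait loop; lookupLeaseIP is a placeholder for the libvirt lease query minikube actually performs:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupLeaseIP is a stand-in for asking libvirt for the network's DHCP
// leases and matching the domain's MAC address (52:54:00:87:fa:1a above).
func lookupLeaseIP(mac string) (string, error) {
	return "", errors.New("no lease yet")
}

func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		// the log shows randomized, gradually growing delays between attempts
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
		if delay < 4*time.Second {
			delay *= 2
		}
	}
	return "", fmt.Errorf("timed out waiting for an IP on %s", mac)
}

func main() {
	ip, err := waitForIP("52:54:00:87:fa:1a", 3*time.Minute)
	fmt.Println(ip, err)
}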
	I1030 18:40:19.399252  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.399675  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has current primary IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.399693  400041 main.go:141] libmachine: (ha-174833-m02) Found IP for machine: 192.168.39.67
	I1030 18:40:19.399744  400041 main.go:141] libmachine: (ha-174833-m02) Reserving static IP address...
	I1030 18:40:19.400103  400041 main.go:141] libmachine: (ha-174833-m02) DBG | unable to find host DHCP lease matching {name: "ha-174833-m02", mac: "52:54:00:87:fa:1a", ip: "192.168.39.67"} in network mk-ha-174833
	I1030 18:40:19.473268  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Getting to WaitForSSH function...
	I1030 18:40:19.473299  400041 main.go:141] libmachine: (ha-174833-m02) Reserved static IP address: 192.168.39.67
	I1030 18:40:19.473352  400041 main.go:141] libmachine: (ha-174833-m02) Waiting for SSH to be available...
	I1030 18:40:19.476054  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.476545  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:minikube Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:19.476573  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.476733  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Using SSH client type: external
	I1030 18:40:19.476781  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02/id_rsa (-rw-------)
	I1030 18:40:19.476820  400041 main.go:141] libmachine: (ha-174833-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.67 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 18:40:19.476836  400041 main.go:141] libmachine: (ha-174833-m02) DBG | About to run SSH command:
	I1030 18:40:19.476843  400041 main.go:141] libmachine: (ha-174833-m02) DBG | exit 0
	I1030 18:40:19.602200  400041 main.go:141] libmachine: (ha-174833-m02) DBG | SSH cmd err, output: <nil>: 
	I1030 18:40:19.602526  400041 main.go:141] libmachine: (ha-174833-m02) KVM machine creation complete!
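The block above is libmachine polling libvirt for the m02 guest's DHCP lease, backing off between attempts, and finally probing SSH with a trivial "exit 0" command. A minimal Go sketch of that wait-with-backoff pattern follows; it is illustrative only and not minikube's actual retry.go, which uses randomized delays (hence logged intervals such as 1.004508468s rather than round numbers):

package main

import (
    "errors"
    "fmt"
    "time"
)

// waitFor retries fn with a growing delay until it succeeds or the overall
// deadline passes. Step and max are caller-chosen; minikube's real helper
// randomizes the per-attempt delay instead of growing it linearly.
func waitFor(fn func() error, step, max time.Duration) error {
    deadline := time.Now().Add(max)
    delay := step
    for {
        err := fn()
        if err == nil {
            return nil
        }
        if time.Now().Add(delay).After(deadline) {
            return fmt.Errorf("timed out waiting: %w", err)
        }
        fmt.Printf("will retry after %s: %v\n", delay, err)
        time.Sleep(delay)
        delay += step
    }
}

func main() {
    attempts := 0
    err := waitFor(func() error {
        attempts++
        if attempts < 4 {
            return errors.New("unable to find current IP address of domain")
        }
        return nil
    }, time.Second, 30*time.Second)
    fmt.Println("done:", err)
}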
	I1030 18:40:19.602867  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetConfigRaw
	I1030 18:40:19.603528  400041 main.go:141] libmachine: (ha-174833-m02) Calling .DriverName
	I1030 18:40:19.603721  400041 main.go:141] libmachine: (ha-174833-m02) Calling .DriverName
	I1030 18:40:19.603921  400041 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1030 18:40:19.603937  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetState
	I1030 18:40:19.605043  400041 main.go:141] libmachine: Detecting operating system of created instance...
	I1030 18:40:19.605054  400041 main.go:141] libmachine: Waiting for SSH to be available...
	I1030 18:40:19.605059  400041 main.go:141] libmachine: Getting to WaitForSSH function...
	I1030 18:40:19.605064  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHHostname
	I1030 18:40:19.607164  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.607533  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:19.607561  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.607643  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHPort
	I1030 18:40:19.607921  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:19.608107  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:19.608292  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHUsername
	I1030 18:40:19.608458  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:40:19.608704  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1030 18:40:19.608730  400041 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1030 18:40:19.709697  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
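Both the external SSH client used during creation and the native client used here run the same probe, "exit 0", and treat a zero exit status as "SSH is available". A hedged sketch of that probe using the system ssh binary; the host, user, and options mirror the log, but the helper itself is hypothetical:

package main

import (
    "fmt"
    "os/exec"
    "time"
)

// sshReady runs `ssh ... exit 0` against the guest and reports whether the
// command exited cleanly, i.e. whether sshd is up and the key is accepted.
func sshReady(ip, keyPath string) bool {
    cmd := exec.Command("ssh",
        "-o", "StrictHostKeyChecking=no",
        "-o", "UserKnownHostsFile=/dev/null",
        "-o", "ConnectTimeout=10",
        "-i", keyPath,
        "docker@"+ip,
        "exit 0")
    return cmd.Run() == nil
}

func main() {
    for i := 0; i < 10; i++ {
        if sshReady("192.168.39.67", "/path/to/id_rsa") {
            fmt.Println("SSH is available")
            return
        }
        time.Sleep(2 * time.Second)
    }
    fmt.Println("gave up waiting for SSH")
}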
	I1030 18:40:19.709726  400041 main.go:141] libmachine: Detecting the provisioner...
	I1030 18:40:19.709734  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHHostname
	I1030 18:40:19.712480  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.712863  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:19.712908  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.713089  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHPort
	I1030 18:40:19.713318  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:19.713487  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:19.713620  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHUsername
	I1030 18:40:19.713800  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:40:19.714020  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1030 18:40:19.714034  400041 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1030 18:40:19.823287  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1030 18:40:19.823400  400041 main.go:141] libmachine: found compatible host: buildroot
	I1030 18:40:19.823413  400041 main.go:141] libmachine: Provisioning with buildroot...
	I1030 18:40:19.823424  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetMachineName
	I1030 18:40:19.823703  400041 buildroot.go:166] provisioning hostname "ha-174833-m02"
	I1030 18:40:19.823731  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetMachineName
	I1030 18:40:19.823950  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHHostname
	I1030 18:40:19.826635  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.827060  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:19.827086  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.827137  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHPort
	I1030 18:40:19.827303  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:19.827450  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:19.827602  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHUsername
	I1030 18:40:19.827740  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:40:19.827922  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1030 18:40:19.827936  400041 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-174833-m02 && echo "ha-174833-m02" | sudo tee /etc/hostname
	I1030 18:40:19.945348  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-174833-m02
	
	I1030 18:40:19.945376  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHHostname
	I1030 18:40:19.948392  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.948756  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:19.948806  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:19.948936  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHPort
	I1030 18:40:19.949124  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:19.949286  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:19.949399  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHUsername
	I1030 18:40:19.949565  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:40:19.949742  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1030 18:40:19.949759  400041 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-174833-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-174833-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-174833-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 18:40:20.059828  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 18:40:20.059870  400041 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 18:40:20.059905  400041 buildroot.go:174] setting up certificates
	I1030 18:40:20.059915  400041 provision.go:84] configureAuth start
	I1030 18:40:20.059930  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetMachineName
	I1030 18:40:20.060203  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetIP
	I1030 18:40:20.062825  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.063237  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:20.063262  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.063417  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHHostname
	I1030 18:40:20.065380  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.065695  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:20.065725  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.065881  400041 provision.go:143] copyHostCerts
	I1030 18:40:20.065925  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 18:40:20.066007  400041 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem, removing ...
	I1030 18:40:20.066020  400041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 18:40:20.066101  400041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 18:40:20.066211  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 18:40:20.066236  400041 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem, removing ...
	I1030 18:40:20.066244  400041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 18:40:20.066288  400041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 18:40:20.066357  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 18:40:20.066380  400041 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem, removing ...
	I1030 18:40:20.066386  400041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 18:40:20.066420  400041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 18:40:20.066508  400041 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.ha-174833-m02 san=[127.0.0.1 192.168.39.67 ha-174833-m02 localhost minikube]
	I1030 18:40:20.314819  400041 provision.go:177] copyRemoteCerts
	I1030 18:40:20.314902  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 18:40:20.314940  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHHostname
	I1030 18:40:20.317541  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.317873  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:20.317916  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.318094  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHPort
	I1030 18:40:20.318304  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:20.318547  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHUsername
	I1030 18:40:20.318726  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02/id_rsa Username:docker}
	I1030 18:40:20.405714  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1030 18:40:20.405820  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 18:40:20.431726  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1030 18:40:20.431798  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1030 18:40:20.455138  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1030 18:40:20.455222  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1030 18:40:20.477773  400041 provision.go:87] duration metric: took 417.842724ms to configureAuth
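configureAuth copies the host CA material to the machine store and then issues a server certificate whose SANs cover 127.0.0.1, the node IP 192.168.39.67, and the hostnames listed above. A self-contained Go sketch of issuing such a SAN-bearing server certificate with crypto/x509; the in-process CA below is a stand-in for illustration, whereas minikube loads ca.pem/ca-key.pem from its certs directory:

package main

import (
    "crypto/rand"
    "crypto/rsa"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/pem"
    "math/big"
    "net"
    "os"
    "time"
)

func main() {
    // Stand-in CA generated in-process for the sketch (errors ignored for brevity).
    caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    caTmpl := &x509.Certificate{
        SerialNumber:          big.NewInt(1),
        Subject:               pkix.Name{CommonName: "minikubeCA"},
        NotBefore:             time.Now(),
        NotAfter:              time.Now().AddDate(10, 0, 0),
        IsCA:                  true,
        KeyUsage:              x509.KeyUsageCertSign,
        BasicConstraintsValid: true,
    }
    caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    caCert, _ := x509.ParseCertificate(caDER)

    // Server cert with the SANs seen in the log: node hostname, localhost names, node IP.
    srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    srvTmpl := &x509.Certificate{
        SerialNumber: big.NewInt(2),
        Subject:      pkix.Name{Organization: []string{"jenkins.ha-174833-m02"}},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().AddDate(10, 0, 0),
        KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        DNSNames:     []string{"ha-174833-m02", "localhost", "minikube"},
        IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.67")},
    }
    srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}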
	I1030 18:40:20.477806  400041 buildroot.go:189] setting minikube options for container-runtime
	I1030 18:40:20.478018  400041 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:40:20.478120  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHHostname
	I1030 18:40:20.480885  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.481224  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:20.481250  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.481424  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHPort
	I1030 18:40:20.481637  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:20.481775  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:20.481966  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHUsername
	I1030 18:40:20.482148  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:40:20.482322  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1030 18:40:20.482338  400041 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 18:40:20.706339  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 18:40:20.706375  400041 main.go:141] libmachine: Checking connection to Docker...
	I1030 18:40:20.706387  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetURL
	I1030 18:40:20.707589  400041 main.go:141] libmachine: (ha-174833-m02) DBG | Using libvirt version 6000000
	I1030 18:40:20.709597  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.709934  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:20.709964  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.710106  400041 main.go:141] libmachine: Docker is up and running!
	I1030 18:40:20.710135  400041 main.go:141] libmachine: Reticulating splines...
	I1030 18:40:20.710147  400041 client.go:171] duration metric: took 24.092036555s to LocalClient.Create
	I1030 18:40:20.710176  400041 start.go:167] duration metric: took 24.092106335s to libmachine.API.Create "ha-174833"
	I1030 18:40:20.710186  400041 start.go:293] postStartSetup for "ha-174833-m02" (driver="kvm2")
	I1030 18:40:20.710195  400041 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 18:40:20.710231  400041 main.go:141] libmachine: (ha-174833-m02) Calling .DriverName
	I1030 18:40:20.710468  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 18:40:20.710503  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHHostname
	I1030 18:40:20.712432  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.712689  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:20.712717  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.712824  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHPort
	I1030 18:40:20.713017  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:20.713185  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHUsername
	I1030 18:40:20.713308  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02/id_rsa Username:docker}
	I1030 18:40:20.793164  400041 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 18:40:20.797557  400041 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 18:40:20.797583  400041 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 18:40:20.797648  400041 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 18:40:20.797720  400041 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 18:40:20.797730  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> /etc/ssl/certs/3891442.pem
	I1030 18:40:20.797807  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 18:40:20.807375  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 18:40:20.830866  400041 start.go:296] duration metric: took 120.664021ms for postStartSetup
	I1030 18:40:20.830929  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetConfigRaw
	I1030 18:40:20.831701  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetIP
	I1030 18:40:20.834714  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.835086  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:20.835116  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.835438  400041 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/config.json ...
	I1030 18:40:20.835668  400041 start.go:128] duration metric: took 24.235548343s to createHost
	I1030 18:40:20.835700  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHHostname
	I1030 18:40:20.837613  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.837888  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:20.837916  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.838041  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHPort
	I1030 18:40:20.838176  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:20.838317  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:20.838450  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHUsername
	I1030 18:40:20.838592  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:40:20.838755  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1030 18:40:20.838765  400041 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 18:40:20.939393  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730313620.914818123
	
	I1030 18:40:20.939419  400041 fix.go:216] guest clock: 1730313620.914818123
	I1030 18:40:20.939430  400041 fix.go:229] Guest: 2024-10-30 18:40:20.914818123 +0000 UTC Remote: 2024-10-30 18:40:20.835684734 +0000 UTC m=+67.590472244 (delta=79.133389ms)
	I1030 18:40:20.939453  400041 fix.go:200] guest clock delta is within tolerance: 79.133389ms
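fix.go reads the guest clock via `date +%s.%N` over SSH and only intervenes when the drift against the host clock exceeds a tolerance; here the delta is about 79ms and is accepted. A small sketch of that comparison, assuming a hypothetical helper and tolerance value rather than minikube's actual constants:

package main

import (
    "fmt"
    "strconv"
    "time"
)

// clockDelta parses the guest's `date +%s.%N` output, compares it to the
// local clock, and reports whether the drift is within tolerance.
func clockDelta(guestOut string, tolerance time.Duration) (time.Duration, bool, error) {
    secs, err := strconv.ParseFloat(guestOut, 64)
    if err != nil {
        return 0, false, err
    }
    guest := time.Unix(0, int64(secs*float64(time.Second)))
    delta := time.Since(guest)
    if delta < 0 {
        delta = -delta
    }
    return delta, delta <= tolerance, nil
}

func main() {
    // Sample value is the guest timestamp captured in the log above.
    d, ok, err := clockDelta("1730313620.914818123", 2*time.Second)
    fmt.Println(d, ok, err)
}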
	I1030 18:40:20.939460  400041 start.go:83] releasing machines lock for "ha-174833-m02", held for 24.339459492s
	I1030 18:40:20.939487  400041 main.go:141] libmachine: (ha-174833-m02) Calling .DriverName
	I1030 18:40:20.939802  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetIP
	I1030 18:40:20.942445  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.942801  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:20.942827  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.945268  400041 out.go:177] * Found network options:
	I1030 18:40:20.946721  400041 out.go:177]   - NO_PROXY=192.168.39.141
	W1030 18:40:20.947877  400041 proxy.go:119] fail to check proxy env: Error ip not in block
	I1030 18:40:20.947925  400041 main.go:141] libmachine: (ha-174833-m02) Calling .DriverName
	I1030 18:40:20.948482  400041 main.go:141] libmachine: (ha-174833-m02) Calling .DriverName
	I1030 18:40:20.948657  400041 main.go:141] libmachine: (ha-174833-m02) Calling .DriverName
	I1030 18:40:20.948763  400041 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 18:40:20.948808  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHHostname
	W1030 18:40:20.948877  400041 proxy.go:119] fail to check proxy env: Error ip not in block
	I1030 18:40:20.948974  400041 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 18:40:20.948998  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHHostname
	I1030 18:40:20.951510  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.951591  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.951860  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:20.951890  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.951915  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:20.951926  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:20.952047  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHPort
	I1030 18:40:20.952193  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHPort
	I1030 18:40:20.952262  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:20.952409  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:40:20.952476  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHUsername
	I1030 18:40:20.952535  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHUsername
	I1030 18:40:20.952595  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02/id_rsa Username:docker}
	I1030 18:40:20.952723  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02/id_rsa Username:docker}
	I1030 18:40:21.182304  400041 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 18:40:21.188738  400041 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 18:40:21.188808  400041 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 18:40:21.205984  400041 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 18:40:21.206007  400041 start.go:495] detecting cgroup driver to use...
	I1030 18:40:21.206074  400041 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 18:40:21.221839  400041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 18:40:21.235753  400041 docker.go:217] disabling cri-docker service (if available) ...
	I1030 18:40:21.235807  400041 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 18:40:21.249998  400041 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 18:40:21.263401  400041 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 18:40:21.372667  400041 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 18:40:21.535477  400041 docker.go:233] disabling docker service ...
	I1030 18:40:21.535567  400041 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 18:40:21.549384  400041 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 18:40:21.561708  400041 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 18:40:21.680746  400041 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 18:40:21.800498  400041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 18:40:21.815096  400041 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 18:40:21.833550  400041 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1030 18:40:21.833622  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:40:21.843823  400041 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 18:40:21.843902  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:40:21.854106  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:40:21.864400  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:40:21.874387  400041 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 18:40:21.884560  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:40:21.895371  400041 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:40:21.913811  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:40:21.924236  400041 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 18:40:21.933153  400041 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 18:40:21.933202  400041 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 18:40:21.946248  400041 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 18:40:21.955404  400041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 18:40:22.069005  400041 ssh_runner.go:195] Run: sudo systemctl restart crio
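The CRI-O preparation above is a series of in-place sed edits to /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged-port sysctl) followed by a daemon-reload and a crio restart. A Go sketch of the pause-image rewrite, equivalent in effect to the logged sed command; it is illustrative and not minikube's crio.go:

package main

import (
    "fmt"
    "os"
    "regexp"
)

// rewritePauseImage mirrors the sed edit in the log: replace any existing
// pause_image line in the crio drop-in with the desired image reference.
func rewritePauseImage(path, image string) error {
    data, err := os.ReadFile(path)
    if err != nil {
        return err
    }
    re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
    out := re.ReplaceAll(data, []byte(fmt.Sprintf(`pause_image = %q`, image)))
    return os.WriteFile(path, out, 0o644)
}

func main() {
    // Path and image match the log; point at a scratch copy for a local experiment.
    if err := rewritePauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10"); err != nil {
        fmt.Fprintln(os.Stderr, err)
    }
}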
	I1030 18:40:22.157442  400041 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 18:40:22.157509  400041 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 18:40:22.162047  400041 start.go:563] Will wait 60s for crictl version
	I1030 18:40:22.162100  400041 ssh_runner.go:195] Run: which crictl
	I1030 18:40:22.165636  400041 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 18:40:22.205156  400041 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1030 18:40:22.205267  400041 ssh_runner.go:195] Run: crio --version
	I1030 18:40:22.231913  400041 ssh_runner.go:195] Run: crio --version
	I1030 18:40:22.261339  400041 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1030 18:40:22.262679  400041 out.go:177]   - env NO_PROXY=192.168.39.141
	I1030 18:40:22.263832  400041 main.go:141] libmachine: (ha-174833-m02) Calling .GetIP
	I1030 18:40:22.266556  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:22.266888  400041 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:40:11 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:40:22.266915  400041 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:40:22.267123  400041 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1030 18:40:22.271259  400041 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 18:40:22.283359  400041 mustload.go:65] Loading cluster: ha-174833
	I1030 18:40:22.283542  400041 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:40:22.283792  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:40:22.283835  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:40:22.298878  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43999
	I1030 18:40:22.299305  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:40:22.299796  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:40:22.299822  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:40:22.300116  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:40:22.300325  400041 main.go:141] libmachine: (ha-174833) Calling .GetState
	I1030 18:40:22.301824  400041 host.go:66] Checking if "ha-174833" exists ...
	I1030 18:40:22.302129  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:40:22.302167  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:40:22.316968  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45427
	I1030 18:40:22.317445  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:40:22.317883  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:40:22.317906  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:40:22.318227  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:40:22.318396  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:40:22.318552  400041 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833 for IP: 192.168.39.67
	I1030 18:40:22.318566  400041 certs.go:194] generating shared ca certs ...
	I1030 18:40:22.318581  400041 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:40:22.318722  400041 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 18:40:22.318763  400041 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 18:40:22.318772  400041 certs.go:256] generating profile certs ...
	I1030 18:40:22.318861  400041 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.key
	I1030 18:40:22.318884  400041 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.21314801
	I1030 18:40:22.318898  400041 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.21314801 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.141 192.168.39.67 192.168.39.254]
	I1030 18:40:22.389619  400041 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.21314801 ...
	I1030 18:40:22.389649  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.21314801: {Name:mk69c03eb6b5f0b4d0acc4a4891d260deacb4aa2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:40:22.389835  400041 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.21314801 ...
	I1030 18:40:22.389853  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.21314801: {Name:mkc4587720139321b37dc723905edfa912a066e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:40:22.389954  400041 certs.go:381] copying /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.21314801 -> /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt
	I1030 18:40:22.390078  400041 certs.go:385] copying /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.21314801 -> /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key
	I1030 18:40:22.390209  400041 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key
	I1030 18:40:22.390226  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1030 18:40:22.390240  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1030 18:40:22.390253  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1030 18:40:22.390265  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1030 18:40:22.390276  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1030 18:40:22.390291  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1030 18:40:22.390303  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1030 18:40:22.390314  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1030 18:40:22.390363  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem (1338 bytes)
	W1030 18:40:22.390392  400041 certs.go:480] ignoring /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144_empty.pem, impossibly tiny 0 bytes
	I1030 18:40:22.390401  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 18:40:22.390423  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 18:40:22.390447  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 18:40:22.390467  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 18:40:22.390526  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 18:40:22.390551  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:40:22.390567  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem -> /usr/share/ca-certificates/389144.pem
	I1030 18:40:22.390579  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> /usr/share/ca-certificates/3891442.pem
	I1030 18:40:22.390609  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:40:22.393533  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:40:22.393916  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:40:22.393937  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:40:22.394139  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:40:22.394328  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:40:22.394468  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:40:22.394599  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:40:22.466820  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1030 18:40:22.472172  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1030 18:40:22.483413  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1030 18:40:22.487802  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1030 18:40:22.498142  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1030 18:40:22.502005  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1030 18:40:22.511789  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1030 18:40:22.516194  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1030 18:40:22.526092  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1030 18:40:22.530300  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1030 18:40:22.539761  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1030 18:40:22.543659  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1030 18:40:22.554032  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 18:40:22.579429  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 18:40:22.603366  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 18:40:22.627011  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 18:40:22.649824  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1030 18:40:22.675859  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1030 18:40:22.702878  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 18:40:22.729191  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1030 18:40:22.755783  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 18:40:22.781937  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem --> /usr/share/ca-certificates/389144.pem (1338 bytes)
	I1030 18:40:22.806557  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /usr/share/ca-certificates/3891442.pem (1708 bytes)
	I1030 18:40:22.829559  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1030 18:40:22.845492  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1030 18:40:22.861140  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1030 18:40:22.877798  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1030 18:40:22.894364  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1030 18:40:22.910766  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1030 18:40:22.926975  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1030 18:40:22.944058  400041 ssh_runner.go:195] Run: openssl version
	I1030 18:40:22.949888  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/389144.pem && ln -fs /usr/share/ca-certificates/389144.pem /etc/ssl/certs/389144.pem"
	I1030 18:40:22.960383  400041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/389144.pem
	I1030 18:40:22.964756  400041 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 18:40:22.964810  400041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/389144.pem
	I1030 18:40:22.970419  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/389144.pem /etc/ssl/certs/51391683.0"
	I1030 18:40:22.980880  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3891442.pem && ln -fs /usr/share/ca-certificates/3891442.pem /etc/ssl/certs/3891442.pem"
	I1030 18:40:22.991033  400041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3891442.pem
	I1030 18:40:22.995374  400041 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 18:40:22.995440  400041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3891442.pem
	I1030 18:40:23.000879  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3891442.pem /etc/ssl/certs/3ec20f2e.0"
	I1030 18:40:23.011335  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 18:40:23.021800  400041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:40:23.026327  400041 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:40:23.026385  400041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:40:23.032188  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 18:40:23.042278  400041 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 18:40:23.046274  400041 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1030 18:40:23.046324  400041 kubeadm.go:934] updating node {m02 192.168.39.67 8443 v1.31.2 crio true true} ...
	I1030 18:40:23.046424  400041 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-174833-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.67
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-174833 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1030 18:40:23.046460  400041 kube-vip.go:115] generating kube-vip config ...
	I1030 18:40:23.046517  400041 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1030 18:40:23.063163  400041 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1030 18:40:23.063236  400041 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
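Note on the manifest above: kube-vip.go renders a static-pod manifest with the control-plane VIP (192.168.39.254), API server port, and detected interface substituted in, then ships it to /etc/kubernetes/manifests on the new node. A simplified sketch of that templating step, assuming a hypothetical struct and a much-shortened template rather than minikube's actual one:

package main

import (
	"os"
	"text/template"
)

// vipParams holds the values substituted into the manifest; field names are illustrative.
type vipParams struct {
	VIP       string
	Port      string
	Interface string
}

const manifestTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.4
    args: ["manager"]
    env:
    - name: address
      value: {{.VIP}}
    - name: port
      value: "{{.Port}}"
    - name: vip_interface
      value: {{.Interface}}
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifestTmpl))
	// Writing to stdout here; the real flow scp's the rendered YAML to the node.
	if err := t.Execute(os.Stdout, vipParams{VIP: "192.168.39.254", Port: "8443", Interface: "eth0"}); err != nil {
		panic(err)
	}
}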
	I1030 18:40:23.063297  400041 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1030 18:40:23.072465  400041 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1030 18:40:23.072510  400041 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1030 18:40:23.081550  400041 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1030 18:40:23.081576  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1030 18:40:23.081589  400041 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1030 18:40:23.081602  400041 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1030 18:40:23.081619  400041 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1030 18:40:23.085961  400041 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1030 18:40:23.085992  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1030 18:40:24.328288  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1030 18:40:24.328373  400041 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1030 18:40:24.333326  400041 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1030 18:40:24.333359  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1030 18:40:24.830276  400041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 18:40:24.845774  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1030 18:40:24.845893  400041 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1030 18:40:24.850314  400041 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1030 18:40:24.850355  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
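Note on the transfers above: each binary (kubectl, kubeadm, kubelet) is first probed with stat on the target and only copied from the local cache when that check fails, so repeated starts skip the roughly 190 MB of transfers. A local-filesystem sketch of the check-then-copy pattern, with placeholder paths (the real code runs stat over SSH and scp's the file):

package main

import (
	"fmt"
	"io"
	"os"
)

// ensureBinary copies src to dst only when dst does not already exist,
// mirroring the stat-then-scp pattern in the log above.
func ensureBinary(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		return nil // already present, nothing to transfer
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	err := ensureBinary(
		"/home/jenkins/.minikube/cache/linux/amd64/v1.31.2/kubelet", // placeholder cache path
		"/var/lib/minikube/binaries/v1.31.2/kubelet",
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}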
	I1030 18:40:25.162230  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1030 18:40:25.172064  400041 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1030 18:40:25.188645  400041 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 18:40:25.204815  400041 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1030 18:40:25.221977  400041 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1030 18:40:25.225934  400041 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 18:40:25.237891  400041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 18:40:25.349561  400041 ssh_runner.go:195] Run: sudo systemctl start kubelet
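Note on the /etc/hosts edit above: it is idempotent, filtering out any existing control-plane.minikube.internal line before appending the VIP entry, after which kubelet is reloaded and started. A sketch of the same rewrite in Go, operating on an arbitrary hosts-file path (the real command does this with grep -v and a temp file over SSH):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry drops any line ending in "\t<host>" and appends "ip\thost",
// matching the grep -v / echo pipeline in the log above.
func upsertHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale entry for this host
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := upsertHostsEntry("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}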
	I1030 18:40:25.366698  400041 host.go:66] Checking if "ha-174833" exists ...
	I1030 18:40:25.367180  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:40:25.367246  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:40:25.384828  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42387
	I1030 18:40:25.385432  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:40:25.386031  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:40:25.386061  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:40:25.386434  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:40:25.386621  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:40:25.386806  400041 start.go:317] joinCluster: &{Name:ha-174833 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cluster
Name:ha-174833 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 18:40:25.386959  400041 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1030 18:40:25.386986  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:40:25.389976  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:40:25.390481  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:40:25.390522  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:40:25.390674  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:40:25.390889  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:40:25.391033  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:40:25.391170  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:40:25.547459  400041 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 18:40:25.547519  400041 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token t38xof.e3m90xf7qkzka9te --discovery-token-ca-cert-hash sha256:b2143b9018fc79be6f35b8981f64b262448bd09f7227405c8dafa29481e81491 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-174833-m02 --control-plane --apiserver-advertise-address=192.168.39.67 --apiserver-bind-port=8443"
	I1030 18:40:46.568187  400041 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token t38xof.e3m90xf7qkzka9te --discovery-token-ca-cert-hash sha256:b2143b9018fc79be6f35b8981f64b262448bd09f7227405c8dafa29481e81491 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-174833-m02 --control-plane --apiserver-advertise-address=192.168.39.67 --apiserver-bind-port=8443": (21.020635274s)
	I1030 18:40:46.568229  400041 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1030 18:40:47.028345  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-174833-m02 minikube.k8s.io/updated_at=2024_10_30T18_40_47_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0 minikube.k8s.io/name=ha-174833 minikube.k8s.io/primary=false
	I1030 18:40:47.150726  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-174833-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1030 18:40:47.264922  400041 start.go:319] duration metric: took 21.878113098s to joinCluster
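Note on the join sequence above: a join command is printed on the primary (kubeadm token create --print-join-command), extended with control-plane flags for m02, and followed by labeling the new node and removing its control-plane NoSchedule taint. A hedged os/exec sketch of driving those same CLI steps, with the token and CA hash left as placeholders since they come from the primary at runtime:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes a command and streams its output, like ssh_runner does remotely.
func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// Placeholders: the real token and discovery hash are produced by
	// "kubeadm token create --print-join-command" on the primary node.
	join := []string{
		"join", "control-plane.minikube.internal:8443",
		"--token", "<token>",
		"--discovery-token-ca-cert-hash", "sha256:<hash>",
		"--control-plane",
		"--apiserver-advertise-address", "192.168.39.67",
		"--apiserver-bind-port", "8443",
		"--node-name", "ha-174833-m02",
	}
	steps := [][]string{
		append([]string{"kubeadm"}, join...),
		{"kubectl", "label", "--overwrite", "nodes", "ha-174833-m02", "minikube.k8s.io/primary=false"},
		{"kubectl", "taint", "nodes", "ha-174833-m02", "node-role.kubernetes.io/control-plane:NoSchedule-"},
	}
	for _, s := range steps {
		if err := run(s[0], s[1:]...); err != nil {
			fmt.Fprintf(os.Stderr, "%v failed: %v\n", s, err)
			os.Exit(1)
		}
	}
}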
	I1030 18:40:47.265016  400041 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 18:40:47.265346  400041 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:40:47.267451  400041 out.go:177] * Verifying Kubernetes components...
	I1030 18:40:47.268676  400041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 18:40:47.482830  400041 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 18:40:47.498911  400041 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 18:40:47.499271  400041 kapi.go:59] client config for ha-174833: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.crt", KeyFile:"/home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.key", CAFile:"/home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439fe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1030 18:40:47.499361  400041 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.141:8443
	I1030 18:40:47.499634  400041 node_ready.go:35] waiting up to 6m0s for node "ha-174833-m02" to be "Ready" ...
	I1030 18:40:47.499754  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:47.499765  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:47.499776  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:47.499780  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:47.513589  400041 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1030 18:40:48.000627  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:48.000717  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:48.000732  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:48.000739  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:48.005027  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:40:48.500527  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:48.500553  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:48.500562  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:48.500566  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:48.507486  400041 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1030 18:40:48.999957  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:48.999981  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:48.999992  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:48.999998  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:49.004072  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:40:49.500009  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:49.500034  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:49.500044  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:49.500049  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:49.503688  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:49.504299  400041 node_ready.go:53] node "ha-174833-m02" has status "Ready":"False"
	I1030 18:40:50.000762  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:50.000787  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:50.000798  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:50.000804  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:50.004710  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:50.500222  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:50.500249  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:50.500261  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:50.500268  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:50.503800  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:50.999915  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:50.999941  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:50.999949  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:50.999953  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:51.003089  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:51.500241  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:51.500270  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:51.500282  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:51.500288  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:51.503181  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:40:52.000665  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:52.000687  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:52.000696  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:52.000701  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:52.004020  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:52.004537  400041 node_ready.go:53] node "ha-174833-m02" has status "Ready":"False"
	I1030 18:40:52.500784  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:52.500807  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:52.500815  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:52.500820  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:52.503534  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:40:53.000339  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:53.000361  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:53.000372  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:53.000377  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:53.003704  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:53.500343  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:53.500365  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:53.500373  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:53.500378  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:53.503510  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:54.000354  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:54.000381  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:54.000395  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:54.000403  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:54.004115  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:54.004763  400041 node_ready.go:53] node "ha-174833-m02" has status "Ready":"False"
	I1030 18:40:54.500127  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:54.500152  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:54.500161  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:54.500166  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:54.503778  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:55.000747  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:55.000778  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:55.000791  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:55.000797  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:55.004570  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:55.500357  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:55.500405  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:55.500415  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:55.500420  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:55.504113  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:56.000848  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:56.000872  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:56.000890  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:56.000895  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:56.005204  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:40:56.006300  400041 node_ready.go:53] node "ha-174833-m02" has status "Ready":"False"
	I1030 18:40:56.500116  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:56.500139  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:56.500149  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:56.500156  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:56.503736  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:57.000020  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:57.000047  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:57.000059  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:57.000064  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:57.003517  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:57.500475  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:57.500507  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:57.500519  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:57.500528  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:57.504454  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:57.999844  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:57.999871  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:57.999880  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:57.999884  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:58.003233  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:58.500239  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:58.500265  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:58.500275  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:58.500280  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:58.503241  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:40:58.504056  400041 node_ready.go:53] node "ha-174833-m02" has status "Ready":"False"
	I1030 18:40:59.000302  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:59.000325  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:59.000335  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:59.000338  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:59.003378  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:59.500257  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:59.500293  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:59.500305  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:59.500311  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:40:59.503678  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:40:59.999943  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:40:59.999974  400041 round_trippers.go:469] Request Headers:
	I1030 18:40:59.999984  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:40:59.999988  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:00.003694  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:00.499870  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:00.499894  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:00.499903  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:00.499906  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:00.503912  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:00.504852  400041 node_ready.go:53] node "ha-174833-m02" has status "Ready":"False"
	I1030 18:41:01.000256  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:01.000287  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:01.000303  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:01.000310  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:01.004687  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:41:01.500249  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:01.500275  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:01.500286  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:01.500292  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:01.503725  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:02.000125  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:02.000149  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:02.000159  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:02.000163  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:02.003110  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:41:02.500738  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:02.500764  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:02.500774  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:02.500779  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:02.504318  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:02.504919  400041 node_ready.go:53] node "ha-174833-m02" has status "Ready":"False"
	I1030 18:41:03.000323  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:03.000348  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:03.000361  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:03.000369  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:03.003869  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:03.500542  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:03.500568  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:03.500579  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:03.500585  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:03.503602  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:41:04.000594  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:04.000622  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:04.000633  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:04.000639  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:04.003714  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:04.500712  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:04.500736  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:04.500746  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:04.500752  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:04.503791  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:04.999910  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:04.999934  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:04.999943  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:04.999948  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:05.003533  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:05.004088  400041 node_ready.go:53] node "ha-174833-m02" has status "Ready":"False"
	I1030 18:41:05.500597  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:05.500622  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:05.500630  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:05.500639  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:05.503501  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:41:06.000616  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:06.000647  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:06.000659  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:06.000667  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:06.004719  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:41:06.500833  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:06.500855  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:06.500864  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:06.500868  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:06.504070  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:07.000429  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:07.000469  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:07.000481  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:07.000487  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:07.003689  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:07.004389  400041 node_ready.go:53] node "ha-174833-m02" has status "Ready":"False"
	I1030 18:41:07.500634  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:07.500659  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:07.500670  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:07.500676  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:07.503714  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:08.000797  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:08.000823  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.000835  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.000839  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.004162  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:08.500552  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:08.500576  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.500584  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.500588  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.503781  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:08.504368  400041 node_ready.go:49] node "ha-174833-m02" has status "Ready":"True"
	I1030 18:41:08.504387  400041 node_ready.go:38] duration metric: took 21.004733688s for node "ha-174833-m02" to be "Ready" ...
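Note on the polling above: node_ready.go issues a GET for /api/v1/nodes/ha-174833-m02 roughly every 500 ms until the NodeReady condition turns True, which took about 21 s here. A minimal client-go sketch of the same wait, with the kubeconfig source and timeout as illustrative choices:

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node object until its Ready condition is True.
func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %s not Ready within %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(cs, "ha-174833-m02", 6*time.Minute); err != nil {
		panic(err)
	}
}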
	I1030 18:41:08.504399  400041 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 18:41:08.504510  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods
	I1030 18:41:08.504522  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.504533  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.504540  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.508519  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:08.514243  400041 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qrkkc" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:08.514348  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-qrkkc
	I1030 18:41:08.514359  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.514370  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.514375  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.517179  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:41:08.518000  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:41:08.518014  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.518021  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.518026  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.520277  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:41:08.520732  400041 pod_ready.go:93] pod "coredns-7c65d6cfc9-qrkkc" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:08.520749  400041 pod_ready.go:82] duration metric: took 6.484522ms for pod "coredns-7c65d6cfc9-qrkkc" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:08.520758  400041 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tnj67" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:08.520818  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-tnj67
	I1030 18:41:08.520826  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.520832  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.520837  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.523187  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:41:08.523748  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:41:08.523763  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.523770  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.523773  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.525598  400041 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 18:41:08.526045  400041 pod_ready.go:93] pod "coredns-7c65d6cfc9-tnj67" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:08.526061  400041 pod_ready.go:82] duration metric: took 5.296844ms for pod "coredns-7c65d6cfc9-tnj67" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:08.526073  400041 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:08.526128  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174833
	I1030 18:41:08.526137  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.526147  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.526155  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.528137  400041 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 18:41:08.528632  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:41:08.528646  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.528653  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.528656  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.530536  400041 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 18:41:08.530970  400041 pod_ready.go:93] pod "etcd-ha-174833" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:08.530985  400041 pod_ready.go:82] duration metric: took 4.904104ms for pod "etcd-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:08.530995  400041 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:08.531044  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174833-m02
	I1030 18:41:08.531054  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.531063  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.531071  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.532895  400041 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 18:41:08.533572  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:08.533585  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.533592  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.533598  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.535476  400041 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 18:41:08.535920  400041 pod_ready.go:93] pod "etcd-ha-174833-m02" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:08.535936  400041 pod_ready.go:82] duration metric: took 4.934707ms for pod "etcd-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:08.535947  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:08.701353  400041 request.go:632] Waited for 165.322436ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174833
	I1030 18:41:08.701427  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174833
	I1030 18:41:08.701434  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.701445  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.701455  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.704722  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:08.900709  400041 request.go:632] Waited for 195.283762ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:41:08.900771  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:41:08.900777  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:08.900787  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:08.900793  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:08.903675  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:41:08.904204  400041 pod_ready.go:93] pod "kube-apiserver-ha-174833" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:08.904224  400041 pod_ready.go:82] duration metric: took 368.270404ms for pod "kube-apiserver-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:08.904235  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:09.101325  400041 request.go:632] Waited for 196.99596ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174833-m02
	I1030 18:41:09.101392  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174833-m02
	I1030 18:41:09.101397  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:09.101406  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:09.101414  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:09.104943  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:09.301209  400041 request.go:632] Waited for 195.378832ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:09.301280  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:09.301286  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:09.301294  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:09.301299  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:09.304703  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:09.305150  400041 pod_ready.go:93] pod "kube-apiserver-ha-174833-m02" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:09.305171  400041 pod_ready.go:82] duration metric: took 400.929601ms for pod "kube-apiserver-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:09.305183  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:09.501368  400041 request.go:632] Waited for 196.079315ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174833
	I1030 18:41:09.501455  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174833
	I1030 18:41:09.501468  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:09.501478  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:09.501486  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:09.505228  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:09.701240  400041 request.go:632] Waited for 195.369784ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:41:09.701317  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:41:09.701322  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:09.701331  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:09.701334  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:09.703994  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:41:09.704752  400041 pod_ready.go:93] pod "kube-controller-manager-ha-174833" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:09.704770  400041 pod_ready.go:82] duration metric: took 399.581191ms for pod "kube-controller-manager-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:09.704781  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:09.900901  400041 request.go:632] Waited for 196.026591ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174833-m02
	I1030 18:41:09.900964  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174833-m02
	I1030 18:41:09.900969  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:09.900978  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:09.900983  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:09.904074  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:10.101112  400041 request.go:632] Waited for 196.368613ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:10.101194  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:10.101205  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:10.101214  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:10.101226  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:10.104324  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:10.104744  400041 pod_ready.go:93] pod "kube-controller-manager-ha-174833-m02" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:10.104763  400041 pod_ready.go:82] duration metric: took 399.976925ms for pod "kube-controller-manager-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:10.104774  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2qt2n" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:10.300860  400041 request.go:632] Waited for 196.007769ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2qt2n
	I1030 18:41:10.300943  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2qt2n
	I1030 18:41:10.300949  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:10.300957  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:10.300968  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:10.304042  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:10.501291  400041 request.go:632] Waited for 196.406771ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:41:10.501358  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:41:10.501363  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:10.501372  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:10.501378  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:10.504471  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:10.504946  400041 pod_ready.go:93] pod "kube-proxy-2qt2n" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:10.504966  400041 pod_ready.go:82] duration metric: took 400.186291ms for pod "kube-proxy-2qt2n" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:10.504985  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hg2st" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:10.701128  400041 request.go:632] Waited for 196.042962ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hg2st
	I1030 18:41:10.701198  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hg2st
	I1030 18:41:10.701203  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:10.701211  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:10.701218  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:10.704595  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:10.900756  400041 request.go:632] Waited for 195.290492ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:10.900855  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:10.900861  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:10.900869  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:10.900878  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:10.904332  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:10.904829  400041 pod_ready.go:93] pod "kube-proxy-hg2st" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:10.904849  400041 pod_ready.go:82] duration metric: took 399.858433ms for pod "kube-proxy-hg2st" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:10.904860  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:11.101047  400041 request.go:632] Waited for 196.091867ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174833
	I1030 18:41:11.101112  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174833
	I1030 18:41:11.101117  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:11.101125  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:11.101130  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:11.104800  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:11.300654  400041 request.go:632] Waited for 195.298322ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:41:11.300720  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:41:11.300731  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:11.300740  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:11.300743  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:11.304342  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:11.304796  400041 pod_ready.go:93] pod "kube-scheduler-ha-174833" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:11.304815  400041 pod_ready.go:82] duration metric: took 399.947891ms for pod "kube-scheduler-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:11.304826  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:11.500975  400041 request.go:632] Waited for 196.04993ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174833-m02
	I1030 18:41:11.501040  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174833-m02
	I1030 18:41:11.501045  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:11.501052  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:11.501057  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:11.504438  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:11.701379  400041 request.go:632] Waited for 196.340488ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:11.701443  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:41:11.701449  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:11.701457  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:11.701462  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:11.704386  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:41:11.704831  400041 pod_ready.go:93] pod "kube-scheduler-ha-174833-m02" in "kube-system" namespace has status "Ready":"True"
	I1030 18:41:11.704850  400041 pod_ready.go:82] duration metric: took 400.015715ms for pod "kube-scheduler-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:41:11.704863  400041 pod_ready.go:39] duration metric: took 3.200450336s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 18:41:11.704882  400041 api_server.go:52] waiting for apiserver process to appear ...
	I1030 18:41:11.704944  400041 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 18:41:11.723542  400041 api_server.go:72] duration metric: took 24.458488953s to wait for apiserver process to appear ...
	I1030 18:41:11.723564  400041 api_server.go:88] waiting for apiserver healthz status ...
	I1030 18:41:11.723583  400041 api_server.go:253] Checking apiserver healthz at https://192.168.39.141:8443/healthz ...
	I1030 18:41:11.729129  400041 api_server.go:279] https://192.168.39.141:8443/healthz returned 200:
	ok
	I1030 18:41:11.729191  400041 round_trippers.go:463] GET https://192.168.39.141:8443/version
	I1030 18:41:11.729199  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:11.729206  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:11.729213  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:11.729902  400041 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1030 18:41:11.729987  400041 api_server.go:141] control plane version: v1.31.2
	I1030 18:41:11.730004  400041 api_server.go:131] duration metric: took 6.434971ms to wait for apiserver health ...
	I1030 18:41:11.730015  400041 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 18:41:11.901454  400041 request.go:632] Waited for 171.341792ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods
	I1030 18:41:11.901536  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods
	I1030 18:41:11.901542  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:11.901550  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:11.901554  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:11.906457  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:41:11.911360  400041 system_pods.go:59] 17 kube-system pods found
	I1030 18:41:11.911389  400041 system_pods.go:61] "coredns-7c65d6cfc9-qrkkc" [3470734c-61ab-4cd9-a026-f07d5ca6a290] Running
	I1030 18:41:11.911396  400041 system_pods.go:61] "coredns-7c65d6cfc9-tnj67" [f869042d-e37b-414d-ab11-422e933e2952] Running
	I1030 18:41:11.911402  400041 system_pods.go:61] "etcd-ha-174833" [af22e3f8-298d-4ab8-ac7d-cf6b915858ce] Running
	I1030 18:41:11.911408  400041 system_pods.go:61] "etcd-ha-174833-m02" [d5fdc5b9-47c3-4308-8b03-e01254a915cd] Running
	I1030 18:41:11.911413  400041 system_pods.go:61] "kindnet-pm48g" [7c2c95da-7659-43c4-aae2-3af6fbe9515c] Running
	I1030 18:41:11.911418  400041 system_pods.go:61] "kindnet-rlzbn" [74a207e7-cdd5-4c43-a668-9a6445d0bacf] Running
	I1030 18:41:11.911424  400041 system_pods.go:61] "kube-apiserver-ha-174833" [e1f4e473-d722-4cbc-a8ae-e80612823895] Running
	I1030 18:41:11.911432  400041 system_pods.go:61] "kube-apiserver-ha-174833-m02" [d2f5f227-a891-4678-b8e8-a51051bc80ac] Running
	I1030 18:41:11.911437  400041 system_pods.go:61] "kube-controller-manager-ha-174833" [b95baef4-2d97-4714-903f-8a63c8f1f647] Running
	I1030 18:41:11.911440  400041 system_pods.go:61] "kube-controller-manager-ha-174833-m02" [5293b489-8a8c-4475-b549-d10e1a631aa6] Running
	I1030 18:41:11.911444  400041 system_pods.go:61] "kube-proxy-2qt2n" [fcc90b13-926f-4a7e-aa12-3e274c0777f6] Running
	I1030 18:41:11.911447  400041 system_pods.go:61] "kube-proxy-hg2st" [99ac908a-6d13-4471-a54b-bede7bde77b7] Running
	I1030 18:41:11.911452  400041 system_pods.go:61] "kube-scheduler-ha-174833" [d0d4afaa-39ee-4b4d-afdb-3a41effecd4c] Running
	I1030 18:41:11.911458  400041 system_pods.go:61] "kube-scheduler-ha-174833-m02" [f0cd9106-8c0d-4c16-a83f-3d5e84d7d813] Running
	I1030 18:41:11.911461  400041 system_pods.go:61] "kube-vip-ha-174833" [c4903552-8a15-4dc2-ba98-c3d10b8f3288] Running
	I1030 18:41:11.911464  400041 system_pods.go:61] "kube-vip-ha-174833-m02" [9d43181d-55d3-482c-bd30-df3059f68edd] Running
	I1030 18:41:11.911467  400041 system_pods.go:61] "storage-provisioner" [8e6d1d5e-1944-4c77-8018-42bc526984c2] Running
	I1030 18:41:11.911474  400041 system_pods.go:74] duration metric: took 181.449525ms to wait for pod list to return data ...
	I1030 18:41:11.911484  400041 default_sa.go:34] waiting for default service account to be created ...
	I1030 18:41:12.100968  400041 request.go:632] Waited for 189.365167ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/default/serviceaccounts
	I1030 18:41:12.101033  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/default/serviceaccounts
	I1030 18:41:12.101038  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:12.101046  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:12.101054  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:12.104878  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:41:12.105115  400041 default_sa.go:45] found service account: "default"
	I1030 18:41:12.105131  400041 default_sa.go:55] duration metric: took 193.641266ms for default service account to be created ...
	I1030 18:41:12.105141  400041 system_pods.go:116] waiting for k8s-apps to be running ...
	I1030 18:41:12.301355  400041 request.go:632] Waited for 196.109942ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods
	I1030 18:41:12.301420  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods
	I1030 18:41:12.301425  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:12.301433  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:12.301438  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:12.306382  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:41:12.311406  400041 system_pods.go:86] 17 kube-system pods found
	I1030 18:41:12.311437  400041 system_pods.go:89] "coredns-7c65d6cfc9-qrkkc" [3470734c-61ab-4cd9-a026-f07d5ca6a290] Running
	I1030 18:41:12.311446  400041 system_pods.go:89] "coredns-7c65d6cfc9-tnj67" [f869042d-e37b-414d-ab11-422e933e2952] Running
	I1030 18:41:12.311454  400041 system_pods.go:89] "etcd-ha-174833" [af22e3f8-298d-4ab8-ac7d-cf6b915858ce] Running
	I1030 18:41:12.311460  400041 system_pods.go:89] "etcd-ha-174833-m02" [d5fdc5b9-47c3-4308-8b03-e01254a915cd] Running
	I1030 18:41:12.311465  400041 system_pods.go:89] "kindnet-pm48g" [7c2c95da-7659-43c4-aae2-3af6fbe9515c] Running
	I1030 18:41:12.311471  400041 system_pods.go:89] "kindnet-rlzbn" [74a207e7-cdd5-4c43-a668-9a6445d0bacf] Running
	I1030 18:41:12.311477  400041 system_pods.go:89] "kube-apiserver-ha-174833" [e1f4e473-d722-4cbc-a8ae-e80612823895] Running
	I1030 18:41:12.311486  400041 system_pods.go:89] "kube-apiserver-ha-174833-m02" [d2f5f227-a891-4678-b8e8-a51051bc80ac] Running
	I1030 18:41:12.311492  400041 system_pods.go:89] "kube-controller-manager-ha-174833" [b95baef4-2d97-4714-903f-8a63c8f1f647] Running
	I1030 18:41:12.311502  400041 system_pods.go:89] "kube-controller-manager-ha-174833-m02" [5293b489-8a8c-4475-b549-d10e1a631aa6] Running
	I1030 18:41:12.311509  400041 system_pods.go:89] "kube-proxy-2qt2n" [fcc90b13-926f-4a7e-aa12-3e274c0777f6] Running
	I1030 18:41:12.311517  400041 system_pods.go:89] "kube-proxy-hg2st" [99ac908a-6d13-4471-a54b-bede7bde77b7] Running
	I1030 18:41:12.311525  400041 system_pods.go:89] "kube-scheduler-ha-174833" [d0d4afaa-39ee-4b4d-afdb-3a41effecd4c] Running
	I1030 18:41:12.311531  400041 system_pods.go:89] "kube-scheduler-ha-174833-m02" [f0cd9106-8c0d-4c16-a83f-3d5e84d7d813] Running
	I1030 18:41:12.311540  400041 system_pods.go:89] "kube-vip-ha-174833" [c4903552-8a15-4dc2-ba98-c3d10b8f3288] Running
	I1030 18:41:12.311546  400041 system_pods.go:89] "kube-vip-ha-174833-m02" [9d43181d-55d3-482c-bd30-df3059f68edd] Running
	I1030 18:41:12.311554  400041 system_pods.go:89] "storage-provisioner" [8e6d1d5e-1944-4c77-8018-42bc526984c2] Running
	I1030 18:41:12.311563  400041 system_pods.go:126] duration metric: took 206.414957ms to wait for k8s-apps to be running ...
	I1030 18:41:12.311574  400041 system_svc.go:44] waiting for kubelet service to be running ....
	I1030 18:41:12.311636  400041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 18:41:12.327021  400041 system_svc.go:56] duration metric: took 15.42192ms WaitForService to wait for kubelet
	I1030 18:41:12.327057  400041 kubeadm.go:582] duration metric: took 25.062007913s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 18:41:12.327076  400041 node_conditions.go:102] verifying NodePressure condition ...
	I1030 18:41:12.501567  400041 request.go:632] Waited for 174.380598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes
	I1030 18:41:12.501632  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes
	I1030 18:41:12.501638  400041 round_trippers.go:469] Request Headers:
	I1030 18:41:12.501647  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:41:12.501651  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:41:12.505969  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:41:12.506702  400041 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 18:41:12.506731  400041 node_conditions.go:123] node cpu capacity is 2
	I1030 18:41:12.506744  400041 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 18:41:12.506747  400041 node_conditions.go:123] node cpu capacity is 2
	I1030 18:41:12.506751  400041 node_conditions.go:105] duration metric: took 179.67107ms to run NodePressure ...
	I1030 18:41:12.506763  400041 start.go:241] waiting for startup goroutines ...
	I1030 18:41:12.506788  400041 start.go:255] writing updated cluster config ...
	I1030 18:41:12.509015  400041 out.go:201] 
	I1030 18:41:12.510595  400041 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:41:12.510702  400041 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/config.json ...
	I1030 18:41:12.512413  400041 out.go:177] * Starting "ha-174833-m03" control-plane node in "ha-174833" cluster
	I1030 18:41:12.513538  400041 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 18:41:12.513560  400041 cache.go:56] Caching tarball of preloaded images
	I1030 18:41:12.513661  400041 preload.go:172] Found /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1030 18:41:12.513676  400041 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1030 18:41:12.513774  400041 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/config.json ...
	I1030 18:41:12.513991  400041 start.go:360] acquireMachinesLock for ha-174833-m03: {Name:mk35e25f53fa8cfadb39ca0ecdccfc2b3fbe845b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 18:41:12.514046  400041 start.go:364] duration metric: took 32.901µs to acquireMachinesLock for "ha-174833-m03"
	I1030 18:41:12.514072  400041 start.go:93] Provisioning new machine with config: &{Name:ha-174833 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-174833 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 18:41:12.514208  400041 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1030 18:41:12.515720  400041 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1030 18:41:12.515810  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:41:12.515845  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:41:12.531298  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38659
	I1030 18:41:12.531779  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:41:12.532302  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:41:12.532328  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:41:12.532695  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:41:12.532932  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetMachineName
	I1030 18:41:12.533094  400041 main.go:141] libmachine: (ha-174833-m03) Calling .DriverName
	I1030 18:41:12.533248  400041 start.go:159] libmachine.API.Create for "ha-174833" (driver="kvm2")
	I1030 18:41:12.533281  400041 client.go:168] LocalClient.Create starting
	I1030 18:41:12.533344  400041 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem
	I1030 18:41:12.533389  400041 main.go:141] libmachine: Decoding PEM data...
	I1030 18:41:12.533410  400041 main.go:141] libmachine: Parsing certificate...
	I1030 18:41:12.533483  400041 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem
	I1030 18:41:12.533512  400041 main.go:141] libmachine: Decoding PEM data...
	I1030 18:41:12.533529  400041 main.go:141] libmachine: Parsing certificate...
	I1030 18:41:12.533556  400041 main.go:141] libmachine: Running pre-create checks...
	I1030 18:41:12.533582  400041 main.go:141] libmachine: (ha-174833-m03) Calling .PreCreateCheck
	I1030 18:41:12.533754  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetConfigRaw
	I1030 18:41:12.534141  400041 main.go:141] libmachine: Creating machine...
	I1030 18:41:12.534155  400041 main.go:141] libmachine: (ha-174833-m03) Calling .Create
	I1030 18:41:12.534316  400041 main.go:141] libmachine: (ha-174833-m03) Creating KVM machine...
	I1030 18:41:12.535469  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found existing default KVM network
	I1030 18:41:12.535689  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found existing private KVM network mk-ha-174833
	I1030 18:41:12.535839  400041 main.go:141] libmachine: (ha-174833-m03) Setting up store path in /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03 ...
	I1030 18:41:12.535890  400041 main.go:141] libmachine: (ha-174833-m03) Building disk image from file:///home/jenkins/minikube-integration/19883-381834/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1030 18:41:12.535946  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:12.535806  400817 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 18:41:12.536022  400041 main.go:141] libmachine: (ha-174833-m03) Downloading /home/jenkins/minikube-integration/19883-381834/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19883-381834/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1030 18:41:12.821754  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:12.821614  400817 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/id_rsa...
	I1030 18:41:12.940970  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:12.940841  400817 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/ha-174833-m03.rawdisk...
	I1030 18:41:12.941002  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Writing magic tar header
	I1030 18:41:12.941016  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Writing SSH key tar header
	I1030 18:41:12.941027  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:12.940965  400817 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03 ...
	I1030 18:41:12.941045  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03
	I1030 18:41:12.941128  400041 main.go:141] libmachine: (ha-174833-m03) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03 (perms=drwx------)
	I1030 18:41:12.941149  400041 main.go:141] libmachine: (ha-174833-m03) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube/machines (perms=drwxr-xr-x)
	I1030 18:41:12.941160  400041 main.go:141] libmachine: (ha-174833-m03) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube (perms=drwxr-xr-x)
	I1030 18:41:12.941183  400041 main.go:141] libmachine: (ha-174833-m03) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834 (perms=drwxrwxr-x)
	I1030 18:41:12.941197  400041 main.go:141] libmachine: (ha-174833-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1030 18:41:12.941212  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube/machines
	I1030 18:41:12.941227  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 18:41:12.941239  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834
	I1030 18:41:12.941248  400041 main.go:141] libmachine: (ha-174833-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1030 18:41:12.941259  400041 main.go:141] libmachine: (ha-174833-m03) Creating domain...
	I1030 18:41:12.941276  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1030 18:41:12.941291  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Checking permissions on dir: /home/jenkins
	I1030 18:41:12.941301  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Checking permissions on dir: /home
	I1030 18:41:12.941315  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Skipping /home - not owner
	I1030 18:41:12.942234  400041 main.go:141] libmachine: (ha-174833-m03) define libvirt domain using xml: 
	I1030 18:41:12.942260  400041 main.go:141] libmachine: (ha-174833-m03) <domain type='kvm'>
	I1030 18:41:12.942270  400041 main.go:141] libmachine: (ha-174833-m03)   <name>ha-174833-m03</name>
	I1030 18:41:12.942277  400041 main.go:141] libmachine: (ha-174833-m03)   <memory unit='MiB'>2200</memory>
	I1030 18:41:12.942286  400041 main.go:141] libmachine: (ha-174833-m03)   <vcpu>2</vcpu>
	I1030 18:41:12.942296  400041 main.go:141] libmachine: (ha-174833-m03)   <features>
	I1030 18:41:12.942305  400041 main.go:141] libmachine: (ha-174833-m03)     <acpi/>
	I1030 18:41:12.942315  400041 main.go:141] libmachine: (ha-174833-m03)     <apic/>
	I1030 18:41:12.942326  400041 main.go:141] libmachine: (ha-174833-m03)     <pae/>
	I1030 18:41:12.942335  400041 main.go:141] libmachine: (ha-174833-m03)     
	I1030 18:41:12.942346  400041 main.go:141] libmachine: (ha-174833-m03)   </features>
	I1030 18:41:12.942353  400041 main.go:141] libmachine: (ha-174833-m03)   <cpu mode='host-passthrough'>
	I1030 18:41:12.942387  400041 main.go:141] libmachine: (ha-174833-m03)   
	I1030 18:41:12.942411  400041 main.go:141] libmachine: (ha-174833-m03)   </cpu>
	I1030 18:41:12.942424  400041 main.go:141] libmachine: (ha-174833-m03)   <os>
	I1030 18:41:12.942433  400041 main.go:141] libmachine: (ha-174833-m03)     <type>hvm</type>
	I1030 18:41:12.942446  400041 main.go:141] libmachine: (ha-174833-m03)     <boot dev='cdrom'/>
	I1030 18:41:12.942456  400041 main.go:141] libmachine: (ha-174833-m03)     <boot dev='hd'/>
	I1030 18:41:12.942469  400041 main.go:141] libmachine: (ha-174833-m03)     <bootmenu enable='no'/>
	I1030 18:41:12.942502  400041 main.go:141] libmachine: (ha-174833-m03)   </os>
	I1030 18:41:12.942521  400041 main.go:141] libmachine: (ha-174833-m03)   <devices>
	I1030 18:41:12.942532  400041 main.go:141] libmachine: (ha-174833-m03)     <disk type='file' device='cdrom'>
	I1030 18:41:12.942543  400041 main.go:141] libmachine: (ha-174833-m03)       <source file='/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/boot2docker.iso'/>
	I1030 18:41:12.942552  400041 main.go:141] libmachine: (ha-174833-m03)       <target dev='hdc' bus='scsi'/>
	I1030 18:41:12.942561  400041 main.go:141] libmachine: (ha-174833-m03)       <readonly/>
	I1030 18:41:12.942566  400041 main.go:141] libmachine: (ha-174833-m03)     </disk>
	I1030 18:41:12.942574  400041 main.go:141] libmachine: (ha-174833-m03)     <disk type='file' device='disk'>
	I1030 18:41:12.942581  400041 main.go:141] libmachine: (ha-174833-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1030 18:41:12.942587  400041 main.go:141] libmachine: (ha-174833-m03)       <source file='/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/ha-174833-m03.rawdisk'/>
	I1030 18:41:12.942606  400041 main.go:141] libmachine: (ha-174833-m03)       <target dev='hda' bus='virtio'/>
	I1030 18:41:12.942619  400041 main.go:141] libmachine: (ha-174833-m03)     </disk>
	I1030 18:41:12.942627  400041 main.go:141] libmachine: (ha-174833-m03)     <interface type='network'>
	I1030 18:41:12.942635  400041 main.go:141] libmachine: (ha-174833-m03)       <source network='mk-ha-174833'/>
	I1030 18:41:12.942648  400041 main.go:141] libmachine: (ha-174833-m03)       <model type='virtio'/>
	I1030 18:41:12.942658  400041 main.go:141] libmachine: (ha-174833-m03)     </interface>
	I1030 18:41:12.942670  400041 main.go:141] libmachine: (ha-174833-m03)     <interface type='network'>
	I1030 18:41:12.942697  400041 main.go:141] libmachine: (ha-174833-m03)       <source network='default'/>
	I1030 18:41:12.942736  400041 main.go:141] libmachine: (ha-174833-m03)       <model type='virtio'/>
	I1030 18:41:12.942764  400041 main.go:141] libmachine: (ha-174833-m03)     </interface>
	I1030 18:41:12.942779  400041 main.go:141] libmachine: (ha-174833-m03)     <serial type='pty'>
	I1030 18:41:12.942790  400041 main.go:141] libmachine: (ha-174833-m03)       <target port='0'/>
	I1030 18:41:12.942802  400041 main.go:141] libmachine: (ha-174833-m03)     </serial>
	I1030 18:41:12.942812  400041 main.go:141] libmachine: (ha-174833-m03)     <console type='pty'>
	I1030 18:41:12.942823  400041 main.go:141] libmachine: (ha-174833-m03)       <target type='serial' port='0'/>
	I1030 18:41:12.942832  400041 main.go:141] libmachine: (ha-174833-m03)     </console>
	I1030 18:41:12.942841  400041 main.go:141] libmachine: (ha-174833-m03)     <rng model='virtio'>
	I1030 18:41:12.942852  400041 main.go:141] libmachine: (ha-174833-m03)       <backend model='random'>/dev/random</backend>
	I1030 18:41:12.942885  400041 main.go:141] libmachine: (ha-174833-m03)     </rng>
	I1030 18:41:12.942907  400041 main.go:141] libmachine: (ha-174833-m03)     
	I1030 18:41:12.942929  400041 main.go:141] libmachine: (ha-174833-m03)     
	I1030 18:41:12.942938  400041 main.go:141] libmachine: (ha-174833-m03)   </devices>
	I1030 18:41:12.942946  400041 main.go:141] libmachine: (ha-174833-m03) </domain>
	I1030 18:41:12.942957  400041 main.go:141] libmachine: (ha-174833-m03) 
	I1030 18:41:12.949898  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:1a:b3:c5 in network default
	I1030 18:41:12.950445  400041 main.go:141] libmachine: (ha-174833-m03) Ensuring networks are active...
	I1030 18:41:12.950469  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:12.951138  400041 main.go:141] libmachine: (ha-174833-m03) Ensuring network default is active
	I1030 18:41:12.951462  400041 main.go:141] libmachine: (ha-174833-m03) Ensuring network mk-ha-174833 is active
	I1030 18:41:12.951841  400041 main.go:141] libmachine: (ha-174833-m03) Getting domain xml...
	I1030 18:41:12.952538  400041 main.go:141] libmachine: (ha-174833-m03) Creating domain...
	I1030 18:41:14.179359  400041 main.go:141] libmachine: (ha-174833-m03) Waiting to get IP...
	I1030 18:41:14.180307  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:14.180744  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:14.180812  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:14.180741  400817 retry.go:31] will retry after 293.822494ms: waiting for machine to come up
	I1030 18:41:14.476270  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:14.476758  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:14.476784  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:14.476703  400817 retry.go:31] will retry after 283.345671ms: waiting for machine to come up
	I1030 18:41:14.761301  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:14.761803  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:14.761833  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:14.761750  400817 retry.go:31] will retry after 299.766753ms: waiting for machine to come up
	I1030 18:41:15.063146  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:15.063613  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:15.063642  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:15.063557  400817 retry.go:31] will retry after 490.461635ms: waiting for machine to come up
	I1030 18:41:15.557014  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:15.557549  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:15.557577  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:15.557492  400817 retry.go:31] will retry after 739.117277ms: waiting for machine to come up
	I1030 18:41:16.298461  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:16.298926  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:16.298956  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:16.298870  400817 retry.go:31] will retry after 666.546188ms: waiting for machine to come up
	I1030 18:41:16.966687  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:16.967172  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:16.967200  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:16.967117  400817 retry.go:31] will retry after 846.088379ms: waiting for machine to come up
	I1030 18:41:17.814898  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:17.815410  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:17.815440  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:17.815362  400817 retry.go:31] will retry after 1.085711576s: waiting for machine to come up
	I1030 18:41:18.902574  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:18.902922  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:18.902952  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:18.902876  400817 retry.go:31] will retry after 1.834126575s: waiting for machine to come up
	I1030 18:41:20.739528  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:20.739890  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:20.739919  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:20.739850  400817 retry.go:31] will retry after 2.105862328s: waiting for machine to come up
	I1030 18:41:22.847426  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:22.847835  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:22.847867  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:22.847766  400817 retry.go:31] will retry after 2.441796021s: waiting for machine to come up
	I1030 18:41:25.291422  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:25.291864  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:25.291888  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:25.291812  400817 retry.go:31] will retry after 2.18908754s: waiting for machine to come up
	I1030 18:41:27.484272  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:27.484720  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:27.484740  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:27.484674  400817 retry.go:31] will retry after 3.249594938s: waiting for machine to come up
	I1030 18:41:30.735386  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:30.735687  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find current IP address of domain ha-174833-m03 in network mk-ha-174833
	I1030 18:41:30.735711  400041 main.go:141] libmachine: (ha-174833-m03) DBG | I1030 18:41:30.735669  400817 retry.go:31] will retry after 5.542117345s: waiting for machine to come up
	I1030 18:41:36.279557  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:36.279987  400041 main.go:141] libmachine: (ha-174833-m03) Found IP for machine: 192.168.39.238
	I1030 18:41:36.280005  400041 main.go:141] libmachine: (ha-174833-m03) Reserving static IP address...
	I1030 18:41:36.280019  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has current primary IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:36.280379  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find host DHCP lease matching {name: "ha-174833-m03", mac: "52:54:00:76:9d:ad", ip: "192.168.39.238"} in network mk-ha-174833
	I1030 18:41:36.353555  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Getting to WaitForSSH function...
	I1030 18:41:36.353581  400041 main.go:141] libmachine: (ha-174833-m03) Reserved static IP address: 192.168.39.238
	I1030 18:41:36.353628  400041 main.go:141] libmachine: (ha-174833-m03) Waiting for SSH to be available...
	I1030 18:41:36.356187  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:36.356543  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833
	I1030 18:41:36.356569  400041 main.go:141] libmachine: (ha-174833-m03) DBG | unable to find defined IP address of network mk-ha-174833 interface with MAC address 52:54:00:76:9d:ad
	I1030 18:41:36.356719  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Using SSH client type: external
	I1030 18:41:36.356745  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/id_rsa (-rw-------)
	I1030 18:41:36.356795  400041 main.go:141] libmachine: (ha-174833-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 18:41:36.356814  400041 main.go:141] libmachine: (ha-174833-m03) DBG | About to run SSH command:
	I1030 18:41:36.356847  400041 main.go:141] libmachine: (ha-174833-m03) DBG | exit 0
	I1030 18:41:36.360778  400041 main.go:141] libmachine: (ha-174833-m03) DBG | SSH cmd err, output: exit status 255: 
	I1030 18:41:36.360804  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1030 18:41:36.360814  400041 main.go:141] libmachine: (ha-174833-m03) DBG | command : exit 0
	I1030 18:41:36.360821  400041 main.go:141] libmachine: (ha-174833-m03) DBG | err     : exit status 255
	I1030 18:41:36.360832  400041 main.go:141] libmachine: (ha-174833-m03) DBG | output  : 
	I1030 18:41:39.361300  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Getting to WaitForSSH function...
	I1030 18:41:39.363671  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.364021  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:39.364051  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.364131  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Using SSH client type: external
	I1030 18:41:39.364170  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/id_rsa (-rw-------)
	I1030 18:41:39.364209  400041 main.go:141] libmachine: (ha-174833-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 18:41:39.364227  400041 main.go:141] libmachine: (ha-174833-m03) DBG | About to run SSH command:
	I1030 18:41:39.364236  400041 main.go:141] libmachine: (ha-174833-m03) DBG | exit 0
	I1030 18:41:39.498991  400041 main.go:141] libmachine: (ha-174833-m03) DBG | SSH cmd err, output: <nil>: 
	I1030 18:41:39.499302  400041 main.go:141] libmachine: (ha-174833-m03) KVM machine creation complete!
	I1030 18:41:39.499653  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetConfigRaw
	I1030 18:41:39.500359  400041 main.go:141] libmachine: (ha-174833-m03) Calling .DriverName
	I1030 18:41:39.500567  400041 main.go:141] libmachine: (ha-174833-m03) Calling .DriverName
	I1030 18:41:39.500834  400041 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1030 18:41:39.500852  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetState
	I1030 18:41:39.502063  400041 main.go:141] libmachine: Detecting operating system of created instance...
	I1030 18:41:39.502076  400041 main.go:141] libmachine: Waiting for SSH to be available...
	I1030 18:41:39.502081  400041 main.go:141] libmachine: Getting to WaitForSSH function...
	I1030 18:41:39.502086  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHHostname
	I1030 18:41:39.504584  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.504838  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:39.504860  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.505021  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHPort
	I1030 18:41:39.505207  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:39.505352  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:39.505493  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHUsername
	I1030 18:41:39.505642  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:41:39.505855  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1030 18:41:39.505867  400041 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1030 18:41:39.613705  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 18:41:39.613730  400041 main.go:141] libmachine: Detecting the provisioner...
	I1030 18:41:39.613737  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHHostname
	I1030 18:41:39.616442  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.616787  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:39.616812  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.616966  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHPort
	I1030 18:41:39.617171  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:39.617381  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:39.617494  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHUsername
	I1030 18:41:39.617635  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:41:39.617821  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1030 18:41:39.617831  400041 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1030 18:41:39.731009  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1030 18:41:39.731096  400041 main.go:141] libmachine: found compatible host: buildroot
	I1030 18:41:39.731110  400041 main.go:141] libmachine: Provisioning with buildroot...
	I1030 18:41:39.731120  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetMachineName
	I1030 18:41:39.731355  400041 buildroot.go:166] provisioning hostname "ha-174833-m03"
	I1030 18:41:39.731385  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetMachineName
	I1030 18:41:39.731563  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHHostname
	I1030 18:41:39.734727  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.735195  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:39.735225  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.735395  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHPort
	I1030 18:41:39.735599  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:39.735773  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:39.735975  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHUsername
	I1030 18:41:39.736185  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:41:39.736419  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1030 18:41:39.736443  400041 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-174833-m03 && echo "ha-174833-m03" | sudo tee /etc/hostname
	I1030 18:41:39.865251  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-174833-m03
	
	I1030 18:41:39.865295  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHHostname
	I1030 18:41:39.868277  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.868776  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:39.868811  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.868979  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHPort
	I1030 18:41:39.869210  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:39.869426  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:39.869574  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHUsername
	I1030 18:41:39.869780  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:41:39.870007  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1030 18:41:39.870023  400041 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-174833-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-174833-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-174833-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 18:41:39.993047  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 18:41:39.993077  400041 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 18:41:39.993099  400041 buildroot.go:174] setting up certificates
	I1030 18:41:39.993114  400041 provision.go:84] configureAuth start
	I1030 18:41:39.993127  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetMachineName
	I1030 18:41:39.993439  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetIP
	I1030 18:41:39.996433  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.996840  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:39.996869  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:39.997060  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHHostname
	I1030 18:41:40.000005  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.000422  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:40.000450  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.000565  400041 provision.go:143] copyHostCerts
	I1030 18:41:40.000594  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 18:41:40.000629  400041 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem, removing ...
	I1030 18:41:40.000638  400041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 18:41:40.000698  400041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 18:41:40.000806  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 18:41:40.000825  400041 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem, removing ...
	I1030 18:41:40.000831  400041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 18:41:40.000854  400041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 18:41:40.000910  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 18:41:40.000926  400041 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem, removing ...
	I1030 18:41:40.000932  400041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 18:41:40.000953  400041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 18:41:40.001003  400041 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.ha-174833-m03 san=[127.0.0.1 192.168.39.238 ha-174833-m03 localhost minikube]
	I1030 18:41:40.389110  400041 provision.go:177] copyRemoteCerts
	I1030 18:41:40.389174  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 18:41:40.389201  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHHostname
	I1030 18:41:40.391720  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.392157  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:40.392191  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.392466  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHPort
	I1030 18:41:40.392672  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:40.392854  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHUsername
	I1030 18:41:40.393003  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/id_rsa Username:docker}
	I1030 18:41:40.485464  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1030 18:41:40.485543  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 18:41:40.513241  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1030 18:41:40.513314  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1030 18:41:40.537145  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1030 18:41:40.537239  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1030 18:41:40.562099  400041 provision.go:87] duration metric: took 568.966283ms to configureAuth
	I1030 18:41:40.562136  400041 buildroot.go:189] setting minikube options for container-runtime
	I1030 18:41:40.562357  400041 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:41:40.562450  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHHostname
	I1030 18:41:40.565158  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.565531  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:40.565563  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.565700  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHPort
	I1030 18:41:40.565906  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:40.566083  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:40.566192  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHUsername
	I1030 18:41:40.566349  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:41:40.566539  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1030 18:41:40.566554  400041 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 18:41:40.803791  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 18:41:40.803826  400041 main.go:141] libmachine: Checking connection to Docker...
	I1030 18:41:40.803835  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetURL
	I1030 18:41:40.805073  400041 main.go:141] libmachine: (ha-174833-m03) DBG | Using libvirt version 6000000
	I1030 18:41:40.807111  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.807563  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:40.807592  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.807738  400041 main.go:141] libmachine: Docker is up and running!
	I1030 18:41:40.807756  400041 main.go:141] libmachine: Reticulating splines...
	I1030 18:41:40.807765  400041 client.go:171] duration metric: took 28.27447273s to LocalClient.Create
	I1030 18:41:40.807794  400041 start.go:167] duration metric: took 28.274545509s to libmachine.API.Create "ha-174833"
	I1030 18:41:40.807813  400041 start.go:293] postStartSetup for "ha-174833-m03" (driver="kvm2")
	I1030 18:41:40.807829  400041 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 18:41:40.807854  400041 main.go:141] libmachine: (ha-174833-m03) Calling .DriverName
	I1030 18:41:40.808083  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 18:41:40.808112  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHHostname
	I1030 18:41:40.810446  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.810781  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:40.810810  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.810951  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHPort
	I1030 18:41:40.811117  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:40.811251  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHUsername
	I1030 18:41:40.811374  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/id_rsa Username:docker}
	I1030 18:41:40.898250  400041 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 18:41:40.902639  400041 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 18:41:40.902670  400041 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 18:41:40.902762  400041 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 18:41:40.902838  400041 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 18:41:40.902848  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> /etc/ssl/certs/3891442.pem
	I1030 18:41:40.902930  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 18:41:40.911988  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 18:41:40.936666  400041 start.go:296] duration metric: took 128.83333ms for postStartSetup
	I1030 18:41:40.936732  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetConfigRaw
	I1030 18:41:40.937356  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetIP
	I1030 18:41:40.939940  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.940379  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:40.940406  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.940740  400041 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/config.json ...
	I1030 18:41:40.940959  400041 start.go:128] duration metric: took 28.426739922s to createHost
	I1030 18:41:40.940996  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHHostname
	I1030 18:41:40.943340  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.943659  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:40.943683  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:40.943787  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHPort
	I1030 18:41:40.943992  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:40.944157  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:40.944299  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHUsername
	I1030 18:41:40.944469  400041 main.go:141] libmachine: Using SSH client type: native
	I1030 18:41:40.944647  400041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1030 18:41:40.944657  400041 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 18:41:41.054995  400041 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730313701.035748365
	
	I1030 18:41:41.055025  400041 fix.go:216] guest clock: 1730313701.035748365
	I1030 18:41:41.055036  400041 fix.go:229] Guest: 2024-10-30 18:41:41.035748365 +0000 UTC Remote: 2024-10-30 18:41:40.940974319 +0000 UTC m=+147.695761890 (delta=94.774046ms)
	I1030 18:41:41.055058  400041 fix.go:200] guest clock delta is within tolerance: 94.774046ms
	I1030 18:41:41.055065  400041 start.go:83] releasing machines lock for "ha-174833-m03", held for 28.541005951s
	I1030 18:41:41.055090  400041 main.go:141] libmachine: (ha-174833-m03) Calling .DriverName
	I1030 18:41:41.055377  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetIP
	I1030 18:41:41.057920  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:41.058257  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:41.058278  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:41.060653  400041 out.go:177] * Found network options:
	I1030 18:41:41.062139  400041 out.go:177]   - NO_PROXY=192.168.39.141,192.168.39.67
	W1030 18:41:41.063472  400041 proxy.go:119] fail to check proxy env: Error ip not in block
	W1030 18:41:41.063496  400041 proxy.go:119] fail to check proxy env: Error ip not in block
	I1030 18:41:41.063508  400041 main.go:141] libmachine: (ha-174833-m03) Calling .DriverName
	I1030 18:41:41.064009  400041 main.go:141] libmachine: (ha-174833-m03) Calling .DriverName
	I1030 18:41:41.064221  400041 main.go:141] libmachine: (ha-174833-m03) Calling .DriverName
	I1030 18:41:41.064313  400041 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 18:41:41.064352  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHHostname
	W1030 18:41:41.064451  400041 proxy.go:119] fail to check proxy env: Error ip not in block
	W1030 18:41:41.064473  400041 proxy.go:119] fail to check proxy env: Error ip not in block
	I1030 18:41:41.064552  400041 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 18:41:41.064575  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHHostname
	I1030 18:41:41.066853  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:41.067199  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:41.067222  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:41.067302  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:41.067479  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHPort
	I1030 18:41:41.067664  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:41.067724  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:41.067749  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:41.067830  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHUsername
	I1030 18:41:41.067906  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHPort
	I1030 18:41:41.067978  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/id_rsa Username:docker}
	I1030 18:41:41.068065  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:41:41.068181  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHUsername
	I1030 18:41:41.068275  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/id_rsa Username:docker}
	I1030 18:41:41.314636  400041 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 18:41:41.321102  400041 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 18:41:41.321173  400041 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 18:41:41.338442  400041 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 18:41:41.338470  400041 start.go:495] detecting cgroup driver to use...
	I1030 18:41:41.338554  400041 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 18:41:41.355526  400041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 18:41:41.369752  400041 docker.go:217] disabling cri-docker service (if available) ...
	I1030 18:41:41.369824  400041 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 18:41:41.384658  400041 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 18:41:41.399117  400041 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 18:41:41.515988  400041 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 18:41:41.659854  400041 docker.go:233] disabling docker service ...
	I1030 18:41:41.659940  400041 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 18:41:41.675386  400041 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 18:41:41.688521  400041 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 18:41:41.830998  400041 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 18:41:41.962743  400041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 18:41:41.976734  400041 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 18:41:41.998554  400041 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1030 18:41:41.998635  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:41:42.010835  400041 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 18:41:42.010904  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:41:42.022771  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:41:42.033993  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:41:42.044518  400041 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 18:41:42.055581  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:41:42.065838  400041 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:41:42.082685  400041 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:41:42.092911  400041 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 18:41:42.102341  400041 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 18:41:42.102398  400041 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 18:41:42.115321  400041 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 18:41:42.125073  400041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 18:41:42.255762  400041 ssh_runner.go:195] Run: sudo systemctl restart crio
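	The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image registry.k8s.io/pause:3.10, cgroupfs as cgroup manager, conmon_cgroup = "pod", and the unprivileged-port sysctl) and load the kernel prerequisites before crio is restarted. If a run like this misbehaves later, the effective state on the guest can be checked with a few standard commands; the config path comes from the log, the rest is a suggested verification sequence rather than part of the test:
	  # Confirm the rewritten CRI-O settings survived the restart.
	  grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	  # Confirm the kernel prerequisites set just before the restart.
	  lsmod | grep br_netfilter
	  sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables
	  # Confirm crio restarted cleanly and the CRI endpoint answers.
	  systemctl is-active crio
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version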
	I1030 18:41:42.348340  400041 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 18:41:42.348402  400041 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 18:41:42.353645  400041 start.go:563] Will wait 60s for crictl version
	I1030 18:41:42.353700  400041 ssh_runner.go:195] Run: which crictl
	I1030 18:41:42.357362  400041 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 18:41:42.403194  400041 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1030 18:41:42.403278  400041 ssh_runner.go:195] Run: crio --version
	I1030 18:41:42.433073  400041 ssh_runner.go:195] Run: crio --version
	I1030 18:41:42.461144  400041 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1030 18:41:42.462700  400041 out.go:177]   - env NO_PROXY=192.168.39.141
	I1030 18:41:42.464361  400041 out.go:177]   - env NO_PROXY=192.168.39.141,192.168.39.67
	I1030 18:41:42.465724  400041 main.go:141] libmachine: (ha-174833-m03) Calling .GetIP
	I1030 18:41:42.468442  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:42.468785  400041 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:41:42.468812  400041 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:41:42.469009  400041 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1030 18:41:42.473316  400041 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 18:41:42.486401  400041 mustload.go:65] Loading cluster: ha-174833
	I1030 18:41:42.486671  400041 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:41:42.487004  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:41:42.487051  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:41:42.503315  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39623
	I1030 18:41:42.503812  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:41:42.504381  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:41:42.504403  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:41:42.504715  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:41:42.504885  400041 main.go:141] libmachine: (ha-174833) Calling .GetState
	I1030 18:41:42.506310  400041 host.go:66] Checking if "ha-174833" exists ...
	I1030 18:41:42.506684  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:41:42.506729  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:41:42.521795  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36777
	I1030 18:41:42.522246  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:41:42.522834  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:41:42.522857  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:41:42.523225  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:41:42.523429  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:41:42.523593  400041 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833 for IP: 192.168.39.238
	I1030 18:41:42.523605  400041 certs.go:194] generating shared ca certs ...
	I1030 18:41:42.523621  400041 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:41:42.523781  400041 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 18:41:42.523832  400041 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 18:41:42.523846  400041 certs.go:256] generating profile certs ...
	I1030 18:41:42.523984  400041 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.key
	I1030 18:41:42.524022  400041 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.48de31c7
	I1030 18:41:42.524044  400041 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.48de31c7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.141 192.168.39.67 192.168.39.238 192.168.39.254]
	I1030 18:41:42.771082  400041 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.48de31c7 ...
	I1030 18:41:42.771143  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.48de31c7: {Name:mkbb8ab8bf6c18d6d6a31970e3b828800b8fd44f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:41:42.771350  400041 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.48de31c7 ...
	I1030 18:41:42.771369  400041 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.48de31c7: {Name:mk93a1175526096093ebe70ea08ba926787709bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:41:42.771474  400041 certs.go:381] copying /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.48de31c7 -> /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt
	I1030 18:41:42.771640  400041 certs.go:385] copying /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.48de31c7 -> /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key
	I1030 18:41:42.771819  400041 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key
	I1030 18:41:42.771839  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1030 18:41:42.771859  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1030 18:41:42.771878  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1030 18:41:42.771897  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1030 18:41:42.771916  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1030 18:41:42.771935  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1030 18:41:42.771953  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1030 18:41:42.786601  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1030 18:41:42.786716  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem (1338 bytes)
	W1030 18:41:42.786768  400041 certs.go:480] ignoring /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144_empty.pem, impossibly tiny 0 bytes
	I1030 18:41:42.786783  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 18:41:42.786818  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 18:41:42.786855  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 18:41:42.786886  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 18:41:42.786944  400041 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 18:41:42.786987  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> /usr/share/ca-certificates/3891442.pem
	I1030 18:41:42.787011  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:41:42.787031  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem -> /usr/share/ca-certificates/389144.pem
	I1030 18:41:42.787082  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:41:42.790022  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:41:42.790433  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:41:42.790463  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:41:42.790635  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:41:42.790863  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:41:42.791005  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:41:42.791117  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:41:42.862993  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1030 18:41:42.869116  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1030 18:41:42.881084  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1030 18:41:42.885608  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1030 18:41:42.896066  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1030 18:41:42.900395  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1030 18:41:42.911415  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1030 18:41:42.915680  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1030 18:41:42.926002  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1030 18:41:42.929978  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1030 18:41:42.939948  400041 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1030 18:41:42.944073  400041 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1030 18:41:42.954991  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 18:41:42.979919  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 18:41:43.004284  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 18:41:43.027671  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 18:41:43.050807  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1030 18:41:43.073405  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1030 18:41:43.097875  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 18:41:43.121491  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1030 18:41:43.145484  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /usr/share/ca-certificates/3891442.pem (1708 bytes)
	I1030 18:41:43.169567  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 18:41:43.194113  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem --> /usr/share/ca-certificates/389144.pem (1338 bytes)
	I1030 18:41:43.217839  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1030 18:41:43.235214  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1030 18:41:43.251678  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1030 18:41:43.267891  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1030 18:41:43.283793  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1030 18:41:43.301477  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1030 18:41:43.319112  400041 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1030 18:41:43.336222  400041 ssh_runner.go:195] Run: openssl version
	I1030 18:41:43.342021  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 18:41:43.353281  400041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:41:43.357881  400041 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:41:43.357947  400041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:41:43.363573  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 18:41:43.375497  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/389144.pem && ln -fs /usr/share/ca-certificates/389144.pem /etc/ssl/certs/389144.pem"
	I1030 18:41:43.389049  400041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/389144.pem
	I1030 18:41:43.393551  400041 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 18:41:43.393616  400041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/389144.pem
	I1030 18:41:43.399295  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/389144.pem /etc/ssl/certs/51391683.0"
	I1030 18:41:43.411090  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3891442.pem && ln -fs /usr/share/ca-certificates/3891442.pem /etc/ssl/certs/3891442.pem"
	I1030 18:41:43.422010  400041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3891442.pem
	I1030 18:41:43.426629  400041 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 18:41:43.426687  400041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3891442.pem
	I1030 18:41:43.432334  400041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3891442.pem /etc/ssl/certs/3ec20f2e.0"
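	Each PEM installed under /usr/share/ca-certificates above is also symlinked into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0), which is the lookup scheme OpenSSL uses for trusted CAs. The hash in each link name can be reproduced by hand, and "openssl rehash" rebuilds the same links for a whole directory:
	  # The symlink name is the certificate's subject hash plus a ".0" suffix.
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	  # Rebuild the hashed links for every certificate in the trust directory.
	  sudo openssl rehash /etc/ssl/certs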
	I1030 18:41:43.443256  400041 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 18:41:43.447278  400041 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1030 18:41:43.447336  400041 kubeadm.go:934] updating node {m03 192.168.39.238 8443 v1.31.2 crio true true} ...
	I1030 18:41:43.447423  400041 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-174833-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-174833 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1030 18:41:43.447453  400041 kube-vip.go:115] generating kube-vip config ...
	I1030 18:41:43.447481  400041 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1030 18:41:43.463867  400041 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1030 18:41:43.463938  400041 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
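	This manifest is copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines further down, so the kubelet runs kube-vip as a static pod that advertises the control-plane VIP 192.168.39.254 on port 8443 and load-balances the API servers. Assuming a working kubeconfig context for the ha-174833 profile, the resulting pod can be checked like this (static pods are mirrored into the API as <name>-<nodeName>):
	  # The mirror pod for the static manifest shows up in kube-system.
	  kubectl --context ha-174833 -n kube-system get pod kube-vip-ha-174833-m03 -o wide
	  # On the node itself, the container is visible through the CRI runtime.
	  sudo crictl ps --name kube-vip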
	I1030 18:41:43.463993  400041 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1030 18:41:43.474999  400041 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1030 18:41:43.475044  400041 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1030 18:41:43.485456  400041 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1030 18:41:43.485479  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1030 18:41:43.485517  400041 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1030 18:41:43.485533  400041 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1030 18:41:43.485545  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1030 18:41:43.485517  400041 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1030 18:41:43.485603  400041 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1030 18:41:43.485621  400041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 18:41:43.504131  400041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1030 18:41:43.504186  400041 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1030 18:41:43.504223  400041 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1030 18:41:43.504222  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1030 18:41:43.504237  400041 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1030 18:41:43.504267  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1030 18:41:43.522121  400041 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1030 18:41:43.522169  400041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
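	Because the stat checks above showed /var/lib/minikube/binaries/v1.31.2 was empty, kubeadm, kubectl and kubelet are copied in from the local per-version cache; the upstream download URLs, including their .sha256 checksum companions, are the ones logged a few lines earlier. Fetching and verifying one of them by hand looks roughly like this:
	  # Download kubectl v1.31.2 and its published checksum, then verify before use.
	  curl -LO "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl"
	  curl -LO "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256"
	  echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
	  chmod +x kubectl && ./kubectl version --client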
	I1030 18:41:44.375482  400041 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1030 18:41:44.387138  400041 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1030 18:41:44.405486  400041 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 18:41:44.422728  400041 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1030 18:41:44.439060  400041 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1030 18:41:44.443074  400041 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 18:41:44.455364  400041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 18:41:44.570256  400041 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 18:41:44.588522  400041 host.go:66] Checking if "ha-174833" exists ...
	I1030 18:41:44.589080  400041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:41:44.589146  400041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:41:44.605625  400041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37241
	I1030 18:41:44.606088  400041 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:41:44.606626  400041 main.go:141] libmachine: Using API Version  1
	I1030 18:41:44.606648  400041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:41:44.607023  400041 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:41:44.607225  400041 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:41:44.607369  400041 start.go:317] joinCluster: &{Name:ha-174833 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cluster
Name:ha-174833 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-d
ns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 18:41:44.607505  400041 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1030 18:41:44.607526  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:41:44.610554  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:41:44.611109  400041 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:41:44.611135  400041 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:41:44.611433  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:41:44.611606  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:41:44.611760  400041 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:41:44.611885  400041 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:41:44.773784  400041 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 18:41:44.773850  400041 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token jbea4g.48iwo9ov7hdxpf8l --discovery-token-ca-cert-hash sha256:b2143b9018fc79be6f35b8981f64b262448bd09f7227405c8dafa29481e81491 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-174833-m03 --control-plane --apiserver-advertise-address=192.168.39.238 --apiserver-bind-port=8443"
	I1030 18:42:06.433926  400041 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token jbea4g.48iwo9ov7hdxpf8l --discovery-token-ca-cert-hash sha256:b2143b9018fc79be6f35b8981f64b262448bd09f7227405c8dafa29481e81491 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-174833-m03 --control-plane --apiserver-advertise-address=192.168.39.238 --apiserver-bind-port=8443": (21.660034767s)
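	The join command executed above was minted on the primary control plane with "kubeadm token create --print-join-command --ttl=0" (logged just before the join). If the token or discovery hash ever has to be recreated by hand, the hash is simply the SHA-256 of the cluster CA's public key; the usual recipe from the kubeadm documentation is:
	  # Recompute the --discovery-token-ca-cert-hash value from the cluster CA.
	  openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'
	  # Mint a fresh bootstrap token together with a ready-to-run join command.
	  sudo kubeadm token create --print-join-command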
	I1030 18:42:06.433968  400041 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1030 18:42:06.995847  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-174833-m03 minikube.k8s.io/updated_at=2024_10_30T18_42_06_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0 minikube.k8s.io/name=ha-174833 minikube.k8s.io/primary=false
	I1030 18:42:07.135527  400041 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-174833-m03 node-role.kubernetes.io/control-plane:NoSchedule-
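	The two kubectl invocations above stamp the new node with minikube's metadata labels and then remove the node-role.kubernetes.io/control-plane:NoSchedule taint, since the profile treats every control-plane node as a worker as well (ControlPlane:true Worker:true in the config). Assuming the ha-174833 kubeconfig context, the result can be inspected with:
	  # Labels applied by the "label --overwrite" call above.
	  kubectl --context ha-174833 get node ha-174833-m03 --show-labels
	  # An empty result here means the NoSchedule taint was removed successfully.
	  kubectl --context ha-174833 get node ha-174833-m03 -o jsonpath='{.spec.taints}'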
	I1030 18:42:07.266435  400041 start.go:319] duration metric: took 22.659060991s to joinCluster
	I1030 18:42:07.266542  400041 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 18:42:07.266874  400041 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:42:07.267989  400041 out.go:177] * Verifying Kubernetes components...
	I1030 18:42:07.269832  400041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 18:42:07.538532  400041 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 18:42:07.566640  400041 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 18:42:07.566990  400041 kapi.go:59] client config for ha-174833: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.crt", KeyFile:"/home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.key", CAFile:"/home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439fe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1030 18:42:07.567153  400041 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.141:8443
	I1030 18:42:07.567517  400041 node_ready.go:35] waiting up to 6m0s for node "ha-174833-m03" to be "Ready" ...
	I1030 18:42:07.567636  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:07.567647  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:07.567658  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:07.567663  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:07.571044  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:08.067840  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:08.067866  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:08.067875  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:08.067880  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:08.071548  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:08.568423  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:08.568445  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:08.568456  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:08.568468  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:08.572275  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:09.068213  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:09.068244  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:09.068255  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:09.068261  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:09.072412  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:42:09.568601  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:09.568687  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:09.568704  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:09.568711  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:09.572953  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:42:09.573669  400041 node_ready.go:53] node "ha-174833-m03" has status "Ready":"False"
	I1030 18:42:10.068646  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:10.068674  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:10.068686  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:10.068690  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:10.072592  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:10.568186  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:10.568212  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:10.568228  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:10.568234  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:10.571345  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:11.068394  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:11.068419  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:11.068430  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:11.068435  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:11.071353  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:42:11.568540  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:11.568569  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:11.568581  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:11.568586  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:11.571615  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:12.068128  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:12.068184  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:12.068198  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:12.068204  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:12.072054  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:12.072920  400041 node_ready.go:53] node "ha-174833-m03" has status "Ready":"False"
	I1030 18:42:12.568764  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:12.568788  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:12.568799  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:12.568804  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:12.572509  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:13.067810  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:13.067840  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:13.067852  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:13.067858  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:13.072370  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:42:13.568096  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:13.568118  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:13.568127  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:13.568130  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:13.571713  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:14.068692  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:14.068715  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:14.068724  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:14.068728  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:14.072113  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:14.073045  400041 node_ready.go:53] node "ha-174833-m03" has status "Ready":"False"
	I1030 18:42:14.568414  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:14.568441  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:14.568458  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:14.568463  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:14.571979  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:15.067728  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:15.067752  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:15.067760  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:15.067764  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:15.079108  400041 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1030 18:42:15.568483  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:15.568509  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:15.568518  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:15.568523  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:15.571981  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:16.067933  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:16.067953  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:16.067962  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:16.067965  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:16.071179  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:16.568646  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:16.568671  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:16.568684  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:16.568691  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:16.571923  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:16.572720  400041 node_ready.go:53] node "ha-174833-m03" has status "Ready":"False"
	I1030 18:42:17.068520  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:17.068545  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:17.068561  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:17.068566  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:17.072118  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:17.568073  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:17.568108  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:17.568118  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:17.568123  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:17.571265  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:18.068409  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:18.068434  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:18.068442  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:18.068447  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:18.071717  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:18.568497  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:18.568527  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:18.568540  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:18.568546  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:18.571867  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:19.067827  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:19.067850  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:19.067859  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:19.067863  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:19.070951  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:19.071706  400041 node_ready.go:53] node "ha-174833-m03" has status "Ready":"False"
	I1030 18:42:19.568087  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:19.568110  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:19.568119  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:19.568122  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:19.571495  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:20.068028  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:20.068053  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:20.068064  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:20.068071  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:20.071582  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:20.568136  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:20.568161  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:20.568169  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:20.568174  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:20.571551  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:21.068612  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:21.068640  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:21.068652  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:21.068657  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:21.072026  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:21.072659  400041 node_ready.go:53] node "ha-174833-m03" has status "Ready":"False"
	I1030 18:42:21.568033  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:21.568055  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:21.568064  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:21.568069  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:21.571332  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:22.067937  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:22.067961  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:22.067970  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:22.067976  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:22.071718  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:22.568117  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:22.568139  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:22.568147  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:22.568155  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:22.571493  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:23.068511  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:23.068548  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:23.068558  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:23.068562  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:23.071664  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:23.568675  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:23.568699  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:23.568707  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:23.568711  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:23.571937  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:23.572572  400041 node_ready.go:53] node "ha-174833-m03" has status "Ready":"False"
	I1030 18:42:24.067899  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:24.067922  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:24.067931  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:24.067934  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:24.071366  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:24.568317  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:24.568342  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:24.568351  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:24.568355  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:24.571501  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:25.067773  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:25.067796  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:25.067803  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:25.067806  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:25.071344  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:25.568753  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:25.568775  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:25.568783  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:25.568787  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:25.572126  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:25.572899  400041 node_ready.go:53] node "ha-174833-m03" has status "Ready":"False"
	I1030 18:42:26.068223  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:26.068246  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.068257  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.068262  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.072464  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:42:26.073313  400041 node_ready.go:49] node "ha-174833-m03" has status "Ready":"True"
	I1030 18:42:26.073333  400041 node_ready.go:38] duration metric: took 18.505796326s for node "ha-174833-m03" to be "Ready" ...
	I1030 18:42:26.073343  400041 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 18:42:26.073412  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods
	I1030 18:42:26.073421  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.073428  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.073435  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.079519  400041 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1030 18:42:26.085610  400041 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qrkkc" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:26.085695  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-qrkkc
	I1030 18:42:26.085704  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.085711  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.085715  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.088406  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:42:26.089109  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:42:26.089127  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.089137  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.089143  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.091504  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:42:26.092047  400041 pod_ready.go:93] pod "coredns-7c65d6cfc9-qrkkc" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:26.092069  400041 pod_ready.go:82] duration metric: took 6.435195ms for pod "coredns-7c65d6cfc9-qrkkc" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:26.092082  400041 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tnj67" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:26.092150  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-tnj67
	I1030 18:42:26.092160  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.092170  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.092179  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.095058  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:42:26.095704  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:42:26.095720  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.095730  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.095735  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.098085  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:42:26.098596  400041 pod_ready.go:93] pod "coredns-7c65d6cfc9-tnj67" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:26.098614  400041 pod_ready.go:82] duration metric: took 6.524633ms for pod "coredns-7c65d6cfc9-tnj67" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:26.098625  400041 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:26.098689  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174833
	I1030 18:42:26.098701  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.098708  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.098714  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.101151  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:42:26.101737  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:42:26.101752  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.101762  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.101769  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.103823  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:42:26.104381  400041 pod_ready.go:93] pod "etcd-ha-174833" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:26.104404  400041 pod_ready.go:82] duration metric: took 5.771643ms for pod "etcd-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:26.104417  400041 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:26.104487  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174833-m02
	I1030 18:42:26.104498  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.104507  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.104515  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.106840  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:42:26.107295  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:42:26.107308  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.107318  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.107325  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.109492  400041 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 18:42:26.109917  400041 pod_ready.go:93] pod "etcd-ha-174833-m02" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:26.109932  400041 pod_ready.go:82] duration metric: took 5.508285ms for pod "etcd-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:26.109947  400041 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-174833-m03" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:26.268296  400041 request.go:632] Waited for 158.281409ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174833-m03
	I1030 18:42:26.268393  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174833-m03
	I1030 18:42:26.268404  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.268413  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.268419  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.272054  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:26.469115  400041 request.go:632] Waited for 196.339916ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:26.469175  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:26.469180  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.469190  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.469198  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.472781  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:26.473415  400041 pod_ready.go:93] pod "etcd-ha-174833-m03" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:26.473441  400041 pod_ready.go:82] duration metric: took 363.484662ms for pod "etcd-ha-174833-m03" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:26.473458  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:26.668901  400041 request.go:632] Waited for 195.3359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174833
	I1030 18:42:26.669000  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174833
	I1030 18:42:26.669014  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.669026  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.669034  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.672627  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:26.868738  400041 request.go:632] Waited for 195.360312ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:42:26.868832  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:42:26.868840  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:26.868851  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:26.868860  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:26.872228  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:26.872778  400041 pod_ready.go:93] pod "kube-apiserver-ha-174833" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:26.872812  400041 pod_ready.go:82] duration metric: took 399.338189ms for pod "kube-apiserver-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:26.872828  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:27.068798  400041 request.go:632] Waited for 195.855457ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174833-m02
	I1030 18:42:27.068879  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174833-m02
	I1030 18:42:27.068887  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:27.068898  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:27.068909  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:27.072321  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:27.269235  400041 request.go:632] Waited for 196.216042ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:42:27.269319  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:42:27.269330  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:27.269343  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:27.269353  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:27.272769  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:27.273439  400041 pod_ready.go:93] pod "kube-apiserver-ha-174833-m02" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:27.273459  400041 pod_ready.go:82] duration metric: took 400.623063ms for pod "kube-apiserver-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:27.273469  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-174833-m03" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:27.468256  400041 request.go:632] Waited for 194.693367ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174833-m03
	I1030 18:42:27.468325  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174833-m03
	I1030 18:42:27.468330  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:27.468338  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:27.468347  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:27.471734  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:27.669102  400041 request.go:632] Waited for 196.461533ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:27.669185  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:27.669197  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:27.669208  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:27.669216  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:27.672818  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:27.673832  400041 pod_ready.go:93] pod "kube-apiserver-ha-174833-m03" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:27.673854  400041 pod_ready.go:82] duration metric: took 400.378216ms for pod "kube-apiserver-ha-174833-m03" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:27.673876  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:27.868940  400041 request.go:632] Waited for 194.958773ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174833
	I1030 18:42:27.869030  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174833
	I1030 18:42:27.869042  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:27.869053  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:27.869060  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:27.872180  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:28.068264  400041 request.go:632] Waited for 195.290526ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:42:28.068332  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:42:28.068351  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:28.068362  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:28.068370  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:28.071658  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:28.072242  400041 pod_ready.go:93] pod "kube-controller-manager-ha-174833" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:28.072265  400041 pod_ready.go:82] duration metric: took 398.381976ms for pod "kube-controller-manager-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:28.072276  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:28.268211  400041 request.go:632] Waited for 195.804533ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174833-m02
	I1030 18:42:28.268292  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174833-m02
	I1030 18:42:28.268300  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:28.268311  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:28.268318  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:28.271496  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:28.468870  400041 request.go:632] Waited for 196.361357ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:42:28.468956  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:42:28.468962  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:28.468977  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:28.468987  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:28.472341  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:28.472906  400041 pod_ready.go:93] pod "kube-controller-manager-ha-174833-m02" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:28.472925  400041 pod_ready.go:82] duration metric: took 400.642779ms for pod "kube-controller-manager-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:28.472940  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-174833-m03" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:28.669072  400041 request.go:632] Waited for 196.028852ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174833-m03
	I1030 18:42:28.669156  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174833-m03
	I1030 18:42:28.669168  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:28.669179  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:28.669191  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:28.673097  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:28.868210  400041 request.go:632] Waited for 194.307626ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:28.868287  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:28.868295  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:28.868307  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:28.868338  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:28.871679  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:28.872327  400041 pod_ready.go:93] pod "kube-controller-manager-ha-174833-m03" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:28.872352  400041 pod_ready.go:82] duration metric: took 399.404321ms for pod "kube-controller-manager-ha-174833-m03" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:28.872369  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2qt2n" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:29.068267  400041 request.go:632] Waited for 195.816492ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2qt2n
	I1030 18:42:29.068356  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2qt2n
	I1030 18:42:29.068367  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:29.068376  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:29.068388  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:29.072060  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:29.269102  400041 request.go:632] Waited for 196.354313ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:42:29.269167  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:42:29.269172  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:29.269181  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:29.269186  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:29.273078  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:29.273532  400041 pod_ready.go:93] pod "kube-proxy-2qt2n" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:29.273551  400041 pod_ready.go:82] duration metric: took 401.170636ms for pod "kube-proxy-2qt2n" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:29.273567  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g7l7z" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:29.468616  400041 request.go:632] Waited for 194.925869ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g7l7z
	I1030 18:42:29.468703  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g7l7z
	I1030 18:42:29.468712  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:29.468722  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:29.468730  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:29.472234  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:29.669266  400041 request.go:632] Waited for 196.242195ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:29.669324  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:29.669331  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:29.669341  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:29.669348  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:29.673010  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:29.674076  400041 pod_ready.go:93] pod "kube-proxy-g7l7z" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:29.674097  400041 pod_ready.go:82] duration metric: took 400.523192ms for pod "kube-proxy-g7l7z" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:29.674108  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hg2st" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:29.869286  400041 request.go:632] Waited for 195.064443ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hg2st
	I1030 18:42:29.869374  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hg2st
	I1030 18:42:29.869384  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:29.869393  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:29.869397  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:29.872765  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:30.068849  400041 request.go:632] Waited for 195.380036ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:42:30.068912  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:42:30.068917  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:30.068926  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:30.068930  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:30.073076  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:42:30.073910  400041 pod_ready.go:93] pod "kube-proxy-hg2st" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:30.073931  400041 pod_ready.go:82] duration metric: took 399.816887ms for pod "kube-proxy-hg2st" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:30.073942  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:30.269092  400041 request.go:632] Waited for 195.075688ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174833
	I1030 18:42:30.269158  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174833
	I1030 18:42:30.269163  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:30.269171  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:30.269174  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:30.272728  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:30.468827  400041 request.go:632] Waited for 195.469933ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:42:30.468924  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833
	I1030 18:42:30.468935  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:30.468944  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:30.468948  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:30.472792  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:30.473256  400041 pod_ready.go:93] pod "kube-scheduler-ha-174833" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:30.473274  400041 pod_ready.go:82] duration metric: took 399.325616ms for pod "kube-scheduler-ha-174833" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:30.473285  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:30.668281  400041 request.go:632] Waited for 194.899722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174833-m02
	I1030 18:42:30.668360  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174833-m02
	I1030 18:42:30.668369  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:30.668378  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:30.668386  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:30.672074  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:30.869270  400041 request.go:632] Waited for 196.355231ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:42:30.869340  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m02
	I1030 18:42:30.869345  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:30.869354  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:30.869361  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:30.873235  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:30.873666  400041 pod_ready.go:93] pod "kube-scheduler-ha-174833-m02" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:30.873686  400041 pod_ready.go:82] duration metric: took 400.39483ms for pod "kube-scheduler-ha-174833-m02" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:30.873697  400041 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-174833-m03" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:31.068802  400041 request.go:632] Waited for 195.002943ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174833-m03
	I1030 18:42:31.068869  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174833-m03
	I1030 18:42:31.068875  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:31.068884  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:31.068901  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:31.072579  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:31.268662  400041 request.go:632] Waited for 195.353177ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:31.268730  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/ha-174833-m03
	I1030 18:42:31.268736  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:31.268743  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:31.268749  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:31.272045  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:31.272702  400041 pod_ready.go:93] pod "kube-scheduler-ha-174833-m03" in "kube-system" namespace has status "Ready":"True"
	I1030 18:42:31.272721  400041 pod_ready.go:82] duration metric: took 399.01745ms for pod "kube-scheduler-ha-174833-m03" in "kube-system" namespace to be "Ready" ...
	I1030 18:42:31.272733  400041 pod_ready.go:39] duration metric: took 5.199380679s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 18:42:31.272749  400041 api_server.go:52] waiting for apiserver process to appear ...
	I1030 18:42:31.272802  400041 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 18:42:31.290132  400041 api_server.go:72] duration metric: took 24.023548522s to wait for apiserver process to appear ...
	I1030 18:42:31.290159  400041 api_server.go:88] waiting for apiserver healthz status ...
	I1030 18:42:31.290180  400041 api_server.go:253] Checking apiserver healthz at https://192.168.39.141:8443/healthz ...
	I1030 18:42:31.295173  400041 api_server.go:279] https://192.168.39.141:8443/healthz returned 200:
	ok
	I1030 18:42:31.295236  400041 round_trippers.go:463] GET https://192.168.39.141:8443/version
	I1030 18:42:31.295244  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:31.295252  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:31.295257  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:31.296242  400041 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1030 18:42:31.296313  400041 api_server.go:141] control plane version: v1.31.2
	I1030 18:42:31.296329  400041 api_server.go:131] duration metric: took 6.164986ms to wait for apiserver health ...
	I1030 18:42:31.296336  400041 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 18:42:31.468748  400041 request.go:632] Waited for 172.312716ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods
	I1030 18:42:31.468810  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods
	I1030 18:42:31.468815  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:31.468822  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:31.468826  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:31.475257  400041 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1030 18:42:31.481661  400041 system_pods.go:59] 24 kube-system pods found
	I1030 18:42:31.481688  400041 system_pods.go:61] "coredns-7c65d6cfc9-qrkkc" [3470734c-61ab-4cd9-a026-f07d5ca6a290] Running
	I1030 18:42:31.481693  400041 system_pods.go:61] "coredns-7c65d6cfc9-tnj67" [f869042d-e37b-414d-ab11-422e933e2952] Running
	I1030 18:42:31.481699  400041 system_pods.go:61] "etcd-ha-174833" [af22e3f8-298d-4ab8-ac7d-cf6b915858ce] Running
	I1030 18:42:31.481705  400041 system_pods.go:61] "etcd-ha-174833-m02" [d5fdc5b9-47c3-4308-8b03-e01254a915cd] Running
	I1030 18:42:31.481710  400041 system_pods.go:61] "etcd-ha-174833-m03" [0978241d-3129-4232-a77d-1c363be4759d] Running
	I1030 18:42:31.481715  400041 system_pods.go:61] "kindnet-b76pd" [5267869f-1e5c-414a-9adc-8cdb3a645222] Running
	I1030 18:42:31.481720  400041 system_pods.go:61] "kindnet-pm48g" [7c2c95da-7659-43c4-aae2-3af6fbe9515c] Running
	I1030 18:42:31.481728  400041 system_pods.go:61] "kindnet-rlzbn" [74a207e7-cdd5-4c43-a668-9a6445d0bacf] Running
	I1030 18:42:31.481733  400041 system_pods.go:61] "kube-apiserver-ha-174833" [e1f4e473-d722-4cbc-a8ae-e80612823895] Running
	I1030 18:42:31.481740  400041 system_pods.go:61] "kube-apiserver-ha-174833-m02" [d2f5f227-a891-4678-b8e8-a51051bc80ac] Running
	I1030 18:42:31.481749  400041 system_pods.go:61] "kube-apiserver-ha-174833-m03" [a0d096d1-760f-48eb-b8b4-a8d2bfa98602] Running
	I1030 18:42:31.481754  400041 system_pods.go:61] "kube-controller-manager-ha-174833" [b95baef4-2d97-4714-903f-8a63c8f1f647] Running
	I1030 18:42:31.481762  400041 system_pods.go:61] "kube-controller-manager-ha-174833-m02" [5293b489-8a8c-4475-b549-d10e1a631aa6] Running
	I1030 18:42:31.481768  400041 system_pods.go:61] "kube-controller-manager-ha-174833-m03" [9427a2c0-a305-49d6-8bfd-3829dfc7c9e9] Running
	I1030 18:42:31.481776  400041 system_pods.go:61] "kube-proxy-2qt2n" [fcc90b13-926f-4a7e-aa12-3e274c0777f6] Running
	I1030 18:42:31.481781  400041 system_pods.go:61] "kube-proxy-g7l7z" [7779db09-05ba-46f2-9e0e-09f6bcd57537] Running
	I1030 18:42:31.481789  400041 system_pods.go:61] "kube-proxy-hg2st" [99ac908a-6d13-4471-a54b-bede7bde77b7] Running
	I1030 18:42:31.481794  400041 system_pods.go:61] "kube-scheduler-ha-174833" [d0d4afaa-39ee-4b4d-afdb-3a41effecd4c] Running
	I1030 18:42:31.481802  400041 system_pods.go:61] "kube-scheduler-ha-174833-m02" [f0cd9106-8c0d-4c16-a83f-3d5e84d7d813] Running
	I1030 18:42:31.481807  400041 system_pods.go:61] "kube-scheduler-ha-174833-m03" [9e19320f-40e5-4953-a8f9-21e8251bddbf] Running
	I1030 18:42:31.481814  400041 system_pods.go:61] "kube-vip-ha-174833" [c4903552-8a15-4dc2-ba98-c3d10b8f3288] Running
	I1030 18:42:31.481819  400041 system_pods.go:61] "kube-vip-ha-174833-m02" [9d43181d-55d3-482c-bd30-df3059f68edd] Running
	I1030 18:42:31.481826  400041 system_pods.go:61] "kube-vip-ha-174833-m03" [c016e01f-6a40-419e-acf5-a5e595c117cd] Running
	I1030 18:42:31.481832  400041 system_pods.go:61] "storage-provisioner" [8e6d1d5e-1944-4c77-8018-42bc526984c2] Running
	I1030 18:42:31.481843  400041 system_pods.go:74] duration metric: took 185.498428ms to wait for pod list to return data ...
	I1030 18:42:31.481856  400041 default_sa.go:34] waiting for default service account to be created ...
	I1030 18:42:31.668606  400041 request.go:632] Waited for 186.6491ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/default/serviceaccounts
	I1030 18:42:31.668666  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/default/serviceaccounts
	I1030 18:42:31.668671  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:31.668679  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:31.668682  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:31.672056  400041 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 18:42:31.672194  400041 default_sa.go:45] found service account: "default"
	I1030 18:42:31.672209  400041 default_sa.go:55] duration metric: took 190.344386ms for default service account to be created ...
	I1030 18:42:31.672218  400041 system_pods.go:116] waiting for k8s-apps to be running ...
	I1030 18:42:31.868735  400041 request.go:632] Waited for 196.405115ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods
	I1030 18:42:31.868808  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods
	I1030 18:42:31.868814  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:31.868822  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:31.868830  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:31.874347  400041 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1030 18:42:31.881436  400041 system_pods.go:86] 24 kube-system pods found
	I1030 18:42:31.881470  400041 system_pods.go:89] "coredns-7c65d6cfc9-qrkkc" [3470734c-61ab-4cd9-a026-f07d5ca6a290] Running
	I1030 18:42:31.881477  400041 system_pods.go:89] "coredns-7c65d6cfc9-tnj67" [f869042d-e37b-414d-ab11-422e933e2952] Running
	I1030 18:42:31.881483  400041 system_pods.go:89] "etcd-ha-174833" [af22e3f8-298d-4ab8-ac7d-cf6b915858ce] Running
	I1030 18:42:31.881487  400041 system_pods.go:89] "etcd-ha-174833-m02" [d5fdc5b9-47c3-4308-8b03-e01254a915cd] Running
	I1030 18:42:31.881490  400041 system_pods.go:89] "etcd-ha-174833-m03" [0978241d-3129-4232-a77d-1c363be4759d] Running
	I1030 18:42:31.881496  400041 system_pods.go:89] "kindnet-b76pd" [5267869f-1e5c-414a-9adc-8cdb3a645222] Running
	I1030 18:42:31.881501  400041 system_pods.go:89] "kindnet-pm48g" [7c2c95da-7659-43c4-aae2-3af6fbe9515c] Running
	I1030 18:42:31.881507  400041 system_pods.go:89] "kindnet-rlzbn" [74a207e7-cdd5-4c43-a668-9a6445d0bacf] Running
	I1030 18:42:31.881516  400041 system_pods.go:89] "kube-apiserver-ha-174833" [e1f4e473-d722-4cbc-a8ae-e80612823895] Running
	I1030 18:42:31.881521  400041 system_pods.go:89] "kube-apiserver-ha-174833-m02" [d2f5f227-a891-4678-b8e8-a51051bc80ac] Running
	I1030 18:42:31.881529  400041 system_pods.go:89] "kube-apiserver-ha-174833-m03" [a0d096d1-760f-48eb-b8b4-a8d2bfa98602] Running
	I1030 18:42:31.881538  400041 system_pods.go:89] "kube-controller-manager-ha-174833" [b95baef4-2d97-4714-903f-8a63c8f1f647] Running
	I1030 18:42:31.881547  400041 system_pods.go:89] "kube-controller-manager-ha-174833-m02" [5293b489-8a8c-4475-b549-d10e1a631aa6] Running
	I1030 18:42:31.881551  400041 system_pods.go:89] "kube-controller-manager-ha-174833-m03" [9427a2c0-a305-49d6-8bfd-3829dfc7c9e9] Running
	I1030 18:42:31.881555  400041 system_pods.go:89] "kube-proxy-2qt2n" [fcc90b13-926f-4a7e-aa12-3e274c0777f6] Running
	I1030 18:42:31.881559  400041 system_pods.go:89] "kube-proxy-g7l7z" [7779db09-05ba-46f2-9e0e-09f6bcd57537] Running
	I1030 18:42:31.881563  400041 system_pods.go:89] "kube-proxy-hg2st" [99ac908a-6d13-4471-a54b-bede7bde77b7] Running
	I1030 18:42:31.881568  400041 system_pods.go:89] "kube-scheduler-ha-174833" [d0d4afaa-39ee-4b4d-afdb-3a41effecd4c] Running
	I1030 18:42:31.881574  400041 system_pods.go:89] "kube-scheduler-ha-174833-m02" [f0cd9106-8c0d-4c16-a83f-3d5e84d7d813] Running
	I1030 18:42:31.881580  400041 system_pods.go:89] "kube-scheduler-ha-174833-m03" [9e19320f-40e5-4953-a8f9-21e8251bddbf] Running
	I1030 18:42:31.881585  400041 system_pods.go:89] "kube-vip-ha-174833" [c4903552-8a15-4dc2-ba98-c3d10b8f3288] Running
	I1030 18:42:31.881589  400041 system_pods.go:89] "kube-vip-ha-174833-m02" [9d43181d-55d3-482c-bd30-df3059f68edd] Running
	I1030 18:42:31.881595  400041 system_pods.go:89] "kube-vip-ha-174833-m03" [c016e01f-6a40-419e-acf5-a5e595c117cd] Running
	I1030 18:42:31.881600  400041 system_pods.go:89] "storage-provisioner" [8e6d1d5e-1944-4c77-8018-42bc526984c2] Running
	I1030 18:42:31.881612  400041 system_pods.go:126] duration metric: took 209.387873ms to wait for k8s-apps to be running ...
	I1030 18:42:31.881626  400041 system_svc.go:44] waiting for kubelet service to be running ....
	I1030 18:42:31.881679  400041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 18:42:31.897108  400041 system_svc.go:56] duration metric: took 15.46981ms WaitForService to wait for kubelet
	I1030 18:42:31.897150  400041 kubeadm.go:582] duration metric: took 24.630565695s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 18:42:31.897179  400041 node_conditions.go:102] verifying NodePressure condition ...
	I1030 18:42:32.068632  400041 request.go:632] Waited for 171.354733ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes
	I1030 18:42:32.068703  400041 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes
	I1030 18:42:32.068708  400041 round_trippers.go:469] Request Headers:
	I1030 18:42:32.068716  400041 round_trippers.go:473]     Accept: application/json, */*
	I1030 18:42:32.068721  400041 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 18:42:32.073422  400041 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 18:42:32.074348  400041 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 18:42:32.074387  400041 node_conditions.go:123] node cpu capacity is 2
	I1030 18:42:32.074400  400041 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 18:42:32.074404  400041 node_conditions.go:123] node cpu capacity is 2
	I1030 18:42:32.074408  400041 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 18:42:32.074412  400041 node_conditions.go:123] node cpu capacity is 2
	I1030 18:42:32.074421  400041 node_conditions.go:105] duration metric: took 177.235852ms to run NodePressure ...
	I1030 18:42:32.074439  400041 start.go:241] waiting for startup goroutines ...
	I1030 18:42:32.074466  400041 start.go:255] writing updated cluster config ...
	I1030 18:42:32.074805  400041 ssh_runner.go:195] Run: rm -f paused
	I1030 18:42:32.127386  400041 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1030 18:42:32.129289  400041 out.go:177] * Done! kubectl is now configured to use "ha-174833" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 30 18:46:35 ha-174833 crio[664]: time="2024-10-30 18:46:35.056808863Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313995056787933,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e829043b-55e3-425f-99c8-3d7d14ef82b4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 18:46:35 ha-174833 crio[664]: time="2024-10-30 18:46:35.057325406Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b4b9940c-a307-4503-8b9b-8d28bd0dbb85 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:46:35 ha-174833 crio[664]: time="2024-10-30 18:46:35.057402881Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b4b9940c-a307-4503-8b9b-8d28bd0dbb85 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:46:35 ha-174833 crio[664]: time="2024-10-30 18:46:35.057607003Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b50f8293a0eac5d428cb2cdd59140c816100bc3a83890343a8ecc1a0e0699009,PodSandboxId:4b32508187feda9fb89e9db472edcc9269380d89c7fabdce7ea42c6bd040cd72,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730313615217360825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tnj67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f869042d-e37b-414d-ab11-422e933e2952,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6694cd6bc9e3c41a78c9a5de80c4b95e2545fc061eac062221b2fe3ef4aadb3,PodSandboxId:e4daca50f6e1cba7757e2725bf95ff4bf83227ca7fa99966ead6cd77711dcace,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730313615138289233,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 8e6d1d5e-1944-4c77-8018-42bc526984c2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80919506252b4e3df3db12ac62352ffcfc65a784996bbef7adcefc782480a86f,PodSandboxId:80f0d2bac7bdbf4e7081698c44c0100f1844741b0105b2c4afc4373e50786bd2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730313615087765259,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qrkkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3470734c-6
1ab-4cd9-a026-f07d5ca6a290,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46301d1401a148dee81e2e167582dcebc8e9ce533e3150308e7eb51799e4f1ef,PodSandboxId:4a4a82673e78f30d1d9716c24cfa484d7ad4d27f53c2f74c27f2b30910b16f1f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:C
ONTAINER_RUNNING,CreatedAt:1730313603325015951,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pm48g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c2c95da-7659-43c4-aae2-3af6fbe9515c,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:634060e657ba25894ea35d4e8b8429b57de139396a52f53af88876e683de5740,PodSandboxId:5d414abeb9a8ee9776eede410c7e6863d30e3e79eead9ef5eaf48e3f823ff1eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:173031359
7433571078,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2qt2n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc90b13-926f-4a7e-aa12-3e274c0777f6,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8b9126272c4605fcadd37917b2874bb833efc9c16584c83ac4d1307f59c62a,PodSandboxId:635aa65f78ff8602bbbd26a8ca9fa00e6475baebf133de26e2e88e93e579621b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:17303135896
66156759,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aea5be4e46ad37a31e0d88b2c9c0158c,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f0fb508f1f86d4bc039adab40689367f34f5141a90043b092e43663b1f959a6,PodSandboxId:2a80897d4d6989c628408425633aea48eaf1f123b026cb7a29325fbfb4eb2bba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730313585751314744,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca112c63e30eb433bb700ecb5b5894a5,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:381be95e92ca6d096c838a11caffeef70b5bdf29633660086db3a591af3efa3c,PodSandboxId:aa574b692710d467b2e8775397cfa8db07704e00fb33c11512bf39c82e680973,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730313585689189484,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 229cc2e30ddb1f711193989fd13f4f67,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db863ebdc17e0d81937b5738a5fe433285fd91846fa54c761cf0cd4cd4987b73,PodSandboxId:bc13396acc704797f09483572e0822fe63133214ccd6fcb11477cb808dbc282f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730313585724076197,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72bc339147b21477e295945da9cb0b0e,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:661ed7108dbf576219f0d0850f1ff6e5884d78718a480882365b8f283ab57acb,PodSandboxId:a4e686c5a4e0593c382829bac8d2451984a6e41381d31fb1620855986434050a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730313585675039957,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66c6d80d0e594b993bec02307d522c1a,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b4b9940c-a307-4503-8b9b-8d28bd0dbb85 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:46:35 ha-174833 crio[664]: time="2024-10-30 18:46:35.099437049Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2d4a19c7-d791-4791-916a-ffb9a16b50d1 name=/runtime.v1.RuntimeService/Version
	Oct 30 18:46:35 ha-174833 crio[664]: time="2024-10-30 18:46:35.099528832Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2d4a19c7-d791-4791-916a-ffb9a16b50d1 name=/runtime.v1.RuntimeService/Version
	Oct 30 18:46:35 ha-174833 crio[664]: time="2024-10-30 18:46:35.100969248Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4d323918-23d0-4663-9311-de37a4d97dbb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 18:46:35 ha-174833 crio[664]: time="2024-10-30 18:46:35.101449505Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313995101419110,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4d323918-23d0-4663-9311-de37a4d97dbb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 18:46:35 ha-174833 crio[664]: time="2024-10-30 18:46:35.102563418Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=78e3e3ff-6c81-4e98-b426-aabcafe137e7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:46:35 ha-174833 crio[664]: time="2024-10-30 18:46:35.102624173Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=78e3e3ff-6c81-4e98-b426-aabcafe137e7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:46:35 ha-174833 crio[664]: time="2024-10-30 18:46:35.102833809Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b50f8293a0eac5d428cb2cdd59140c816100bc3a83890343a8ecc1a0e0699009,PodSandboxId:4b32508187feda9fb89e9db472edcc9269380d89c7fabdce7ea42c6bd040cd72,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730313615217360825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tnj67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f869042d-e37b-414d-ab11-422e933e2952,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6694cd6bc9e3c41a78c9a5de80c4b95e2545fc061eac062221b2fe3ef4aadb3,PodSandboxId:e4daca50f6e1cba7757e2725bf95ff4bf83227ca7fa99966ead6cd77711dcace,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730313615138289233,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 8e6d1d5e-1944-4c77-8018-42bc526984c2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80919506252b4e3df3db12ac62352ffcfc65a784996bbef7adcefc782480a86f,PodSandboxId:80f0d2bac7bdbf4e7081698c44c0100f1844741b0105b2c4afc4373e50786bd2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730313615087765259,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qrkkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3470734c-6
1ab-4cd9-a026-f07d5ca6a290,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46301d1401a148dee81e2e167582dcebc8e9ce533e3150308e7eb51799e4f1ef,PodSandboxId:4a4a82673e78f30d1d9716c24cfa484d7ad4d27f53c2f74c27f2b30910b16f1f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:C
ONTAINER_RUNNING,CreatedAt:1730313603325015951,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pm48g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c2c95da-7659-43c4-aae2-3af6fbe9515c,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:634060e657ba25894ea35d4e8b8429b57de139396a52f53af88876e683de5740,PodSandboxId:5d414abeb9a8ee9776eede410c7e6863d30e3e79eead9ef5eaf48e3f823ff1eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:173031359
7433571078,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2qt2n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc90b13-926f-4a7e-aa12-3e274c0777f6,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8b9126272c4605fcadd37917b2874bb833efc9c16584c83ac4d1307f59c62a,PodSandboxId:635aa65f78ff8602bbbd26a8ca9fa00e6475baebf133de26e2e88e93e579621b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:17303135896
66156759,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aea5be4e46ad37a31e0d88b2c9c0158c,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f0fb508f1f86d4bc039adab40689367f34f5141a90043b092e43663b1f959a6,PodSandboxId:2a80897d4d6989c628408425633aea48eaf1f123b026cb7a29325fbfb4eb2bba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730313585751314744,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca112c63e30eb433bb700ecb5b5894a5,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:381be95e92ca6d096c838a11caffeef70b5bdf29633660086db3a591af3efa3c,PodSandboxId:aa574b692710d467b2e8775397cfa8db07704e00fb33c11512bf39c82e680973,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730313585689189484,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 229cc2e30ddb1f711193989fd13f4f67,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db863ebdc17e0d81937b5738a5fe433285fd91846fa54c761cf0cd4cd4987b73,PodSandboxId:bc13396acc704797f09483572e0822fe63133214ccd6fcb11477cb808dbc282f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730313585724076197,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72bc339147b21477e295945da9cb0b0e,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:661ed7108dbf576219f0d0850f1ff6e5884d78718a480882365b8f283ab57acb,PodSandboxId:a4e686c5a4e0593c382829bac8d2451984a6e41381d31fb1620855986434050a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730313585675039957,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66c6d80d0e594b993bec02307d522c1a,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=78e3e3ff-6c81-4e98-b426-aabcafe137e7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:46:35 ha-174833 crio[664]: time="2024-10-30 18:46:35.144394830Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1e4ce690-ca6e-47dc-9839-8e0775303055 name=/runtime.v1.RuntimeService/Version
	Oct 30 18:46:35 ha-174833 crio[664]: time="2024-10-30 18:46:35.144466398Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1e4ce690-ca6e-47dc-9839-8e0775303055 name=/runtime.v1.RuntimeService/Version
	Oct 30 18:46:35 ha-174833 crio[664]: time="2024-10-30 18:46:35.145876577Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a8db6f3f-7c5c-446f-bac0-b5b256456c39 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 18:46:35 ha-174833 crio[664]: time="2024-10-30 18:46:35.146502689Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313995146433480,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a8db6f3f-7c5c-446f-bac0-b5b256456c39 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 18:46:35 ha-174833 crio[664]: time="2024-10-30 18:46:35.147502352Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bc303fda-9ec3-4364-85d1-3cd87d4bf556 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:46:35 ha-174833 crio[664]: time="2024-10-30 18:46:35.147555238Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bc303fda-9ec3-4364-85d1-3cd87d4bf556 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:46:35 ha-174833 crio[664]: time="2024-10-30 18:46:35.147838001Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b50f8293a0eac5d428cb2cdd59140c816100bc3a83890343a8ecc1a0e0699009,PodSandboxId:4b32508187feda9fb89e9db472edcc9269380d89c7fabdce7ea42c6bd040cd72,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730313615217360825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tnj67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f869042d-e37b-414d-ab11-422e933e2952,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6694cd6bc9e3c41a78c9a5de80c4b95e2545fc061eac062221b2fe3ef4aadb3,PodSandboxId:e4daca50f6e1cba7757e2725bf95ff4bf83227ca7fa99966ead6cd77711dcace,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730313615138289233,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 8e6d1d5e-1944-4c77-8018-42bc526984c2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80919506252b4e3df3db12ac62352ffcfc65a784996bbef7adcefc782480a86f,PodSandboxId:80f0d2bac7bdbf4e7081698c44c0100f1844741b0105b2c4afc4373e50786bd2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730313615087765259,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qrkkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3470734c-6
1ab-4cd9-a026-f07d5ca6a290,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46301d1401a148dee81e2e167582dcebc8e9ce533e3150308e7eb51799e4f1ef,PodSandboxId:4a4a82673e78f30d1d9716c24cfa484d7ad4d27f53c2f74c27f2b30910b16f1f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:C
ONTAINER_RUNNING,CreatedAt:1730313603325015951,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pm48g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c2c95da-7659-43c4-aae2-3af6fbe9515c,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:634060e657ba25894ea35d4e8b8429b57de139396a52f53af88876e683de5740,PodSandboxId:5d414abeb9a8ee9776eede410c7e6863d30e3e79eead9ef5eaf48e3f823ff1eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:173031359
7433571078,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2qt2n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc90b13-926f-4a7e-aa12-3e274c0777f6,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8b9126272c4605fcadd37917b2874bb833efc9c16584c83ac4d1307f59c62a,PodSandboxId:635aa65f78ff8602bbbd26a8ca9fa00e6475baebf133de26e2e88e93e579621b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:17303135896
66156759,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aea5be4e46ad37a31e0d88b2c9c0158c,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f0fb508f1f86d4bc039adab40689367f34f5141a90043b092e43663b1f959a6,PodSandboxId:2a80897d4d6989c628408425633aea48eaf1f123b026cb7a29325fbfb4eb2bba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730313585751314744,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca112c63e30eb433bb700ecb5b5894a5,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:381be95e92ca6d096c838a11caffeef70b5bdf29633660086db3a591af3efa3c,PodSandboxId:aa574b692710d467b2e8775397cfa8db07704e00fb33c11512bf39c82e680973,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730313585689189484,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 229cc2e30ddb1f711193989fd13f4f67,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db863ebdc17e0d81937b5738a5fe433285fd91846fa54c761cf0cd4cd4987b73,PodSandboxId:bc13396acc704797f09483572e0822fe63133214ccd6fcb11477cb808dbc282f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730313585724076197,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72bc339147b21477e295945da9cb0b0e,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:661ed7108dbf576219f0d0850f1ff6e5884d78718a480882365b8f283ab57acb,PodSandboxId:a4e686c5a4e0593c382829bac8d2451984a6e41381d31fb1620855986434050a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730313585675039957,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66c6d80d0e594b993bec02307d522c1a,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bc303fda-9ec3-4364-85d1-3cd87d4bf556 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:46:35 ha-174833 crio[664]: time="2024-10-30 18:46:35.188603042Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6a73cea5-4ad6-41a3-9593-c75955d2b1d1 name=/runtime.v1.RuntimeService/Version
	Oct 30 18:46:35 ha-174833 crio[664]: time="2024-10-30 18:46:35.188673480Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6a73cea5-4ad6-41a3-9593-c75955d2b1d1 name=/runtime.v1.RuntimeService/Version
	Oct 30 18:46:35 ha-174833 crio[664]: time="2024-10-30 18:46:35.190092992Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d7dbcf44-6f3d-4819-b796-a3ffe3ce9a54 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 18:46:35 ha-174833 crio[664]: time="2024-10-30 18:46:35.190933263Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313995190868495,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d7dbcf44-6f3d-4819-b796-a3ffe3ce9a54 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 18:46:35 ha-174833 crio[664]: time="2024-10-30 18:46:35.191519512Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=87d3638b-735f-41bc-b0da-6e3e45a9a216 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:46:35 ha-174833 crio[664]: time="2024-10-30 18:46:35.191568292Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=87d3638b-735f-41bc-b0da-6e3e45a9a216 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 18:46:35 ha-174833 crio[664]: time="2024-10-30 18:46:35.191777875Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b50f8293a0eac5d428cb2cdd59140c816100bc3a83890343a8ecc1a0e0699009,PodSandboxId:4b32508187feda9fb89e9db472edcc9269380d89c7fabdce7ea42c6bd040cd72,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730313615217360825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tnj67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f869042d-e37b-414d-ab11-422e933e2952,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6694cd6bc9e3c41a78c9a5de80c4b95e2545fc061eac062221b2fe3ef4aadb3,PodSandboxId:e4daca50f6e1cba7757e2725bf95ff4bf83227ca7fa99966ead6cd77711dcace,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730313615138289233,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 8e6d1d5e-1944-4c77-8018-42bc526984c2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80919506252b4e3df3db12ac62352ffcfc65a784996bbef7adcefc782480a86f,PodSandboxId:80f0d2bac7bdbf4e7081698c44c0100f1844741b0105b2c4afc4373e50786bd2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730313615087765259,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qrkkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3470734c-6
1ab-4cd9-a026-f07d5ca6a290,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46301d1401a148dee81e2e167582dcebc8e9ce533e3150308e7eb51799e4f1ef,PodSandboxId:4a4a82673e78f30d1d9716c24cfa484d7ad4d27f53c2f74c27f2b30910b16f1f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:C
ONTAINER_RUNNING,CreatedAt:1730313603325015951,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pm48g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c2c95da-7659-43c4-aae2-3af6fbe9515c,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:634060e657ba25894ea35d4e8b8429b57de139396a52f53af88876e683de5740,PodSandboxId:5d414abeb9a8ee9776eede410c7e6863d30e3e79eead9ef5eaf48e3f823ff1eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:173031359
7433571078,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2qt2n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc90b13-926f-4a7e-aa12-3e274c0777f6,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8b9126272c4605fcadd37917b2874bb833efc9c16584c83ac4d1307f59c62a,PodSandboxId:635aa65f78ff8602bbbd26a8ca9fa00e6475baebf133de26e2e88e93e579621b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:17303135896
66156759,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aea5be4e46ad37a31e0d88b2c9c0158c,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f0fb508f1f86d4bc039adab40689367f34f5141a90043b092e43663b1f959a6,PodSandboxId:2a80897d4d6989c628408425633aea48eaf1f123b026cb7a29325fbfb4eb2bba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730313585751314744,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca112c63e30eb433bb700ecb5b5894a5,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:381be95e92ca6d096c838a11caffeef70b5bdf29633660086db3a591af3efa3c,PodSandboxId:aa574b692710d467b2e8775397cfa8db07704e00fb33c11512bf39c82e680973,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730313585689189484,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 229cc2e30ddb1f711193989fd13f4f67,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db863ebdc17e0d81937b5738a5fe433285fd91846fa54c761cf0cd4cd4987b73,PodSandboxId:bc13396acc704797f09483572e0822fe63133214ccd6fcb11477cb808dbc282f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730313585724076197,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72bc339147b21477e295945da9cb0b0e,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:661ed7108dbf576219f0d0850f1ff6e5884d78718a480882365b8f283ab57acb,PodSandboxId:a4e686c5a4e0593c382829bac8d2451984a6e41381d31fb1620855986434050a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730313585675039957,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-174833,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66c6d80d0e594b993bec02307d522c1a,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=87d3638b-735f-41bc-b0da-6e3e45a9a216 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b50f8293a0eac       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                     6 minutes ago       Running             coredns                   0                   4b32508187fed       coredns-7c65d6cfc9-tnj67
	b6694cd6bc9e3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                     6 minutes ago       Running             storage-provisioner       0                   e4daca50f6e1c       storage-provisioner
	80919506252b4       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                     6 minutes ago       Running             coredns                   0                   80f0d2bac7bdb       coredns-7c65d6cfc9-qrkkc
	46301d1401a14       docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16   6 minutes ago       Running             kindnet-cni               0                   4a4a82673e78f       kindnet-pm48g
	634060e657ba2       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                     6 minutes ago       Running             kube-proxy                0                   5d414abeb9a8e       kube-proxy-2qt2n
	da8b9126272c4       ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215    6 minutes ago       Running             kube-vip                  0                   635aa65f78ff8       kube-vip-ha-174833
	6f0fb508f1f86       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                     6 minutes ago       Running             kube-scheduler            0                   2a80897d4d698       kube-scheduler-ha-174833
	db863ebdc17e0       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                     6 minutes ago       Running             kube-controller-manager   0                   bc13396acc704       kube-controller-manager-ha-174833
	381be95e92ca6       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                     6 minutes ago       Running             etcd                      0                   aa574b692710d       etcd-ha-174833
	661ed7108dbf5       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                     6 minutes ago       Running             kube-apiserver            0                   a4e686c5a4e05       kube-apiserver-ha-174833
	
	
	==> coredns [80919506252b4e3df3db12ac62352ffcfc65a784996bbef7adcefc782480a86f] <==
	[INFO] 10.244.2.2:49872 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000260615s
	[INFO] 10.244.2.2:45985 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000215389s
	[INFO] 10.244.1.3:58699 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184263s
	[INFO] 10.244.1.3:36745 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000223993s
	[INFO] 10.244.1.3:52696 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000197445s
	[INFO] 10.244.1.3:51136 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.008496656s
	[INFO] 10.244.1.3:37326 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000170193s
	[INFO] 10.244.2.2:41356 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001504514s
	[INFO] 10.244.2.2:58448 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000121598s
	[INFO] 10.244.2.2:57683 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115706s
	[INFO] 10.244.1.2:44356 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001773314s
	[INFO] 10.244.1.2:53338 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000092182s
	[INFO] 10.244.1.2:36505 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000123936s
	[INFO] 10.244.1.2:50770 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000129391s
	[INFO] 10.244.1.3:45376 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119608s
	[INFO] 10.244.1.3:38056 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104793s
	[INFO] 10.244.2.2:56050 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.001014289s
	[INFO] 10.244.2.2:46354 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000094957s
	[INFO] 10.244.1.2:43247 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140652s
	[INFO] 10.244.1.3:59260 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000286102s
	[INFO] 10.244.1.3:42613 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000177355s
	[INFO] 10.244.2.2:38778 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139553s
	[INFO] 10.244.2.2:55445 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000162449s
	[INFO] 10.244.1.2:49123 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000103971s
	[INFO] 10.244.1.2:36025 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000103655s
	
	
	==> coredns [b50f8293a0eac5d428cb2cdd59140c816100bc3a83890343a8ecc1a0e0699009] <==
	[INFO] 10.244.1.3:35936 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.006730126s
	[INFO] 10.244.1.3:52049 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000164529s
	[INFO] 10.244.1.3:41429 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000145894s
	[INFO] 10.244.2.2:38865 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015631s
	[INFO] 10.244.2.2:35468 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001359248s
	[INFO] 10.244.2.2:39539 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154504s
	[INFO] 10.244.2.2:40996 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00012336s
	[INFO] 10.244.2.2:36394 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103847s
	[INFO] 10.244.1.2:36748 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157155s
	[INFO] 10.244.1.2:57168 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000183772s
	[INFO] 10.244.1.2:44765 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001208743s
	[INFO] 10.244.1.2:51648 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000094986s
	[INFO] 10.244.1.3:35468 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117052s
	[INFO] 10.244.1.3:41666 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093918s
	[INFO] 10.244.2.2:40566 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000179128s
	[INFO] 10.244.2.2:35306 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086624s
	[INFO] 10.244.1.2:54037 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000136664s
	[INFO] 10.244.1.2:39370 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000109182s
	[INFO] 10.244.1.2:41814 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123818s
	[INFO] 10.244.1.3:44728 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000170139s
	[INFO] 10.244.1.3:56805 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000142203s
	[INFO] 10.244.2.2:36863 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000187523s
	[INFO] 10.244.2.2:41661 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000120093s
	[INFO] 10.244.1.2:52634 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137066s
	[INFO] 10.244.1.2:35418 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000120994s
	
	
	==> describe nodes <==
	Name:               ha-174833
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-174833
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0
	                    minikube.k8s.io/name=ha-174833
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_30T18_39_53_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Oct 2024 18:39:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-174833
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Oct 2024 18:46:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Oct 2024 18:45:28 +0000   Wed, 30 Oct 2024 18:39:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Oct 2024 18:45:28 +0000   Wed, 30 Oct 2024 18:39:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Oct 2024 18:45:28 +0000   Wed, 30 Oct 2024 18:39:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Oct 2024 18:45:28 +0000   Wed, 30 Oct 2024 18:40:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.141
	  Hostname:    ha-174833
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7ccc5c9f42c54438b6652723644bbeef
	  System UUID:                7ccc5c9f-42c5-4438-b665-2723644bbeef
	  Boot ID:                    83dbe7e6-9d54-44c7-aa42-e17dc8d9a1a0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-qrkkc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m39s
	  kube-system                 coredns-7c65d6cfc9-tnj67             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m39s
	  kube-system                 etcd-ha-174833                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m43s
	  kube-system                 kindnet-pm48g                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m39s
	  kube-system                 kube-apiserver-ha-174833             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m43s
	  kube-system                 kube-controller-manager-ha-174833    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m43s
	  kube-system                 kube-proxy-2qt2n                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m39s
	  kube-system                 kube-scheduler-ha-174833             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m43s
	  kube-system                 kube-vip-ha-174833                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m44s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m37s                  kube-proxy       
	  Normal  NodeHasSufficientPID     6m50s (x7 over 6m50s)  kubelet          Node ha-174833 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m50s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m50s (x8 over 6m50s)  kubelet          Node ha-174833 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m50s (x8 over 6m50s)  kubelet          Node ha-174833 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m44s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m43s                  kubelet          Node ha-174833 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m43s                  kubelet          Node ha-174833 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m43s                  kubelet          Node ha-174833 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m40s                  node-controller  Node ha-174833 event: Registered Node ha-174833 in Controller
	  Normal  NodeReady                6m21s                  kubelet          Node ha-174833 status is now: NodeReady
	  Normal  RegisteredNode           5m43s                  node-controller  Node ha-174833 event: Registered Node ha-174833 in Controller
	  Normal  RegisteredNode           4m23s                  node-controller  Node ha-174833 event: Registered Node ha-174833 in Controller
	
	
	Name:               ha-174833-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-174833-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0
	                    minikube.k8s.io/name=ha-174833
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_30T18_40_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Oct 2024 18:40:44 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-174833-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Oct 2024 18:43:48 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 30 Oct 2024 18:42:45 +0000   Wed, 30 Oct 2024 18:44:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 30 Oct 2024 18:42:45 +0000   Wed, 30 Oct 2024 18:44:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 30 Oct 2024 18:42:45 +0000   Wed, 30 Oct 2024 18:44:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 30 Oct 2024 18:42:45 +0000   Wed, 30 Oct 2024 18:44:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.67
	  Hostname:    ha-174833-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 44df5dbbd2d444bb8a426278602ee677
	  System UUID:                44df5dbb-d2d4-44bb-8a42-6278602ee677
	  Boot ID:                    360af464-681d-4348-b7f8-dd08e7d88924
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-mm586                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  default                     busybox-7dff88458-v6kn9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 etcd-ha-174833-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m49s
	  kube-system                 kindnet-rlzbn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m51s
	  kube-system                 kube-apiserver-ha-174833-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 kube-controller-manager-ha-174833-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m48s
	  kube-system                 kube-proxy-hg2st                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 kube-scheduler-ha-174833-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m43s
	  kube-system                 kube-vip-ha-174833-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m47s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m51s (x8 over 5m51s)  kubelet          Node ha-174833-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m51s (x8 over 5m51s)  kubelet          Node ha-174833-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m51s (x7 over 5m51s)  kubelet          Node ha-174833-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m50s                  node-controller  Node ha-174833-m02 event: Registered Node ha-174833-m02 in Controller
	  Normal  RegisteredNode           5m43s                  node-controller  Node ha-174833-m02 event: Registered Node ha-174833-m02 in Controller
	  Normal  RegisteredNode           4m23s                  node-controller  Node ha-174833-m02 event: Registered Node ha-174833-m02 in Controller
	  Normal  NodeNotReady             2m5s                   node-controller  Node ha-174833-m02 status is now: NodeNotReady
	
	
	Name:               ha-174833-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-174833-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0
	                    minikube.k8s.io/name=ha-174833
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_30T18_42_06_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Oct 2024 18:42:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-174833-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Oct 2024 18:46:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Oct 2024 18:43:04 +0000   Wed, 30 Oct 2024 18:42:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Oct 2024 18:43:04 +0000   Wed, 30 Oct 2024 18:42:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Oct 2024 18:43:04 +0000   Wed, 30 Oct 2024 18:42:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Oct 2024 18:43:04 +0000   Wed, 30 Oct 2024 18:42:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.238
	  Hostname:    ha-174833-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8a25aeed7bbc4bd4a357771ce914b28b
	  System UUID:                8a25aeed-7bbc-4bd4-a357-771ce914b28b
	  Boot ID:                    3552b03e-4535-4240-8adc-99b111c48f7c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rzbbm                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 etcd-ha-174833-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m31s
	  kube-system                 kindnet-b76pd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m32s
	  kube-system                 kube-apiserver-ha-174833-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 kube-controller-manager-ha-174833-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 kube-proxy-g7l7z                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 kube-scheduler-ha-174833-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 kube-vip-ha-174833-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m27s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m32s (x8 over 4m32s)  kubelet          Node ha-174833-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m32s (x8 over 4m32s)  kubelet          Node ha-174833-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m32s (x7 over 4m32s)  kubelet          Node ha-174833-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m32s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m30s                  node-controller  Node ha-174833-m03 event: Registered Node ha-174833-m03 in Controller
	  Normal  RegisteredNode           4m28s                  node-controller  Node ha-174833-m03 event: Registered Node ha-174833-m03 in Controller
	  Normal  RegisteredNode           4m23s                  node-controller  Node ha-174833-m03 event: Registered Node ha-174833-m03 in Controller
	
	
	Name:               ha-174833-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-174833-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0
	                    minikube.k8s.io/name=ha-174833
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_30T18_43_15_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Oct 2024 18:43:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-174833-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Oct 2024 18:46:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Oct 2024 18:43:45 +0000   Wed, 30 Oct 2024 18:43:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Oct 2024 18:43:45 +0000   Wed, 30 Oct 2024 18:43:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Oct 2024 18:43:45 +0000   Wed, 30 Oct 2024 18:43:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Oct 2024 18:43:45 +0000   Wed, 30 Oct 2024 18:43:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.123
	  Hostname:    ha-174833-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 65b27c1ce02d45b78ed3fcddd1aae236
	  System UUID:                65b27c1c-e02d-45b7-8ed3-fcddd1aae236
	  Boot ID:                    25699951-947c-4e74-aa23-b7f7f9d75023
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2dhq5       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m21s
	  kube-system                 kube-proxy-nzl42    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m16s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     3m21s                  cidrAllocator    Node ha-174833-m04 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  3m21s (x2 over 3m21s)  kubelet          Node ha-174833-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m21s (x2 over 3m21s)  kubelet          Node ha-174833-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m21s (x2 over 3m21s)  kubelet          Node ha-174833-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m20s                  node-controller  Node ha-174833-m04 event: Registered Node ha-174833-m04 in Controller
	  Normal  RegisteredNode           3m18s                  node-controller  Node ha-174833-m04 event: Registered Node ha-174833-m04 in Controller
	  Normal  RegisteredNode           3m18s                  node-controller  Node ha-174833-m04 event: Registered Node ha-174833-m04 in Controller
	  Normal  NodeReady                3m                     kubelet          Node ha-174833-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct30 18:39] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050141] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040202] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.858254] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.508080] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.580074] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.619811] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.059036] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050086] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.189200] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.106863] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.256172] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +3.944359] systemd-fstab-generator[750]: Ignoring "noauto" option for root device
	[  +4.089078] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.056939] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.232740] kauditd_printk_skb: 74 callbacks suppressed
	[  +1.917340] systemd-fstab-generator[1295]: Ignoring "noauto" option for root device
	[  +5.757118] kauditd_printk_skb: 23 callbacks suppressed
	[Oct30 18:40] kauditd_printk_skb: 32 callbacks suppressed
	[ +47.325044] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [381be95e92ca6d096c838a11caffeef70b5bdf29633660086db3a591af3efa3c] <==
	{"level":"warn","ts":"2024-10-30T18:46:35.459343Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:35.465828Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:35.469285Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:35.472232Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:35.482831Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:35.489457Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:35.495896Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:35.500516Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:35.503786Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:35.509771Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:35.516067Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:35.521985Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:35.525428Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:35.527069Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:35.528495Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:35.536032Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:35.546850Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:35.555036Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:35.558543Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:35.562271Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:35.565934Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:35.571754Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:35.572058Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:35.578622Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-30T18:46:35.615657Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2398e045949c73cb","from":"2398e045949c73cb","remote-peer-id":"e95b9b8b1a72dec4","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:46:35 up 7 min,  0 users,  load average: 0.18, 0.34, 0.20
	Linux ha-174833 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [46301d1401a148dee81e2e167582dcebc8e9ce533e3150308e7eb51799e4f1ef] <==
	I1030 18:46:04.314019       1 main.go:324] Node ha-174833-m04 has CIDR [10.244.3.0/24] 
	I1030 18:46:14.313413       1 main.go:297] Handling node with IPs: map[192.168.39.141:{}]
	I1030 18:46:14.313476       1 main.go:301] handling current node
	I1030 18:46:14.313504       1 main.go:297] Handling node with IPs: map[192.168.39.67:{}]
	I1030 18:46:14.313513       1 main.go:324] Node ha-174833-m02 has CIDR [10.244.1.0/24] 
	I1030 18:46:14.313806       1 main.go:297] Handling node with IPs: map[192.168.39.238:{}]
	I1030 18:46:14.313832       1 main.go:324] Node ha-174833-m03 has CIDR [10.244.2.0/24] 
	I1030 18:46:14.314013       1 main.go:297] Handling node with IPs: map[192.168.39.123:{}]
	I1030 18:46:14.314036       1 main.go:324] Node ha-174833-m04 has CIDR [10.244.3.0/24] 
	I1030 18:46:24.319131       1 main.go:297] Handling node with IPs: map[192.168.39.141:{}]
	I1030 18:46:24.319165       1 main.go:301] handling current node
	I1030 18:46:24.319180       1 main.go:297] Handling node with IPs: map[192.168.39.67:{}]
	I1030 18:46:24.319184       1 main.go:324] Node ha-174833-m02 has CIDR [10.244.1.0/24] 
	I1030 18:46:24.319509       1 main.go:297] Handling node with IPs: map[192.168.39.238:{}]
	I1030 18:46:24.319534       1 main.go:324] Node ha-174833-m03 has CIDR [10.244.2.0/24] 
	I1030 18:46:24.319684       1 main.go:297] Handling node with IPs: map[192.168.39.123:{}]
	I1030 18:46:24.319708       1 main.go:324] Node ha-174833-m04 has CIDR [10.244.3.0/24] 
	I1030 18:46:34.321430       1 main.go:297] Handling node with IPs: map[192.168.39.67:{}]
	I1030 18:46:34.321606       1 main.go:324] Node ha-174833-m02 has CIDR [10.244.1.0/24] 
	I1030 18:46:34.321933       1 main.go:297] Handling node with IPs: map[192.168.39.238:{}]
	I1030 18:46:34.321984       1 main.go:324] Node ha-174833-m03 has CIDR [10.244.2.0/24] 
	I1030 18:46:34.322271       1 main.go:297] Handling node with IPs: map[192.168.39.123:{}]
	I1030 18:46:34.322312       1 main.go:324] Node ha-174833-m04 has CIDR [10.244.3.0/24] 
	I1030 18:46:34.322472       1 main.go:297] Handling node with IPs: map[192.168.39.141:{}]
	I1030 18:46:34.322496       1 main.go:301] handling current node
	
	
	==> kube-apiserver [661ed7108dbf576219f0d0850f1ff6e5884d78718a480882365b8f283ab57acb] <==
	I1030 18:39:50.264612       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1030 18:39:50.401162       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1030 18:39:50.407669       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.141]
	I1030 18:39:50.408487       1 controller.go:615] quota admission added evaluator for: endpoints
	I1030 18:39:50.417171       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1030 18:39:50.434785       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1030 18:39:51.992504       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1030 18:39:52.038007       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1030 18:39:52.050097       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1030 18:39:55.887886       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1030 18:39:56.039666       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1030 18:42:42.298130       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41446: use of closed network connection
	E1030 18:42:42.500141       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41460: use of closed network connection
	E1030 18:42:42.681190       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41478: use of closed network connection
	E1030 18:42:42.876163       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41496: use of closed network connection
	E1030 18:42:43.053880       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41524: use of closed network connection
	E1030 18:42:43.422726       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41570: use of closed network connection
	E1030 18:42:43.605703       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41578: use of closed network connection
	E1030 18:42:43.785641       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41594: use of closed network connection
	E1030 18:42:44.079143       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41622: use of closed network connection
	E1030 18:42:44.278108       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41630: use of closed network connection
	E1030 18:42:44.464009       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41654: use of closed network connection
	E1030 18:42:44.647039       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41670: use of closed network connection
	E1030 18:42:44.825565       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41686: use of closed network connection
	E1030 18:42:45.007583       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41704: use of closed network connection
	
	
	==> kube-controller-manager [db863ebdc17e0d81937b5738a5fe433285fd91846fa54c761cf0cd4cd4987b73] <==
	I1030 18:43:14.768963       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:14.886660       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:15.225099       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-174833-m04"
	I1030 18:43:15.270413       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:15.350905       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:17.242429       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:17.306242       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:17.754966       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:17.845608       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:24.906507       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:35.742819       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:35.743714       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-174833-m04"
	I1030 18:43:35.758129       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:37.268796       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:43:45.220918       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m04"
	I1030 18:44:30.252088       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m02"
	I1030 18:44:30.252535       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-174833-m04"
	I1030 18:44:30.280327       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m02"
	I1030 18:44:30.294546       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="25.947854ms"
	I1030 18:44:30.294861       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="105.928µs"
	I1030 18:44:30.441730       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="22.437828ms"
	I1030 18:44:30.442971       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="183.461µs"
	I1030 18:44:32.399995       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m02"
	I1030 18:44:35.500584       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833-m02"
	I1030 18:45:28.632096       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-174833"
	
	
	==> kube-proxy [634060e657ba25894ea35d4e8b8429b57de139396a52f53af88876e683de5740] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1030 18:39:57.657528       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1030 18:39:57.672099       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.141"]
	E1030 18:39:57.672270       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1030 18:39:57.707431       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1030 18:39:57.707476       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1030 18:39:57.707498       1 server_linux.go:169] "Using iptables Proxier"
	I1030 18:39:57.710062       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1030 18:39:57.710384       1 server.go:483] "Version info" version="v1.31.2"
	I1030 18:39:57.710412       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1030 18:39:57.711719       1 config.go:199] "Starting service config controller"
	I1030 18:39:57.711756       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1030 18:39:57.711783       1 config.go:105] "Starting endpoint slice config controller"
	I1030 18:39:57.711787       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1030 18:39:57.712612       1 config.go:328] "Starting node config controller"
	I1030 18:39:57.712701       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1030 18:39:57.812186       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1030 18:39:57.812427       1 shared_informer.go:320] Caches are synced for service config
	I1030 18:39:57.813054       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6f0fb508f1f86d4bc039adab40689367f34f5141a90043b092e43663b1f959a6] <==
	W1030 18:39:49.816172       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1030 18:39:49.816268       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1030 18:39:49.949917       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1030 18:39:49.949971       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1030 18:39:49.991072       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1030 18:39:49.991150       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1030 18:39:52.691806       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1030 18:42:33.022088       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-mm586\": pod busybox-7dff88458-mm586 is already assigned to node \"ha-174833-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-mm586" node="ha-174833-m03"
	E1030 18:42:33.022366       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-mm586\": pod busybox-7dff88458-mm586 is already assigned to node \"ha-174833-m02\"" pod="default/busybox-7dff88458-mm586"
	E1030 18:43:14.801891       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-nzl42\": pod kube-proxy-nzl42 is already assigned to node \"ha-174833-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-nzl42" node="ha-174833-m04"
	E1030 18:43:14.807808       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 3291acf1-7798-4998-95fd-5094835e017f(kube-system/kube-proxy-nzl42) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-nzl42"
	E1030 18:43:14.807930       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-nzl42\": pod kube-proxy-nzl42 is already assigned to node \"ha-174833-m04\"" pod="kube-system/kube-proxy-nzl42"
	I1030 18:43:14.809848       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-nzl42" node="ha-174833-m04"
	E1030 18:43:14.810858       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-ptwbp\": pod kindnet-ptwbp is already assigned to node \"ha-174833-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-ptwbp" node="ha-174833-m04"
	E1030 18:43:14.814494       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 3144d47c-0cef-414b-b657-6a3c10ada751(kube-system/kindnet-ptwbp) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-ptwbp"
	E1030 18:43:14.814760       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-ptwbp\": pod kindnet-ptwbp is already assigned to node \"ha-174833-m04\"" pod="kube-system/kindnet-ptwbp"
	I1030 18:43:14.814869       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-ptwbp" node="ha-174833-m04"
	E1030 18:43:14.859158       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-vp4bf\": pod kube-proxy-vp4bf is already assigned to node \"ha-174833-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-vp4bf" node="ha-174833-m04"
	E1030 18:43:14.859832       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 51293c2a-e424-4d2b-a692-1d8df3e4eb88(kube-system/kube-proxy-vp4bf) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-vp4bf"
	E1030 18:43:14.860153       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-vp4bf\": pod kube-proxy-vp4bf is already assigned to node \"ha-174833-m04\"" pod="kube-system/kube-proxy-vp4bf"
	I1030 18:43:14.860458       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-vp4bf" node="ha-174833-m04"
	E1030 18:43:14.864834       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-dsxh6\": pod kindnet-dsxh6 is already assigned to node \"ha-174833-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-dsxh6" node="ha-174833-m04"
	E1030 18:43:14.866342       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 3cf9c20d-84c1-4bd6-8f34-453bee8cc673(kube-system/kindnet-dsxh6) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-dsxh6"
	E1030 18:43:14.866529       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-dsxh6\": pod kindnet-dsxh6 is already assigned to node \"ha-174833-m04\"" pod="kube-system/kindnet-dsxh6"
	I1030 18:43:14.866552       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-dsxh6" node="ha-174833-m04"
	
	
	==> kubelet <==
	Oct 30 18:45:02 ha-174833 kubelet[1302]: E1030 18:45:02.047183    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313902046655317,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:02 ha-174833 kubelet[1302]: E1030 18:45:02.047499    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313902046655317,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:12 ha-174833 kubelet[1302]: E1030 18:45:12.048946    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313912048650625,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:12 ha-174833 kubelet[1302]: E1030 18:45:12.049303    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313912048650625,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:22 ha-174833 kubelet[1302]: E1030 18:45:22.050794    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313922050546484,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:22 ha-174833 kubelet[1302]: E1030 18:45:22.050834    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313922050546484,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:32 ha-174833 kubelet[1302]: E1030 18:45:32.053552    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313932053080303,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:32 ha-174833 kubelet[1302]: E1030 18:45:32.053658    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313932053080303,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:42 ha-174833 kubelet[1302]: E1030 18:45:42.055784    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313942055446197,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:42 ha-174833 kubelet[1302]: E1030 18:45:42.056077    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313942055446197,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:51 ha-174833 kubelet[1302]: E1030 18:45:51.922951    1302 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 30 18:45:51 ha-174833 kubelet[1302]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 30 18:45:51 ha-174833 kubelet[1302]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 30 18:45:51 ha-174833 kubelet[1302]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 30 18:45:51 ha-174833 kubelet[1302]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 30 18:45:52 ha-174833 kubelet[1302]: E1030 18:45:52.058449    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313952057983888,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:45:52 ha-174833 kubelet[1302]: E1030 18:45:52.058518    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313952057983888,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:46:02 ha-174833 kubelet[1302]: E1030 18:46:02.060855    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313962060627661,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:46:02 ha-174833 kubelet[1302]: E1030 18:46:02.060895    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313962060627661,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:46:12 ha-174833 kubelet[1302]: E1030 18:46:12.062294    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313972061933470,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:46:12 ha-174833 kubelet[1302]: E1030 18:46:12.062632    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313972061933470,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:46:22 ha-174833 kubelet[1302]: E1030 18:46:22.064946    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313982064558351,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:46:22 ha-174833 kubelet[1302]: E1030 18:46:22.064979    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313982064558351,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:46:32 ha-174833 kubelet[1302]: E1030 18:46:32.066359    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313992065941712,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 18:46:32 ha-174833 kubelet[1302]: E1030 18:46:32.066708    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730313992065941712,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147204,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-174833 -n ha-174833
helpers_test.go:261: (dbg) Run:  kubectl --context ha-174833 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.29s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (277.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-174833 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-174833 -v=7 --alsologtostderr
E1030 18:48:17.243095  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/functional-683899/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-174833 -v=7 --alsologtostderr: exit status 82 (2m1.892039699s)

                                                
                                                
-- stdout --
	* Stopping node "ha-174833-m04"  ...
	* Stopping node "ha-174833-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1030 18:46:36.663674  405318 out.go:345] Setting OutFile to fd 1 ...
	I1030 18:46:36.663911  405318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 18:46:36.663920  405318 out.go:358] Setting ErrFile to fd 2...
	I1030 18:46:36.663924  405318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 18:46:36.664132  405318 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19883-381834/.minikube/bin
	I1030 18:46:36.664344  405318 out.go:352] Setting JSON to false
	I1030 18:46:36.664425  405318 mustload.go:65] Loading cluster: ha-174833
	I1030 18:46:36.664779  405318 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:46:36.664873  405318 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/config.json ...
	I1030 18:46:36.665044  405318 mustload.go:65] Loading cluster: ha-174833
	I1030 18:46:36.665212  405318 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:46:36.665250  405318 stop.go:39] StopHost: ha-174833-m04
	I1030 18:46:36.665621  405318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:46:36.665670  405318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:46:36.680832  405318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35625
	I1030 18:46:36.681303  405318 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:46:36.681882  405318 main.go:141] libmachine: Using API Version  1
	I1030 18:46:36.681902  405318 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:46:36.682264  405318 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:46:36.684599  405318 out.go:177] * Stopping node "ha-174833-m04"  ...
	I1030 18:46:36.685696  405318 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1030 18:46:36.685722  405318 main.go:141] libmachine: (ha-174833-m04) Calling .DriverName
	I1030 18:46:36.685941  405318 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1030 18:46:36.685969  405318 main.go:141] libmachine: (ha-174833-m04) Calling .GetSSHHostname
	I1030 18:46:36.688859  405318 main.go:141] libmachine: (ha-174833-m04) DBG | domain ha-174833-m04 has defined MAC address 52:54:00:14:44:9f in network mk-ha-174833
	I1030 18:46:36.689239  405318 main.go:141] libmachine: (ha-174833-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:44:9f", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:43:00 +0000 UTC Type:0 Mac:52:54:00:14:44:9f Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-174833-m04 Clientid:01:52:54:00:14:44:9f}
	I1030 18:46:36.689268  405318 main.go:141] libmachine: (ha-174833-m04) DBG | domain ha-174833-m04 has defined IP address 192.168.39.123 and MAC address 52:54:00:14:44:9f in network mk-ha-174833
	I1030 18:46:36.689396  405318 main.go:141] libmachine: (ha-174833-m04) Calling .GetSSHPort
	I1030 18:46:36.689555  405318 main.go:141] libmachine: (ha-174833-m04) Calling .GetSSHKeyPath
	I1030 18:46:36.689724  405318 main.go:141] libmachine: (ha-174833-m04) Calling .GetSSHUsername
	I1030 18:46:36.689902  405318 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m04/id_rsa Username:docker}
	I1030 18:46:36.776967  405318 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1030 18:46:36.830106  405318 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1030 18:46:36.884266  405318 main.go:141] libmachine: Stopping "ha-174833-m04"...
	I1030 18:46:36.884295  405318 main.go:141] libmachine: (ha-174833-m04) Calling .GetState
	I1030 18:46:36.885826  405318 main.go:141] libmachine: (ha-174833-m04) Calling .Stop
	I1030 18:46:36.889220  405318 main.go:141] libmachine: (ha-174833-m04) Waiting for machine to stop 0/120
	I1030 18:46:38.071877  405318 main.go:141] libmachine: (ha-174833-m04) Calling .GetState
	I1030 18:46:38.073289  405318 main.go:141] libmachine: Machine "ha-174833-m04" was stopped.
	I1030 18:46:38.073308  405318 stop.go:75] duration metric: took 1.387612923s to stop
	I1030 18:46:38.073348  405318 stop.go:39] StopHost: ha-174833-m03
	I1030 18:46:38.073638  405318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:46:38.073678  405318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:46:38.088798  405318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35861
	I1030 18:46:38.089259  405318 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:46:38.089885  405318 main.go:141] libmachine: Using API Version  1
	I1030 18:46:38.089915  405318 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:46:38.090245  405318 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:46:38.092390  405318 out.go:177] * Stopping node "ha-174833-m03"  ...
	I1030 18:46:38.093637  405318 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1030 18:46:38.093660  405318 main.go:141] libmachine: (ha-174833-m03) Calling .DriverName
	I1030 18:46:38.093867  405318 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1030 18:46:38.093901  405318 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHHostname
	I1030 18:46:38.096991  405318 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:46:38.097378  405318 main.go:141] libmachine: (ha-174833-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:9d:ad", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:41:27 +0000 UTC Type:0 Mac:52:54:00:76:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-174833-m03 Clientid:01:52:54:00:76:9d:ad}
	I1030 18:46:38.097405  405318 main.go:141] libmachine: (ha-174833-m03) DBG | domain ha-174833-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:76:9d:ad in network mk-ha-174833
	I1030 18:46:38.097565  405318 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHPort
	I1030 18:46:38.097750  405318 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHKeyPath
	I1030 18:46:38.097940  405318 main.go:141] libmachine: (ha-174833-m03) Calling .GetSSHUsername
	I1030 18:46:38.098083  405318 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m03/id_rsa Username:docker}
	I1030 18:46:38.195660  405318 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1030 18:46:38.249750  405318 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1030 18:46:38.303942  405318 main.go:141] libmachine: Stopping "ha-174833-m03"...
	I1030 18:46:38.303970  405318 main.go:141] libmachine: (ha-174833-m03) Calling .GetState
	I1030 18:46:38.305693  405318 main.go:141] libmachine: (ha-174833-m03) Calling .Stop
	I1030 18:46:38.309255  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 0/120
	I1030 18:46:39.310929  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 1/120
	I1030 18:46:40.312308  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 2/120
	I1030 18:46:41.313630  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 3/120
	I1030 18:46:42.315068  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 4/120
	I1030 18:46:43.317173  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 5/120
	I1030 18:46:44.318763  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 6/120
	I1030 18:46:45.320188  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 7/120
	I1030 18:46:46.321840  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 8/120
	I1030 18:46:47.323301  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 9/120
	I1030 18:46:48.325412  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 10/120
	I1030 18:46:49.326846  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 11/120
	I1030 18:46:50.329305  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 12/120
	I1030 18:46:51.330802  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 13/120
	I1030 18:46:52.332454  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 14/120
	I1030 18:46:53.334649  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 15/120
	I1030 18:46:54.336190  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 16/120
	I1030 18:46:55.337656  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 17/120
	I1030 18:46:56.339160  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 18/120
	I1030 18:46:57.341094  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 19/120
	I1030 18:46:58.342682  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 20/120
	I1030 18:46:59.344512  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 21/120
	I1030 18:47:00.346459  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 22/120
	I1030 18:47:01.348088  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 23/120
	I1030 18:47:02.349859  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 24/120
	I1030 18:47:03.352326  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 25/120
	I1030 18:47:04.353745  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 26/120
	I1030 18:47:05.355532  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 27/120
	I1030 18:47:06.357081  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 28/120
	I1030 18:47:07.358878  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 29/120
	I1030 18:47:08.360682  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 30/120
	I1030 18:47:09.362194  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 31/120
	I1030 18:47:10.363679  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 32/120
	I1030 18:47:11.365116  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 33/120
	I1030 18:47:12.366538  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 34/120
	I1030 18:47:13.368621  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 35/120
	I1030 18:47:14.370012  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 36/120
	I1030 18:47:15.371464  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 37/120
	I1030 18:47:16.373065  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 38/120
	I1030 18:47:17.374270  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 39/120
	I1030 18:47:18.376108  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 40/120
	I1030 18:47:19.377436  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 41/120
	I1030 18:47:20.378853  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 42/120
	I1030 18:47:21.380244  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 43/120
	I1030 18:47:22.381966  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 44/120
	I1030 18:47:23.383702  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 45/120
	I1030 18:47:24.385316  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 46/120
	I1030 18:47:25.386654  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 47/120
	I1030 18:47:26.389059  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 48/120
	I1030 18:47:27.390392  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 49/120
	I1030 18:47:28.392256  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 50/120
	I1030 18:47:29.393653  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 51/120
	I1030 18:47:30.395026  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 52/120
	I1030 18:47:31.396865  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 53/120
	I1030 18:47:32.398178  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 54/120
	I1030 18:47:33.399878  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 55/120
	I1030 18:47:34.401373  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 56/120
	I1030 18:47:35.402647  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 57/120
	I1030 18:47:36.404133  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 58/120
	I1030 18:47:37.406078  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 59/120
	I1030 18:47:38.407794  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 60/120
	I1030 18:47:39.409113  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 61/120
	I1030 18:47:40.410696  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 62/120
	I1030 18:47:41.412060  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 63/120
	I1030 18:47:42.413503  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 64/120
	I1030 18:47:43.415394  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 65/120
	I1030 18:47:44.416660  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 66/120
	I1030 18:47:45.418023  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 67/120
	I1030 18:47:46.419390  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 68/120
	I1030 18:47:47.421087  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 69/120
	I1030 18:47:48.423302  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 70/120
	I1030 18:47:49.424583  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 71/120
	I1030 18:47:50.426044  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 72/120
	I1030 18:47:51.427614  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 73/120
	I1030 18:47:52.428975  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 74/120
	I1030 18:47:53.430665  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 75/120
	I1030 18:47:54.432072  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 76/120
	I1030 18:47:55.433442  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 77/120
	I1030 18:47:56.434796  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 78/120
	I1030 18:47:57.436058  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 79/120
	I1030 18:47:58.438055  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 80/120
	I1030 18:47:59.439435  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 81/120
	I1030 18:48:00.440620  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 82/120
	I1030 18:48:01.442024  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 83/120
	I1030 18:48:02.443285  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 84/120
	I1030 18:48:03.444470  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 85/120
	I1030 18:48:04.446325  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 86/120
	I1030 18:48:05.448291  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 87/120
	I1030 18:48:06.449638  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 88/120
	I1030 18:48:07.451134  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 89/120
	I1030 18:48:08.453131  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 90/120
	I1030 18:48:09.454477  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 91/120
	I1030 18:48:10.456003  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 92/120
	I1030 18:48:11.457363  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 93/120
	I1030 18:48:12.458819  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 94/120
	I1030 18:48:13.460357  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 95/120
	I1030 18:48:14.461817  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 96/120
	I1030 18:48:15.463455  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 97/120
	I1030 18:48:16.464984  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 98/120
	I1030 18:48:17.466393  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 99/120
	I1030 18:48:18.468598  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 100/120
	I1030 18:48:19.470273  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 101/120
	I1030 18:48:20.471657  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 102/120
	I1030 18:48:21.473091  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 103/120
	I1030 18:48:22.474588  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 104/120
	I1030 18:48:23.477135  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 105/120
	I1030 18:48:24.478581  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 106/120
	I1030 18:48:25.479929  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 107/120
	I1030 18:48:26.481616  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 108/120
	I1030 18:48:27.483171  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 109/120
	I1030 18:48:28.485004  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 110/120
	I1030 18:48:29.486419  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 111/120
	I1030 18:48:30.487943  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 112/120
	I1030 18:48:31.489356  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 113/120
	I1030 18:48:32.490880  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 114/120
	I1030 18:48:33.492625  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 115/120
	I1030 18:48:34.494135  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 116/120
	I1030 18:48:35.495432  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 117/120
	I1030 18:48:36.496892  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 118/120
	I1030 18:48:37.498203  405318 main.go:141] libmachine: (ha-174833-m03) Waiting for machine to stop 119/120
	I1030 18:48:38.499516  405318 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1030 18:48:38.499581  405318 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1030 18:48:38.501881  405318 out.go:201] 
	W1030 18:48:38.503298  405318 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1030 18:48:38.503319  405318 out.go:270] * 
	* 
	W1030 18:48:38.506501  405318 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 18:48:38.507696  405318 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:466: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-174833 -v=7 --alsologtostderr" : exit status 82
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-174833 --wait=true -v=7 --alsologtostderr
E1030 18:48:44.947045  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/functional-683899/client.crt: no such file or directory" logger="UnhandledError"
E1030 18:50:18.708708  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-174833 --wait=true -v=7 --alsologtostderr: (2m32.389057514s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-174833
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-174833 -n ha-174833
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-174833 logs -n 25: (2.38033037s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-174833 cp ha-174833-m03:/home/docker/cp-test.txt                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m02:/home/docker/cp-test_ha-174833-m03_ha-174833-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n ha-174833-m02 sudo cat                                          | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | /home/docker/cp-test_ha-174833-m03_ha-174833-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-174833 cp ha-174833-m03:/home/docker/cp-test.txt                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m04:/home/docker/cp-test_ha-174833-m03_ha-174833-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n ha-174833-m04 sudo cat                                          | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | /home/docker/cp-test_ha-174833-m03_ha-174833-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-174833 cp testdata/cp-test.txt                                                | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-174833 cp ha-174833-m04:/home/docker/cp-test.txt                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1983504479/001/cp-test_ha-174833-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-174833 cp ha-174833-m04:/home/docker/cp-test.txt                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833:/home/docker/cp-test_ha-174833-m04_ha-174833.txt                       |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n ha-174833 sudo cat                                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | /home/docker/cp-test_ha-174833-m04_ha-174833.txt                                 |           |         |         |                     |                     |
	| cp      | ha-174833 cp ha-174833-m04:/home/docker/cp-test.txt                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m02:/home/docker/cp-test_ha-174833-m04_ha-174833-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n ha-174833-m02 sudo cat                                          | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | /home/docker/cp-test_ha-174833-m04_ha-174833-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-174833 cp ha-174833-m04:/home/docker/cp-test.txt                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m03:/home/docker/cp-test_ha-174833-m04_ha-174833-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n ha-174833-m03 sudo cat                                          | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | /home/docker/cp-test_ha-174833-m04_ha-174833-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-174833 node stop m02 -v=7                                                     | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-174833 node start m02 -v=7                                                    | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:46 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-174833 -v=7                                                           | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:46 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-174833 -v=7                                                                | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:46 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-174833 --wait=true -v=7                                                    | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:48 UTC | 30 Oct 24 18:51 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-174833                                                                | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:51 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/30 18:48:38
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1030 18:48:38.564135  405809 out.go:345] Setting OutFile to fd 1 ...
	I1030 18:48:38.564325  405809 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 18:48:38.564336  405809 out.go:358] Setting ErrFile to fd 2...
	I1030 18:48:38.564343  405809 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 18:48:38.564547  405809 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19883-381834/.minikube/bin
	I1030 18:48:38.565143  405809 out.go:352] Setting JSON to false
	I1030 18:48:38.566160  405809 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":9062,"bootTime":1730305057,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1030 18:48:38.566224  405809 start.go:139] virtualization: kvm guest
	I1030 18:48:38.568588  405809 out.go:177] * [ha-174833] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1030 18:48:38.570197  405809 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 18:48:38.570271  405809 notify.go:220] Checking for updates...
	I1030 18:48:38.573244  405809 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 18:48:38.574708  405809 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 18:48:38.576261  405809 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 18:48:38.577906  405809 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1030 18:48:38.579147  405809 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 18:48:38.580782  405809 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:48:38.580885  405809 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 18:48:38.581360  405809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:48:38.581401  405809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:48:38.596964  405809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35203
	I1030 18:48:38.597464  405809 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:48:38.598028  405809 main.go:141] libmachine: Using API Version  1
	I1030 18:48:38.598050  405809 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:48:38.598394  405809 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:48:38.598594  405809 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:48:38.633062  405809 out.go:177] * Using the kvm2 driver based on existing profile
	I1030 18:48:38.634503  405809 start.go:297] selected driver: kvm2
	I1030 18:48:38.634523  405809 start.go:901] validating driver "kvm2" against &{Name:ha-174833 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.2 ClusterName:ha-174833 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.123 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false d
efault-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9
PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 18:48:38.634699  405809 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 18:48:38.635081  405809 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 18:48:38.635178  405809 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19883-381834/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1030 18:48:38.649994  405809 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
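At this point minikube has only confirmed that a docker-machine-driver-kvm2 binary is on PATH and that its version matches 1.34.0. A minimal host-side sketch of the same check, using the workspace path from the log (the /dev/kvm line is an extra assumption about the agent, not something the log itself verifies):

    # where the driver was found, and that KVM is usable on the agent
    command -v docker-machine-driver-kvm2
    ls -l /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 /dev/kvm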
	I1030 18:48:38.650696  405809 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 18:48:38.650734  405809 cni.go:84] Creating CNI manager for ""
	I1030 18:48:38.650801  405809 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1030 18:48:38.650855  405809 start.go:340] cluster config:
	{Name:ha-174833 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-174833 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.123 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:f
alse headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:
[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 18:48:38.650997  405809 iso.go:125] acquiring lock: {Name:mk4a5115b605a7a5e9069193daa664d721189792 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 18:48:38.653748  405809 out.go:177] * Starting "ha-174833" primary control-plane node in "ha-174833" cluster
	I1030 18:48:38.655251  405809 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 18:48:38.655299  405809 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1030 18:48:38.655311  405809 cache.go:56] Caching tarball of preloaded images
	I1030 18:48:38.655405  405809 preload.go:172] Found /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1030 18:48:38.655419  405809 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
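The preload step is a cache hit here: the v1.31.2/cri-o image tarball is already on disk, so nothing is downloaded. The same artifact can be confirmed by hand with the path copied from the log:

    # cached preload bundle that minikube reuses on this restart
    ls -lh /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4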
	I1030 18:48:38.655575  405809 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/config.json ...
	I1030 18:48:38.655799  405809 start.go:360] acquireMachinesLock for ha-174833: {Name:mk35e25f53fa8cfadb39ca0ecdccfc2b3fbe845b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 18:48:38.655845  405809 start.go:364] duration metric: took 26.841µs to acquireMachinesLock for "ha-174833"
	I1030 18:48:38.655866  405809 start.go:96] Skipping create...Using existing machine configuration
	I1030 18:48:38.655876  405809 fix.go:54] fixHost starting: 
	I1030 18:48:38.656176  405809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:48:38.656216  405809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:48:38.670439  405809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39091
	I1030 18:48:38.670989  405809 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:48:38.671613  405809 main.go:141] libmachine: Using API Version  1
	I1030 18:48:38.671638  405809 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:48:38.671955  405809 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:48:38.672139  405809 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:48:38.672264  405809 main.go:141] libmachine: (ha-174833) Calling .GetState
	I1030 18:48:38.673781  405809 fix.go:112] recreateIfNeeded on ha-174833: state=Running err=<nil>
	W1030 18:48:38.673815  405809 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 18:48:38.676700  405809 out.go:177] * Updating the running kvm2 "ha-174833" VM ...
	I1030 18:48:38.678040  405809 machine.go:93] provisionDockerMachine start ...
	I1030 18:48:38.678058  405809 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:48:38.678270  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:48:38.680670  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:38.681065  405809 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:48:38.681095  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:38.681219  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:48:38.681389  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:48:38.681520  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:48:38.681656  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:48:38.681782  405809 main.go:141] libmachine: Using SSH client type: native
	I1030 18:48:38.681964  405809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1030 18:48:38.681975  405809 main.go:141] libmachine: About to run SSH command:
	hostname
	I1030 18:48:38.791998  405809 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-174833
	
	I1030 18:48:38.792036  405809 main.go:141] libmachine: (ha-174833) Calling .GetMachineName
	I1030 18:48:38.792311  405809 buildroot.go:166] provisioning hostname "ha-174833"
	I1030 18:48:38.792344  405809 main.go:141] libmachine: (ha-174833) Calling .GetMachineName
	I1030 18:48:38.792512  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:48:38.795426  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:38.795880  405809 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:48:38.795910  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:38.796047  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:48:38.796264  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:48:38.796436  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:48:38.796620  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:48:38.796798  405809 main.go:141] libmachine: Using SSH client type: native
	I1030 18:48:38.797004  405809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1030 18:48:38.797031  405809 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-174833 && echo "ha-174833" | sudo tee /etc/hostname
	I1030 18:48:38.915388  405809 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-174833
	
	I1030 18:48:38.915420  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:48:38.918460  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:38.918911  405809 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:48:38.918948  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:38.919186  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:48:38.919442  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:48:38.919629  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:48:38.919799  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:48:38.919961  405809 main.go:141] libmachine: Using SSH client type: native
	I1030 18:48:38.920188  405809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1030 18:48:38.920205  405809 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-174833' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-174833/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-174833' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 18:48:39.027855  405809 main.go:141] libmachine: SSH cmd err, output: <nil>: 
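The two SSH commands above set the transient hostname and then make /etc/hosts map 127.0.1.1 to ha-174833, so the name resolves locally regardless of DHCP. A quick way to verify both from the agent, reusing the key and address shown in the log (a sketch, not part of the test flow):

    ssh -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa \
        docker@192.168.39.141 'hostname; grep ha-174833 /etc/hosts'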
	I1030 18:48:39.027887  405809 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 18:48:39.027912  405809 buildroot.go:174] setting up certificates
	I1030 18:48:39.027925  405809 provision.go:84] configureAuth start
	I1030 18:48:39.027936  405809 main.go:141] libmachine: (ha-174833) Calling .GetMachineName
	I1030 18:48:39.028205  405809 main.go:141] libmachine: (ha-174833) Calling .GetIP
	I1030 18:48:39.031149  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:39.031560  405809 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:48:39.031585  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:39.031666  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:48:39.033957  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:39.034283  405809 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:48:39.034309  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:39.034453  405809 provision.go:143] copyHostCerts
	I1030 18:48:39.034502  405809 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 18:48:39.034575  405809 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem, removing ...
	I1030 18:48:39.034588  405809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 18:48:39.034673  405809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 18:48:39.034800  405809 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 18:48:39.034825  405809 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem, removing ...
	I1030 18:48:39.034833  405809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 18:48:39.034870  405809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 18:48:39.035013  405809 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 18:48:39.035047  405809 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem, removing ...
	I1030 18:48:39.035064  405809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 18:48:39.035103  405809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 18:48:39.035190  405809 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.ha-174833 san=[127.0.0.1 192.168.39.141 ha-174833 localhost minikube]
	I1030 18:48:39.157422  405809 provision.go:177] copyRemoteCerts
	I1030 18:48:39.157485  405809 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 18:48:39.157509  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:48:39.160296  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:39.160721  405809 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:48:39.160742  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:39.160924  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:48:39.161092  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:48:39.161244  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:48:39.161330  405809 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:48:39.240866  405809 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1030 18:48:39.240935  405809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 18:48:39.267480  405809 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1030 18:48:39.267555  405809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1030 18:48:39.296126  405809 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1030 18:48:39.296198  405809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1030 18:48:39.323229  405809 provision.go:87] duration metric: took 295.289532ms to configureAuth
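configureAuth refreshed the host-side copies of ca/cert/key and signed a new server certificate whose SANs cover 127.0.0.1, 192.168.39.141, ha-174833, localhost and minikube, then pushed it to /etc/docker on the node. A sketch for inspecting the host-side copy that was just written (openssl on the Ubuntu agent is assumed):

    cert=/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem
    openssl x509 -in "$cert" -noout -subject -enddate
    openssl x509 -in "$cert" -noout -text | grep -A1 'Subject Alternative Name'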
	I1030 18:48:39.323262  405809 buildroot.go:189] setting minikube options for container-runtime
	I1030 18:48:39.323522  405809 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:48:39.323616  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:48:39.326586  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:39.327021  405809 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:48:39.327048  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:39.327264  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:48:39.327450  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:48:39.327634  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:48:39.327766  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:48:39.327926  405809 main.go:141] libmachine: Using SSH client type: native
	I1030 18:48:39.328096  405809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1030 18:48:39.328115  405809 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 18:48:44.937692  405809 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 18:48:44.937719  405809 machine.go:96] duration metric: took 6.25966635s to provisionDockerMachine
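The container-runtime option step writes CRIO_MINIKUBE_OPTIONS (here just --insecure-registry 10.96.0.0/12) to /etc/sysconfig/crio.minikube and restarts crio, which is why provisionDockerMachine takes over six seconds. Inside the VM (for example via out/minikube-linux-amd64 ssh -p ha-174833) the drop-in and the unit expected to source it can be checked as sketched below; whether the unit actually references that file is an assumption about the ISO that systemctl cat will confirm or refute:

    cat /etc/sysconfig/crio.minikube   # should contain the --insecure-registry flag
    systemctl cat crio                 # look for an EnvironmentFile/flag reference to the drop-in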
	I1030 18:48:44.937735  405809 start.go:293] postStartSetup for "ha-174833" (driver="kvm2")
	I1030 18:48:44.937746  405809 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 18:48:44.937767  405809 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:48:44.938146  405809 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 18:48:44.938175  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:48:44.940752  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:44.941022  405809 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:48:44.941065  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:44.941197  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:48:44.941411  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:48:44.941574  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:48:44.941724  405809 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:48:45.021249  405809 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 18:48:45.025660  405809 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 18:48:45.025691  405809 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 18:48:45.025757  405809 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 18:48:45.025840  405809 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 18:48:45.025853  405809 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> /etc/ssl/certs/3891442.pem
	I1030 18:48:45.025938  405809 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 18:48:45.035109  405809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 18:48:45.058527  405809 start.go:296] duration metric: took 120.777886ms for postStartSetup
	I1030 18:48:45.058571  405809 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:48:45.058867  405809 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1030 18:48:45.058896  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:48:45.061253  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:45.061585  405809 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:48:45.061618  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:45.061790  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:48:45.061972  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:48:45.062117  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:48:45.062289  405809 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	W1030 18:48:45.140747  405809 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1030 18:48:45.140776  405809 fix.go:56] duration metric: took 6.484902063s for fixHost
	I1030 18:48:45.140807  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:48:45.143222  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:45.143617  405809 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:48:45.143639  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:45.143807  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:48:45.144005  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:48:45.144177  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:48:45.144335  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:48:45.144503  405809 main.go:141] libmachine: Using SSH client type: native
	I1030 18:48:45.144669  405809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1030 18:48:45.144679  405809 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 18:48:45.247288  405809 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730314125.204909816
	
	I1030 18:48:45.247317  405809 fix.go:216] guest clock: 1730314125.204909816
	I1030 18:48:45.247324  405809 fix.go:229] Guest: 2024-10-30 18:48:45.204909816 +0000 UTC Remote: 2024-10-30 18:48:45.140790956 +0000 UTC m=+6.619784060 (delta=64.11886ms)
	I1030 18:48:45.247348  405809 fix.go:200] guest clock delta is within tolerance: 64.11886ms
	I1030 18:48:45.247353  405809 start.go:83] releasing machines lock for "ha-174833", held for 6.591496411s
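fix.go compares the guest clock (date +%s.%N over SSH) with the host clock at the moment the command returns: 1730314125.204909816 − 1730314125.140790956 ≈ 0.0641 s, i.e. the 64.11886ms delta logged above, so no time resync is forced. A rough manual version of the same skew check, run from the agent with the paths from the log:

    key=/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa
    guest=$(ssh -i "$key" docker@192.168.39.141 'date +%s.%N')
    host=$(date +%s.%N)
    awk -v g="$guest" -v h="$host" 'BEGIN { printf "guest-host delta: %.3f ms\n", (g - h) * 1000 }'

The SSH round trip adds a little noise, so expect a slightly larger number than the in-process measurement.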
	I1030 18:48:45.247372  405809 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:48:45.247676  405809 main.go:141] libmachine: (ha-174833) Calling .GetIP
	I1030 18:48:45.250380  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:45.250735  405809 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:48:45.250769  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:45.250890  405809 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:48:45.251548  405809 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:48:45.251724  405809 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:48:45.251830  405809 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 18:48:45.251868  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:48:45.251988  405809 ssh_runner.go:195] Run: cat /version.json
	I1030 18:48:45.252023  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:48:45.254186  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:45.254555  405809 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:48:45.254578  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:45.254597  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:45.254775  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:48:45.254978  405809 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:48:45.255000  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:45.255009  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:48:45.255130  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:48:45.255205  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:48:45.255294  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:48:45.255360  405809 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:48:45.255418  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:48:45.255541  405809 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:48:45.352407  405809 ssh_runner.go:195] Run: systemctl --version
	I1030 18:48:45.358445  405809 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 18:48:45.514593  405809 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 18:48:45.522449  405809 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 18:48:45.522542  405809 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 18:48:45.531569  405809 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1030 18:48:45.531589  405809 start.go:495] detecting cgroup driver to use...
	I1030 18:48:45.531643  405809 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 18:48:45.547166  405809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 18:48:45.561360  405809 docker.go:217] disabling cri-docker service (if available) ...
	I1030 18:48:45.561420  405809 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 18:48:45.574443  405809 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 18:48:45.587594  405809 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 18:48:45.725074  405809 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 18:48:45.863662  405809 docker.go:233] disabling docker service ...
	I1030 18:48:45.863747  405809 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 18:48:45.879212  405809 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 18:48:45.893161  405809 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 18:48:46.035262  405809 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 18:48:46.172338  405809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
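Before cri-o is configured, containerd is stopped and the docker/cri-docker units are stopped and masked so they cannot reclaim the CRI socket. Node-side confirmation is one command per question; masked/disabled and inactive are the expected answers:

    systemctl is-enabled docker.service docker.socket cri-docker.service cri-docker.socket
    systemctl is-active docker.service cri-docker.service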
	I1030 18:48:46.185584  405809 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 18:48:46.204870  405809 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1030 18:48:46.204946  405809 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:48:46.214939  405809 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 18:48:46.215007  405809 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:48:46.224869  405809 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:48:46.234575  405809 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:48:46.244189  405809 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 18:48:46.254255  405809 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:48:46.264010  405809 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:48:46.275071  405809 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:48:46.284990  405809 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 18:48:46.294149  405809 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 18:48:46.303524  405809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 18:48:46.439828  405809 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1030 18:48:52.179761  405809 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.739889395s)
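The preceding block rewrites /etc/crictl.yaml and /etc/crio/crio.conf.d/02-crio.conf (pause image registry.k8s.io/pause:3.10, cgroupfs as cgroup manager, conmon_cgroup = "pod", the unprivileged-port sysctl) and then restarts crio, which accounts for the 5.7s. A node-side sanity check of what the restarted daemon is using, sketched with the same paths:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    cat /etc/crictl.yaml
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version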
	I1030 18:48:52.179805  405809 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 18:48:52.179870  405809 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 18:48:52.185302  405809 start.go:563] Will wait 60s for crictl version
	I1030 18:48:52.185355  405809 ssh_runner.go:195] Run: which crictl
	I1030 18:48:52.190886  405809 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 18:48:52.225783  405809 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1030 18:48:52.225874  405809 ssh_runner.go:195] Run: crio --version
	I1030 18:48:52.255493  405809 ssh_runner.go:195] Run: crio --version
	I1030 18:48:52.286030  405809 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1030 18:48:52.287571  405809 main.go:141] libmachine: (ha-174833) Calling .GetIP
	I1030 18:48:52.290148  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:52.290585  405809 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:48:52.290606  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:52.290812  405809 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1030 18:48:52.295713  405809 kubeadm.go:883] updating cluster {Name:ha-174833 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-174833 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.123 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-stor
ageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1030 18:48:52.295891  405809 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 18:48:52.295950  405809 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 18:48:52.341835  405809 crio.go:514] all images are preloaded for cri-o runtime.
	I1030 18:48:52.341861  405809 crio.go:433] Images already preloaded, skipping extraction
	I1030 18:48:52.341913  405809 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 18:48:52.377228  405809 crio.go:514] all images are preloaded for cri-o runtime.
	I1030 18:48:52.377255  405809 cache_images.go:84] Images are preloaded, skipping loading
	I1030 18:48:52.377270  405809 kubeadm.go:934] updating node { 192.168.39.141 8443 v1.31.2 crio true true} ...
	I1030 18:48:52.377398  405809 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-174833 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.141
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-174833 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
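The ExecStart line above is rendered into the 309-byte 10-kubeadm.conf drop-in that is copied to /etc/systemd/system/kubelet.service.d/ a few lines further down. To see the merged unit as systemd has loaded it on the node:

    systemctl cat kubelet                          # base unit plus the 10-kubeadm.conf drop-in
    systemctl show kubelet --property=ExecStart    # the ExecStart value systemd has parsed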
	I1030 18:48:52.377496  405809 ssh_runner.go:195] Run: crio config
	I1030 18:48:52.427574  405809 cni.go:84] Creating CNI manager for ""
	I1030 18:48:52.427607  405809 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1030 18:48:52.427621  405809 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1030 18:48:52.427664  405809 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.141 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-174833 NodeName:ha-174833 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.141"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.141 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1030 18:48:52.427829  405809 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.141
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-174833"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.141"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.141"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
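The rendered kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new further down (2289 bytes). If anything in it looks suspect it can be checked against the bundled kubeadm before being applied; this is only a sketch and assumes the v1.31.2 kubeadm binary on the node ships the config validate subcommand:

    # run inside the VM once the file has been copied over
    sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new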
	
	I1030 18:48:52.427857  405809 kube-vip.go:115] generating kube-vip config ...
	I1030 18:48:52.427913  405809 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1030 18:48:52.440406  405809 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1030 18:48:52.440524  405809 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
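kube-vip runs as a static pod; with cp_enable and lb_enable set it announces the HA VIP 192.168.39.254 on eth0 via ARP and load-balances port 8443 across the control planes. Once kubelet has launched the manifest, two quick node-side checks; the /healthz probe assumes the default anonymous access rules for that path are still in effect:

    ip addr show eth0 | grep 192.168.39.254        # VIP should be bound on the elected leader
    curl -ks https://192.168.39.254:8443/healthz   # expect "ok" through the VIP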
	I1030 18:48:52.440582  405809 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1030 18:48:52.450600  405809 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 18:48:52.450666  405809 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1030 18:48:52.460260  405809 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1030 18:48:52.476571  405809 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 18:48:52.492782  405809 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1030 18:48:52.508573  405809 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1030 18:48:52.528895  405809 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1030 18:48:52.532947  405809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 18:48:52.671968  405809 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 18:48:52.687009  405809 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833 for IP: 192.168.39.141
	I1030 18:48:52.687034  405809 certs.go:194] generating shared ca certs ...
	I1030 18:48:52.687052  405809 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:48:52.687242  405809 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 18:48:52.687299  405809 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 18:48:52.687313  405809 certs.go:256] generating profile certs ...
	I1030 18:48:52.687417  405809 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.key
	I1030 18:48:52.687450  405809 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.b8dbd547
	I1030 18:48:52.687472  405809 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.b8dbd547 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.141 192.168.39.67 192.168.39.238 192.168.39.254]
	I1030 18:48:52.838941  405809 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.b8dbd547 ...
	I1030 18:48:52.838980  405809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.b8dbd547: {Name:mk5856b10a29cc4bdc3c17d5e90cfc2c8c466cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:48:52.839188  405809 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.b8dbd547 ...
	I1030 18:48:52.839205  405809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.b8dbd547: {Name:mkc99e20ca22843d24c345fcec0771c78bd2ed96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:48:52.839304  405809 certs.go:381] copying /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.b8dbd547 -> /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt
	I1030 18:48:52.839518  405809 certs.go:385] copying /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.b8dbd547 -> /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key
	I1030 18:48:52.839704  405809 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key
	I1030 18:48:52.839725  405809 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1030 18:48:52.839742  405809 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1030 18:48:52.839761  405809 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1030 18:48:52.839779  405809 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1030 18:48:52.839800  405809 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1030 18:48:52.839826  405809 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1030 18:48:52.839840  405809 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1030 18:48:52.839854  405809 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1030 18:48:52.839921  405809 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem (1338 bytes)
	W1030 18:48:52.839955  405809 certs.go:480] ignoring /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144_empty.pem, impossibly tiny 0 bytes
	I1030 18:48:52.839965  405809 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 18:48:52.839991  405809 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 18:48:52.840014  405809 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 18:48:52.840038  405809 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 18:48:52.840080  405809 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 18:48:52.840109  405809 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> /usr/share/ca-certificates/3891442.pem
	I1030 18:48:52.840123  405809 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:48:52.840135  405809 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem -> /usr/share/ca-certificates/389144.pem
	I1030 18:48:52.840837  405809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 18:48:52.866241  405809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 18:48:52.889998  405809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 18:48:52.914242  405809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 18:48:52.937565  405809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1030 18:48:52.961504  405809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1030 18:48:52.985113  405809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 18:48:53.008692  405809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1030 18:48:53.032147  405809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /usr/share/ca-certificates/3891442.pem (1708 bytes)
	I1030 18:48:53.055678  405809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 18:48:53.078148  405809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem --> /usr/share/ca-certificates/389144.pem (1338 bytes)
	I1030 18:48:53.101424  405809 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1030 18:48:53.117631  405809 ssh_runner.go:195] Run: openssl version
	I1030 18:48:53.123504  405809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3891442.pem && ln -fs /usr/share/ca-certificates/3891442.pem /etc/ssl/certs/3891442.pem"
	I1030 18:48:53.134261  405809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3891442.pem
	I1030 18:48:53.138656  405809 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 18:48:53.138712  405809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3891442.pem
	I1030 18:48:53.144313  405809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3891442.pem /etc/ssl/certs/3ec20f2e.0"
	I1030 18:48:53.153623  405809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 18:48:53.165094  405809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:48:53.169642  405809 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:48:53.169695  405809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:48:53.175484  405809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 18:48:53.185258  405809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/389144.pem && ln -fs /usr/share/ca-certificates/389144.pem /etc/ssl/certs/389144.pem"
	I1030 18:48:53.196105  405809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/389144.pem
	I1030 18:48:53.200467  405809 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 18:48:53.200505  405809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/389144.pem
	I1030 18:48:53.205961  405809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/389144.pem /etc/ssl/certs/51391683.0"
	I1030 18:48:53.215513  405809 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 18:48:53.219863  405809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1030 18:48:53.225424  405809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1030 18:48:53.230960  405809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1030 18:48:53.236446  405809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1030 18:48:53.242187  405809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1030 18:48:53.247733  405809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1030 18:48:53.253171  405809 kubeadm.go:392] StartCluster: {Name:ha-174833 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-174833 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.123 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storage
class:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 18:48:53.253353  405809 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1030 18:48:53.253421  405809 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 18:48:53.294279  405809 cri.go:89] found id: "7849ab28d6c691374252169550d3d021bd1631d65174aa6a807f5ebc3396154c"
	I1030 18:48:53.294318  405809 cri.go:89] found id: "e3c793bf4653d359eff5aba90cb22c8adf85630cb953511e47067946527a1eac"
	I1030 18:48:53.294325  405809 cri.go:89] found id: "a468c79700aa34918090d87cf32ed72f1d49f5b75dae53935cb3982ce827f5d5"
	I1030 18:48:53.294329  405809 cri.go:89] found id: "07374565cf0faf4679e84e01467f01d341a24035c230d69813103d9a9d744ec5"
	I1030 18:48:53.294334  405809 cri.go:89] found id: "b50f8293a0eac5d428cb2cdd59140c816100bc3a83890343a8ecc1a0e0699009"
	I1030 18:48:53.294338  405809 cri.go:89] found id: "80919506252b4e3df3db12ac62352ffcfc65a784996bbef7adcefc782480a86f"
	I1030 18:48:53.294343  405809 cri.go:89] found id: "46301d1401a148dee81e2e167582dcebc8e9ce533e3150308e7eb51799e4f1ef"
	I1030 18:48:53.294353  405809 cri.go:89] found id: "634060e657ba25894ea35d4e8b8429b57de139396a52f53af88876e683de5740"
	I1030 18:48:53.294360  405809 cri.go:89] found id: "da8b9126272c4605fcadd37917b2874bb833efc9c16584c83ac4d1307f59c62a"
	I1030 18:48:53.294373  405809 cri.go:89] found id: "6f0fb508f1f86d4bc039adab40689367f34f5141a90043b092e43663b1f959a6"
	I1030 18:48:53.294380  405809 cri.go:89] found id: "db863ebdc17e0d81937b5738a5fe433285fd91846fa54c761cf0cd4cd4987b73"
	I1030 18:48:53.294385  405809 cri.go:89] found id: "381be95e92ca6d096c838a11caffeef70b5bdf29633660086db3a591af3efa3c"
	I1030 18:48:53.294391  405809 cri.go:89] found id: "661ed7108dbf576219f0d0850f1ff6e5884d78718a480882365b8f283ab57acb"
	I1030 18:48:53.294397  405809 cri.go:89] found id: ""
	I1030 18:48:53.294452  405809 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-174833 -n ha-174833
helpers_test.go:261: (dbg) Run:  kubectl --context ha-174833 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (277.41s)
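The restart path above regenerates the apiserver serving certificate so that its SANs cover the in-cluster service IPs, all three control-plane addresses (192.168.39.141, .67, .238) and the kube-vip VIP 192.168.39.254, then copies it to /var/lib/minikube/certs/apiserver.crt on the node. When triaging a failure like this one, a small stand-alone check of the SANs that actually got baked into that certificate can look like the sketch below (illustrative only, not part of the harness; the path is the build-host copy named in the log lines above):

// Minimal sketch: print the SANs of the regenerated apiserver certificate.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Build-host copy of the cert, as written by the log lines above (profile ha-174833).
	path := "/home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Expect the control-plane IPs and the kube-vip VIP 192.168.39.254 in the IP SAN list.
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs:", cert.IPAddresses)
}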

                                                
                                    
TestMultiControlPlane/serial/StopCluster (158.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 stop -v=7 --alsologtostderr
E1030 18:51:41.780544  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.crt: no such file or directory" logger="UnhandledError"
E1030 18:53:17.243091  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/functional-683899/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-174833 stop -v=7 --alsologtostderr: exit status 82 (2m1.896712898s)

                                                
                                                
-- stdout --
	* Stopping node "ha-174833-m04"  ...
	* Stopping node "ha-174833-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1030 18:51:31.335310  407250 out.go:345] Setting OutFile to fd 1 ...
	I1030 18:51:31.335434  407250 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 18:51:31.335445  407250 out.go:358] Setting ErrFile to fd 2...
	I1030 18:51:31.335449  407250 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 18:51:31.335636  407250 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19883-381834/.minikube/bin
	I1030 18:51:31.335933  407250 out.go:352] Setting JSON to false
	I1030 18:51:31.336040  407250 mustload.go:65] Loading cluster: ha-174833
	I1030 18:51:31.336488  407250 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:51:31.336590  407250 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/config.json ...
	I1030 18:51:31.336792  407250 mustload.go:65] Loading cluster: ha-174833
	I1030 18:51:31.336971  407250 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:51:31.337014  407250 stop.go:39] StopHost: ha-174833-m04
	I1030 18:51:31.337629  407250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:51:31.337687  407250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:51:31.352387  407250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46829
	I1030 18:51:31.352952  407250 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:51:31.353571  407250 main.go:141] libmachine: Using API Version  1
	I1030 18:51:31.353596  407250 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:51:31.354006  407250 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:51:31.357556  407250 out.go:177] * Stopping node "ha-174833-m04"  ...
	I1030 18:51:31.358998  407250 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1030 18:51:31.359023  407250 main.go:141] libmachine: (ha-174833-m04) Calling .DriverName
	I1030 18:51:31.359250  407250 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1030 18:51:31.359283  407250 main.go:141] libmachine: (ha-174833-m04) Calling .GetSSHHostname
	I1030 18:51:31.361950  407250 main.go:141] libmachine: (ha-174833-m04) DBG | domain ha-174833-m04 has defined MAC address 52:54:00:14:44:9f in network mk-ha-174833
	I1030 18:51:31.362404  407250 main.go:141] libmachine: (ha-174833-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:44:9f", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:50:59 +0000 UTC Type:0 Mac:52:54:00:14:44:9f Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-174833-m04 Clientid:01:52:54:00:14:44:9f}
	I1030 18:51:31.362436  407250 main.go:141] libmachine: (ha-174833-m04) DBG | domain ha-174833-m04 has defined IP address 192.168.39.123 and MAC address 52:54:00:14:44:9f in network mk-ha-174833
	I1030 18:51:31.362650  407250 main.go:141] libmachine: (ha-174833-m04) Calling .GetSSHPort
	I1030 18:51:31.362809  407250 main.go:141] libmachine: (ha-174833-m04) Calling .GetSSHKeyPath
	I1030 18:51:31.362965  407250 main.go:141] libmachine: (ha-174833-m04) Calling .GetSSHUsername
	I1030 18:51:31.363106  407250 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m04/id_rsa Username:docker}
	I1030 18:51:31.444406  407250 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1030 18:51:31.497366  407250 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1030 18:51:31.549604  407250 main.go:141] libmachine: Stopping "ha-174833-m04"...
	I1030 18:51:31.549640  407250 main.go:141] libmachine: (ha-174833-m04) Calling .GetState
	I1030 18:51:31.551291  407250 main.go:141] libmachine: (ha-174833-m04) Calling .Stop
	I1030 18:51:31.554326  407250 main.go:141] libmachine: (ha-174833-m04) Waiting for machine to stop 0/120
	I1030 18:51:32.757655  407250 main.go:141] libmachine: (ha-174833-m04) Calling .GetState
	I1030 18:51:32.758981  407250 main.go:141] libmachine: Machine "ha-174833-m04" was stopped.
	I1030 18:51:32.759000  407250 stop.go:75] duration metric: took 1.400003221s to stop
	I1030 18:51:32.759023  407250 stop.go:39] StopHost: ha-174833-m02
	I1030 18:51:32.759411  407250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:51:32.759461  407250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:51:32.774453  407250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40589
	I1030 18:51:32.774938  407250 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:51:32.775429  407250 main.go:141] libmachine: Using API Version  1
	I1030 18:51:32.775447  407250 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:51:32.775761  407250 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:51:32.777670  407250 out.go:177] * Stopping node "ha-174833-m02"  ...
	I1030 18:51:32.778829  407250 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1030 18:51:32.778853  407250 main.go:141] libmachine: (ha-174833-m02) Calling .DriverName
	I1030 18:51:32.779063  407250 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1030 18:51:32.779089  407250 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHHostname
	I1030 18:51:32.781478  407250 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:51:32.781802  407250 main.go:141] libmachine: (ha-174833-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fa:1a", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:49:04 +0000 UTC Type:0 Mac:52:54:00:87:fa:1a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-174833-m02 Clientid:01:52:54:00:87:fa:1a}
	I1030 18:51:32.781832  407250 main.go:141] libmachine: (ha-174833-m02) DBG | domain ha-174833-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:87:fa:1a in network mk-ha-174833
	I1030 18:51:32.781956  407250 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHPort
	I1030 18:51:32.782113  407250 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHKeyPath
	I1030 18:51:32.782273  407250 main.go:141] libmachine: (ha-174833-m02) Calling .GetSSHUsername
	I1030 18:51:32.782419  407250 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833-m02/id_rsa Username:docker}
	I1030 18:51:32.865778  407250 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1030 18:51:32.920083  407250 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1030 18:51:32.973414  407250 main.go:141] libmachine: Stopping "ha-174833-m02"...
	I1030 18:51:32.973439  407250 main.go:141] libmachine: (ha-174833-m02) Calling .GetState
	I1030 18:51:32.975112  407250 main.go:141] libmachine: (ha-174833-m02) Calling .Stop
	I1030 18:51:32.978556  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 0/120
	I1030 18:51:33.979825  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 1/120
	I1030 18:51:34.981138  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 2/120
	I1030 18:51:35.982392  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 3/120
	I1030 18:51:36.983712  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 4/120
	I1030 18:51:37.985941  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 5/120
	I1030 18:51:38.987378  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 6/120
	I1030 18:51:39.988960  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 7/120
	I1030 18:51:40.990449  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 8/120
	I1030 18:51:41.991909  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 9/120
	I1030 18:51:42.993747  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 10/120
	I1030 18:51:43.995321  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 11/120
	I1030 18:51:44.996934  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 12/120
	I1030 18:51:45.998838  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 13/120
	I1030 18:51:47.001047  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 14/120
	I1030 18:51:48.002742  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 15/120
	I1030 18:51:49.004423  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 16/120
	I1030 18:51:50.005933  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 17/120
	I1030 18:51:51.008255  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 18/120
	I1030 18:51:52.009615  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 19/120
	I1030 18:51:53.011361  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 20/120
	I1030 18:51:54.013337  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 21/120
	I1030 18:51:55.014753  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 22/120
	I1030 18:51:56.016336  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 23/120
	I1030 18:51:57.017723  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 24/120
	I1030 18:51:58.019563  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 25/120
	I1030 18:51:59.021402  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 26/120
	I1030 18:52:00.022793  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 27/120
	I1030 18:52:01.024514  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 28/120
	I1030 18:52:02.025935  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 29/120
	I1030 18:52:03.028091  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 30/120
	I1030 18:52:04.029915  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 31/120
	I1030 18:52:05.031405  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 32/120
	I1030 18:52:06.033262  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 33/120
	I1030 18:52:07.035108  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 34/120
	I1030 18:52:08.037093  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 35/120
	I1030 18:52:09.038600  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 36/120
	I1030 18:52:10.040079  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 37/120
	I1030 18:52:11.041434  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 38/120
	I1030 18:52:12.042996  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 39/120
	I1030 18:52:13.044906  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 40/120
	I1030 18:52:14.046169  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 41/120
	I1030 18:52:15.047614  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 42/120
	I1030 18:52:16.048910  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 43/120
	I1030 18:52:17.050400  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 44/120
	I1030 18:52:18.052220  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 45/120
	I1030 18:52:19.053670  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 46/120
	I1030 18:52:20.055141  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 47/120
	I1030 18:52:21.056639  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 48/120
	I1030 18:52:22.058135  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 49/120
	I1030 18:52:23.059949  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 50/120
	I1030 18:52:24.061840  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 51/120
	I1030 18:52:25.063227  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 52/120
	I1030 18:52:26.065183  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 53/120
	I1030 18:52:27.066860  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 54/120
	I1030 18:52:28.068956  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 55/120
	I1030 18:52:29.070310  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 56/120
	I1030 18:52:30.071730  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 57/120
	I1030 18:52:31.073202  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 58/120
	I1030 18:52:32.074851  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 59/120
	I1030 18:52:33.076689  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 60/120
	I1030 18:52:34.077919  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 61/120
	I1030 18:52:35.079415  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 62/120
	I1030 18:52:36.080824  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 63/120
	I1030 18:52:37.082127  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 64/120
	I1030 18:52:38.083602  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 65/120
	I1030 18:52:39.085565  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 66/120
	I1030 18:52:40.087126  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 67/120
	I1030 18:52:41.089102  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 68/120
	I1030 18:52:42.090455  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 69/120
	I1030 18:52:43.092208  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 70/120
	I1030 18:52:44.093524  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 71/120
	I1030 18:52:45.095010  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 72/120
	I1030 18:52:46.096214  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 73/120
	I1030 18:52:47.097542  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 74/120
	I1030 18:52:48.099367  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 75/120
	I1030 18:52:49.100945  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 76/120
	I1030 18:52:50.102376  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 77/120
	I1030 18:52:51.103692  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 78/120
	I1030 18:52:52.104953  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 79/120
	I1030 18:52:53.106773  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 80/120
	I1030 18:52:54.108155  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 81/120
	I1030 18:52:55.109532  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 82/120
	I1030 18:52:56.110839  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 83/120
	I1030 18:52:57.112417  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 84/120
	I1030 18:52:58.114346  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 85/120
	I1030 18:52:59.115822  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 86/120
	I1030 18:53:00.117099  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 87/120
	I1030 18:53:01.118446  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 88/120
	I1030 18:53:02.119917  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 89/120
	I1030 18:53:03.121833  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 90/120
	I1030 18:53:04.123359  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 91/120
	I1030 18:53:05.124966  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 92/120
	I1030 18:53:06.126550  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 93/120
	I1030 18:53:07.127887  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 94/120
	I1030 18:53:08.129575  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 95/120
	I1030 18:53:09.131110  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 96/120
	I1030 18:53:10.132453  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 97/120
	I1030 18:53:11.133898  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 98/120
	I1030 18:53:12.135310  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 99/120
	I1030 18:53:13.136933  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 100/120
	I1030 18:53:14.138421  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 101/120
	I1030 18:53:15.139785  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 102/120
	I1030 18:53:16.142140  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 103/120
	I1030 18:53:17.143504  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 104/120
	I1030 18:53:18.145582  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 105/120
	I1030 18:53:19.147492  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 106/120
	I1030 18:53:20.148982  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 107/120
	I1030 18:53:21.150576  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 108/120
	I1030 18:53:22.152073  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 109/120
	I1030 18:53:23.154019  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 110/120
	I1030 18:53:24.155618  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 111/120
	I1030 18:53:25.157143  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 112/120
	I1030 18:53:26.158567  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 113/120
	I1030 18:53:27.160120  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 114/120
	I1030 18:53:28.161754  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 115/120
	I1030 18:53:29.163582  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 116/120
	I1030 18:53:30.165090  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 117/120
	I1030 18:53:31.166550  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 118/120
	I1030 18:53:32.167917  407250 main.go:141] libmachine: (ha-174833-m02) Waiting for machine to stop 119/120
	I1030 18:53:33.168903  407250 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1030 18:53:33.168979  407250 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1030 18:53:33.171462  407250 out.go:201] 
	W1030 18:53:33.173029  407250 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1030 18:53:33.173052  407250 out.go:270] * 
	* 
	W1030 18:53:33.176818  407250 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 18:53:33.178271  407250 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:535: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-174833 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Done: out/minikube-linux-amd64 -p ha-174833 status -v=7 --alsologtostderr: (34.178197151s)
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-174833 status -v=7 --alsologtostderr": 
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-174833 status -v=7 --alsologtostderr": 
ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-174833 status -v=7 --alsologtostderr": 
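For context on the exit status 82 above: the stderr shows minikube backing up /etc/cni and /etc/kubernetes, asking the kvm2 driver to stop each VM, and then polling the machine state once per second for 120 attempts. ha-174833-m04 reached Stopped after one poll, but ha-174833-m02 still reported Running when the budget ran out, which surfaces as GUEST_STOP_TIMEOUT. The sketch below (illustrative only, not minikube's actual implementation) reproduces that polling pattern:

// Sketch of the poll-until-stopped loop visible in the "Waiting for machine to stop N/120" lines.
package main

import (
	"errors"
	"fmt"
	"time"
)

// vmState stands in for the libmachine driver's GetState call; it is a hypothetical helper,
// pinned to "Running" here to reproduce the timeout path seen for ha-174833-m02.
func vmState() string { return "Running" }

// stopWithTimeout polls once per second for the given number of attempts.
func stopWithTimeout(attempts int) error {
	for i := 0; i < attempts; i++ {
		if vmState() != "Running" {
			return nil // the VM reached Stopped, as ha-174833-m04 did after one poll
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	if err := stopWithTimeout(120); err != nil {
		// minikube maps this condition to GUEST_STOP_TIMEOUT / exit status 82.
		fmt.Println("stop err:", err)
	}
}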
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-174833 -n ha-174833
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-174833 -n ha-174833: exit status 2 (236.022381ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-174833 logs -n 25: (1.666672382s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-174833 ssh -n ha-174833-m02 sudo cat                                          | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | /home/docker/cp-test_ha-174833-m03_ha-174833-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-174833 cp ha-174833-m03:/home/docker/cp-test.txt                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m04:/home/docker/cp-test_ha-174833-m03_ha-174833-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n ha-174833-m04 sudo cat                                          | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | /home/docker/cp-test_ha-174833-m03_ha-174833-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-174833 cp testdata/cp-test.txt                                                | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-174833 cp ha-174833-m04:/home/docker/cp-test.txt                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1983504479/001/cp-test_ha-174833-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-174833 cp ha-174833-m04:/home/docker/cp-test.txt                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833:/home/docker/cp-test_ha-174833-m04_ha-174833.txt                       |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n ha-174833 sudo cat                                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | /home/docker/cp-test_ha-174833-m04_ha-174833.txt                                 |           |         |         |                     |                     |
	| cp      | ha-174833 cp ha-174833-m04:/home/docker/cp-test.txt                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m02:/home/docker/cp-test_ha-174833-m04_ha-174833-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n ha-174833-m02 sudo cat                                          | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | /home/docker/cp-test_ha-174833-m04_ha-174833-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-174833 cp ha-174833-m04:/home/docker/cp-test.txt                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m03:/home/docker/cp-test_ha-174833-m04_ha-174833-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n                                                                 | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | ha-174833-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174833 ssh -n ha-174833-m03 sudo cat                                          | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC | 30 Oct 24 18:43 UTC |
	|         | /home/docker/cp-test_ha-174833-m04_ha-174833-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-174833 node stop m02 -v=7                                                     | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:43 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-174833 node start m02 -v=7                                                    | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:46 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-174833 -v=7                                                           | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:46 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-174833 -v=7                                                                | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:46 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-174833 --wait=true -v=7                                                    | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:48 UTC | 30 Oct 24 18:51 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-174833                                                                | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:51 UTC |                     |
	| node    | ha-174833 node delete m03 -v=7                                                   | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:51 UTC | 30 Oct 24 18:51 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-174833 stop -v=7                                                              | ha-174833 | jenkins | v1.34.0 | 30 Oct 24 18:51 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/30 18:48:38
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1030 18:48:38.564135  405809 out.go:345] Setting OutFile to fd 1 ...
	I1030 18:48:38.564325  405809 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 18:48:38.564336  405809 out.go:358] Setting ErrFile to fd 2...
	I1030 18:48:38.564343  405809 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 18:48:38.564547  405809 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19883-381834/.minikube/bin
	I1030 18:48:38.565143  405809 out.go:352] Setting JSON to false
	I1030 18:48:38.566160  405809 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":9062,"bootTime":1730305057,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1030 18:48:38.566224  405809 start.go:139] virtualization: kvm guest
	I1030 18:48:38.568588  405809 out.go:177] * [ha-174833] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1030 18:48:38.570197  405809 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 18:48:38.570271  405809 notify.go:220] Checking for updates...
	I1030 18:48:38.573244  405809 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 18:48:38.574708  405809 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 18:48:38.576261  405809 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 18:48:38.577906  405809 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1030 18:48:38.579147  405809 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 18:48:38.580782  405809 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:48:38.580885  405809 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 18:48:38.581360  405809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:48:38.581401  405809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:48:38.596964  405809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35203
	I1030 18:48:38.597464  405809 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:48:38.598028  405809 main.go:141] libmachine: Using API Version  1
	I1030 18:48:38.598050  405809 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:48:38.598394  405809 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:48:38.598594  405809 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:48:38.633062  405809 out.go:177] * Using the kvm2 driver based on existing profile
	I1030 18:48:38.634503  405809 start.go:297] selected driver: kvm2
	I1030 18:48:38.634523  405809 start.go:901] validating driver "kvm2" against &{Name:ha-174833 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.2 ClusterName:ha-174833 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.123 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false d
efault-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9
PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 18:48:38.634699  405809 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 18:48:38.635081  405809 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 18:48:38.635178  405809 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19883-381834/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1030 18:48:38.649994  405809 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1030 18:48:38.650696  405809 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 18:48:38.650734  405809 cni.go:84] Creating CNI manager for ""
	I1030 18:48:38.650801  405809 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1030 18:48:38.650855  405809 start.go:340] cluster config:
	{Name:ha-174833 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-174833 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.123 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:f
alse headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:
[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 18:48:38.650997  405809 iso.go:125] acquiring lock: {Name:mk4a5115b605a7a5e9069193daa664d721189792 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 18:48:38.653748  405809 out.go:177] * Starting "ha-174833" primary control-plane node in "ha-174833" cluster
	I1030 18:48:38.655251  405809 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 18:48:38.655299  405809 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1030 18:48:38.655311  405809 cache.go:56] Caching tarball of preloaded images
	I1030 18:48:38.655405  405809 preload.go:172] Found /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1030 18:48:38.655419  405809 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1030 18:48:38.655575  405809 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/config.json ...
	I1030 18:48:38.655799  405809 start.go:360] acquireMachinesLock for ha-174833: {Name:mk35e25f53fa8cfadb39ca0ecdccfc2b3fbe845b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 18:48:38.655845  405809 start.go:364] duration metric: took 26.841µs to acquireMachinesLock for "ha-174833"
	I1030 18:48:38.655866  405809 start.go:96] Skipping create...Using existing machine configuration
	I1030 18:48:38.655876  405809 fix.go:54] fixHost starting: 
	I1030 18:48:38.656176  405809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:48:38.656216  405809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:48:38.670439  405809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39091
	I1030 18:48:38.670989  405809 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:48:38.671613  405809 main.go:141] libmachine: Using API Version  1
	I1030 18:48:38.671638  405809 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:48:38.671955  405809 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:48:38.672139  405809 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:48:38.672264  405809 main.go:141] libmachine: (ha-174833) Calling .GetState
	I1030 18:48:38.673781  405809 fix.go:112] recreateIfNeeded on ha-174833: state=Running err=<nil>
	W1030 18:48:38.673815  405809 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 18:48:38.676700  405809 out.go:177] * Updating the running kvm2 "ha-174833" VM ...
	I1030 18:48:38.678040  405809 machine.go:93] provisionDockerMachine start ...
	I1030 18:48:38.678058  405809 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:48:38.678270  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:48:38.680670  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:38.681065  405809 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:48:38.681095  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:38.681219  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:48:38.681389  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:48:38.681520  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:48:38.681656  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:48:38.681782  405809 main.go:141] libmachine: Using SSH client type: native
	I1030 18:48:38.681964  405809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1030 18:48:38.681975  405809 main.go:141] libmachine: About to run SSH command:
	hostname
	I1030 18:48:38.791998  405809 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-174833
	
	I1030 18:48:38.792036  405809 main.go:141] libmachine: (ha-174833) Calling .GetMachineName
	I1030 18:48:38.792311  405809 buildroot.go:166] provisioning hostname "ha-174833"
	I1030 18:48:38.792344  405809 main.go:141] libmachine: (ha-174833) Calling .GetMachineName
	I1030 18:48:38.792512  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:48:38.795426  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:38.795880  405809 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:48:38.795910  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:38.796047  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:48:38.796264  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:48:38.796436  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:48:38.796620  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:48:38.796798  405809 main.go:141] libmachine: Using SSH client type: native
	I1030 18:48:38.797004  405809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1030 18:48:38.797031  405809 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-174833 && echo "ha-174833" | sudo tee /etc/hostname
	I1030 18:48:38.915388  405809 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-174833
	
	I1030 18:48:38.915420  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:48:38.918460  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:38.918911  405809 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:48:38.918948  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:38.919186  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:48:38.919442  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:48:38.919629  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:48:38.919799  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:48:38.919961  405809 main.go:141] libmachine: Using SSH client type: native
	I1030 18:48:38.920188  405809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1030 18:48:38.920205  405809 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-174833' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-174833/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-174833' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 18:48:39.027855  405809 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 18:48:39.027887  405809 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 18:48:39.027912  405809 buildroot.go:174] setting up certificates
	I1030 18:48:39.027925  405809 provision.go:84] configureAuth start
	I1030 18:48:39.027936  405809 main.go:141] libmachine: (ha-174833) Calling .GetMachineName
	I1030 18:48:39.028205  405809 main.go:141] libmachine: (ha-174833) Calling .GetIP
	I1030 18:48:39.031149  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:39.031560  405809 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:48:39.031585  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:39.031666  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:48:39.033957  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:39.034283  405809 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:48:39.034309  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:39.034453  405809 provision.go:143] copyHostCerts
	I1030 18:48:39.034502  405809 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 18:48:39.034575  405809 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem, removing ...
	I1030 18:48:39.034588  405809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 18:48:39.034673  405809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 18:48:39.034800  405809 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 18:48:39.034825  405809 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem, removing ...
	I1030 18:48:39.034833  405809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 18:48:39.034870  405809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 18:48:39.035013  405809 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 18:48:39.035047  405809 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem, removing ...
	I1030 18:48:39.035064  405809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 18:48:39.035103  405809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 18:48:39.035190  405809 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.ha-174833 san=[127.0.0.1 192.168.39.141 ha-174833 localhost minikube]
	I1030 18:48:39.157422  405809 provision.go:177] copyRemoteCerts
	I1030 18:48:39.157485  405809 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 18:48:39.157509  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:48:39.160296  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:39.160721  405809 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:48:39.160742  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:39.160924  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:48:39.161092  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:48:39.161244  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:48:39.161330  405809 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:48:39.240866  405809 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1030 18:48:39.240935  405809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 18:48:39.267480  405809 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1030 18:48:39.267555  405809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1030 18:48:39.296126  405809 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1030 18:48:39.296198  405809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1030 18:48:39.323229  405809 provision.go:87] duration metric: took 295.289532ms to configureAuth
	I1030 18:48:39.323262  405809 buildroot.go:189] setting minikube options for container-runtime
	I1030 18:48:39.323522  405809 config.go:182] Loaded profile config "ha-174833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:48:39.323616  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:48:39.326586  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:39.327021  405809 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:48:39.327048  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:39.327264  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:48:39.327450  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:48:39.327634  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:48:39.327766  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:48:39.327926  405809 main.go:141] libmachine: Using SSH client type: native
	I1030 18:48:39.328096  405809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1030 18:48:39.328115  405809 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 18:48:44.937692  405809 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 18:48:44.937719  405809 machine.go:96] duration metric: took 6.25966635s to provisionDockerMachine
	I1030 18:48:44.937735  405809 start.go:293] postStartSetup for "ha-174833" (driver="kvm2")
	I1030 18:48:44.937746  405809 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 18:48:44.937767  405809 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:48:44.938146  405809 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 18:48:44.938175  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:48:44.940752  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:44.941022  405809 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:48:44.941065  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:44.941197  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:48:44.941411  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:48:44.941574  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:48:44.941724  405809 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:48:45.021249  405809 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 18:48:45.025660  405809 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 18:48:45.025691  405809 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 18:48:45.025757  405809 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 18:48:45.025840  405809 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 18:48:45.025853  405809 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> /etc/ssl/certs/3891442.pem
	I1030 18:48:45.025938  405809 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 18:48:45.035109  405809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 18:48:45.058527  405809 start.go:296] duration metric: took 120.777886ms for postStartSetup
	I1030 18:48:45.058571  405809 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:48:45.058867  405809 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1030 18:48:45.058896  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:48:45.061253  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:45.061585  405809 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:48:45.061618  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:45.061790  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:48:45.061972  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:48:45.062117  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:48:45.062289  405809 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	W1030 18:48:45.140747  405809 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1030 18:48:45.140776  405809 fix.go:56] duration metric: took 6.484902063s for fixHost
	I1030 18:48:45.140807  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:48:45.143222  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:45.143617  405809 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:48:45.143639  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:45.143807  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:48:45.144005  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:48:45.144177  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:48:45.144335  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:48:45.144503  405809 main.go:141] libmachine: Using SSH client type: native
	I1030 18:48:45.144669  405809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1030 18:48:45.144679  405809 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 18:48:45.247288  405809 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730314125.204909816
	
	I1030 18:48:45.247317  405809 fix.go:216] guest clock: 1730314125.204909816
	I1030 18:48:45.247324  405809 fix.go:229] Guest: 2024-10-30 18:48:45.204909816 +0000 UTC Remote: 2024-10-30 18:48:45.140790956 +0000 UTC m=+6.619784060 (delta=64.11886ms)
	I1030 18:48:45.247348  405809 fix.go:200] guest clock delta is within tolerance: 64.11886ms
	I1030 18:48:45.247353  405809 start.go:83] releasing machines lock for "ha-174833", held for 6.591496411s
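Note: the fixHost step above reads the guest clock with date +%s.%N and only resynchronizes when the guest/host delta exceeds a tolerance (here the delta is 64.11886ms, so no resync is needed). The Go sketch below reproduces that comparison using the exact values from the log; the one-second tolerance is an assumption for illustration only and this is not minikube's fix.go implementation.

// clockskew_sketch.go - illustrative only; not minikube's fix.go code.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock parses the output of `date +%s.%N` (seconds.nanoseconds)
// into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// pad/truncate the fractional part to exactly 9 digits (nanoseconds)
		frac := (parts[1] + "000000000")[:9]
		nsec, err = strconv.ParseInt(frac, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// Values taken from the log above.
	guest, err := parseGuestClock("1730314125.204909816")
	if err != nil {
		panic(err)
	}
	remote := time.Date(2024, 10, 30, 18, 48, 45, 140790956, time.UTC)

	const tolerance = time.Second // assumed threshold, not minikube's actual value
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("delta=%v withinTolerance=%v\n", delta, delta <= tolerance)
	// prints: delta=64.11886ms withinTolerance=true
}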
	I1030 18:48:45.247372  405809 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:48:45.247676  405809 main.go:141] libmachine: (ha-174833) Calling .GetIP
	I1030 18:48:45.250380  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:45.250735  405809 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:48:45.250769  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:45.250890  405809 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:48:45.251548  405809 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:48:45.251724  405809 main.go:141] libmachine: (ha-174833) Calling .DriverName
	I1030 18:48:45.251830  405809 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 18:48:45.251868  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:48:45.251988  405809 ssh_runner.go:195] Run: cat /version.json
	I1030 18:48:45.252023  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHHostname
	I1030 18:48:45.254186  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:45.254555  405809 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:48:45.254578  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:45.254597  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:45.254775  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:48:45.254978  405809 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:48:45.255000  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:45.255009  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:48:45.255130  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHPort
	I1030 18:48:45.255205  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:48:45.255294  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHKeyPath
	I1030 18:48:45.255360  405809 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:48:45.255418  405809 main.go:141] libmachine: (ha-174833) Calling .GetSSHUsername
	I1030 18:48:45.255541  405809 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/ha-174833/id_rsa Username:docker}
	I1030 18:48:45.352407  405809 ssh_runner.go:195] Run: systemctl --version
	I1030 18:48:45.358445  405809 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 18:48:45.514593  405809 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 18:48:45.522449  405809 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 18:48:45.522542  405809 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 18:48:45.531569  405809 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1030 18:48:45.531589  405809 start.go:495] detecting cgroup driver to use...
	I1030 18:48:45.531643  405809 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 18:48:45.547166  405809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 18:48:45.561360  405809 docker.go:217] disabling cri-docker service (if available) ...
	I1030 18:48:45.561420  405809 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 18:48:45.574443  405809 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 18:48:45.587594  405809 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 18:48:45.725074  405809 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 18:48:45.863662  405809 docker.go:233] disabling docker service ...
	I1030 18:48:45.863747  405809 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 18:48:45.879212  405809 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 18:48:45.893161  405809 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 18:48:46.035262  405809 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 18:48:46.172338  405809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 18:48:46.185584  405809 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 18:48:46.204870  405809 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1030 18:48:46.204946  405809 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:48:46.214939  405809 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 18:48:46.215007  405809 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:48:46.224869  405809 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:48:46.234575  405809 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:48:46.244189  405809 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 18:48:46.254255  405809 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:48:46.264010  405809 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:48:46.275071  405809 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 18:48:46.284990  405809 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 18:48:46.294149  405809 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 18:48:46.303524  405809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 18:48:46.439828  405809 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1030 18:48:52.179761  405809 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.739889395s)
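Note: the cri-o reconfiguration above is done with in-place sed rewrites over SSH (pause_image, cgroup_manager, conmon_cgroup, default_sysctls), followed by a daemon-reload and a crio restart. As an illustration only, the Go sketch below performs the equivalent whole-line rewrite locally with a regular expression; minikube itself shells out to sed exactly as shown, so this is not its implementation.

// crioconf_sketch.go - an illustrative equivalent of the sed rewrites above.
package main

import (
	"fmt"
	"regexp"
)

// setOption rewrites any existing `key = ...` line in a crio drop-in config
// to `key = "value"`, mirroring the `sed -i 's|^.*key = .*$|...|'` calls above.
func setOption(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAllString(conf, fmt.Sprintf("%s = %q", key, value))
}

func main() {
	// Hypothetical 02-crio.conf contents, for demonstration only.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
`
	conf = setOption(conf, "pause_image", "registry.k8s.io/pause:3.10")
	conf = setOption(conf, "cgroup_manager", "cgroupfs")
	fmt.Print(conf)
}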
	I1030 18:48:52.179805  405809 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 18:48:52.179870  405809 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 18:48:52.185302  405809 start.go:563] Will wait 60s for crictl version
	I1030 18:48:52.185355  405809 ssh_runner.go:195] Run: which crictl
	I1030 18:48:52.190886  405809 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 18:48:52.225783  405809 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1030 18:48:52.225874  405809 ssh_runner.go:195] Run: crio --version
	I1030 18:48:52.255493  405809 ssh_runner.go:195] Run: crio --version
	I1030 18:48:52.286030  405809 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1030 18:48:52.287571  405809 main.go:141] libmachine: (ha-174833) Calling .GetIP
	I1030 18:48:52.290148  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:52.290585  405809 main.go:141] libmachine: (ha-174833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:5e:ca", ip: ""} in network mk-ha-174833: {Iface:virbr1 ExpiryTime:2024-10-30 19:39:27 +0000 UTC Type:0 Mac:52:54:00:fd:5e:ca Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-174833 Clientid:01:52:54:00:fd:5e:ca}
	I1030 18:48:52.290606  405809 main.go:141] libmachine: (ha-174833) DBG | domain ha-174833 has defined IP address 192.168.39.141 and MAC address 52:54:00:fd:5e:ca in network mk-ha-174833
	I1030 18:48:52.290812  405809 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1030 18:48:52.295713  405809 kubeadm.go:883] updating cluster {Name:ha-174833 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-174833 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.123 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-stor
ageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1030 18:48:52.295891  405809 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 18:48:52.295950  405809 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 18:48:52.341835  405809 crio.go:514] all images are preloaded for cri-o runtime.
	I1030 18:48:52.341861  405809 crio.go:433] Images already preloaded, skipping extraction
	I1030 18:48:52.341913  405809 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 18:48:52.377228  405809 crio.go:514] all images are preloaded for cri-o runtime.
	I1030 18:48:52.377255  405809 cache_images.go:84] Images are preloaded, skipping loading
	I1030 18:48:52.377270  405809 kubeadm.go:934] updating node { 192.168.39.141 8443 v1.31.2 crio true true} ...
	I1030 18:48:52.377398  405809 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-174833 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.141
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-174833 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1030 18:48:52.377496  405809 ssh_runner.go:195] Run: crio config
	I1030 18:48:52.427574  405809 cni.go:84] Creating CNI manager for ""
	I1030 18:48:52.427607  405809 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1030 18:48:52.427621  405809 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1030 18:48:52.427664  405809 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.141 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-174833 NodeName:ha-174833 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.141"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.141 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1030 18:48:52.427829  405809 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.141
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-174833"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.141"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.141"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1030 18:48:52.427857  405809 kube-vip.go:115] generating kube-vip config ...
	I1030 18:48:52.427913  405809 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1030 18:48:52.440406  405809 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1030 18:48:52.440524  405809 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
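Note: the kube-vip static-pod manifest above is generated with the cluster VIP (192.168.39.254), API port 8443 and control-plane load-balancing enabled. The Go sketch below renders a comparable fragment with text/template; the template text, field names and struct are illustrative assumptions and do not come from minikube's kube-vip.go.

// kubevip_sketch.go - a minimal sketch of templating a kube-vip fragment.
package main

import (
	"os"
	"text/template"
)

// fragment mirrors a few env entries from the manifest above (illustrative).
const fragment = `    - name: address
      value: {{ .VIP }}
    - name: lb_enable
      value: "{{ .EnableLB }}"
    - name: lb_port
      value: "{{ .Port }}"
`

type vipParams struct {
	VIP      string
	EnableLB bool
	Port     int
}

func main() {
	tmpl := template.Must(template.New("kube-vip").Parse(fragment))
	// Values taken from the generated manifest in the log above.
	p := vipParams{VIP: "192.168.39.254", EnableLB: true, Port: 8443}
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}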
	I1030 18:48:52.440582  405809 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1030 18:48:52.450600  405809 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 18:48:52.450666  405809 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1030 18:48:52.460260  405809 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1030 18:48:52.476571  405809 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 18:48:52.492782  405809 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1030 18:48:52.508573  405809 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1030 18:48:52.528895  405809 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1030 18:48:52.532947  405809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 18:48:52.671968  405809 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 18:48:52.687009  405809 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833 for IP: 192.168.39.141
	I1030 18:48:52.687034  405809 certs.go:194] generating shared ca certs ...
	I1030 18:48:52.687052  405809 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:48:52.687242  405809 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 18:48:52.687299  405809 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 18:48:52.687313  405809 certs.go:256] generating profile certs ...
	I1030 18:48:52.687417  405809 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/client.key
	I1030 18:48:52.687450  405809 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.b8dbd547
	I1030 18:48:52.687472  405809 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.b8dbd547 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.141 192.168.39.67 192.168.39.238 192.168.39.254]
	I1030 18:48:52.838941  405809 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.b8dbd547 ...
	I1030 18:48:52.838980  405809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.b8dbd547: {Name:mk5856b10a29cc4bdc3c17d5e90cfc2c8c466cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:48:52.839188  405809 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.b8dbd547 ...
	I1030 18:48:52.839205  405809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.b8dbd547: {Name:mkc99e20ca22843d24c345fcec0771c78bd2ed96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 18:48:52.839304  405809 certs.go:381] copying /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt.b8dbd547 -> /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt
	I1030 18:48:52.839518  405809 certs.go:385] copying /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key.b8dbd547 -> /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key
	I1030 18:48:52.839704  405809 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key
	I1030 18:48:52.839725  405809 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1030 18:48:52.839742  405809 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1030 18:48:52.839761  405809 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1030 18:48:52.839779  405809 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1030 18:48:52.839800  405809 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1030 18:48:52.839826  405809 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1030 18:48:52.839840  405809 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1030 18:48:52.839854  405809 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1030 18:48:52.839921  405809 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem (1338 bytes)
	W1030 18:48:52.839955  405809 certs.go:480] ignoring /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144_empty.pem, impossibly tiny 0 bytes
	I1030 18:48:52.839965  405809 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 18:48:52.839991  405809 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 18:48:52.840014  405809 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 18:48:52.840038  405809 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 18:48:52.840080  405809 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 18:48:52.840109  405809 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> /usr/share/ca-certificates/3891442.pem
	I1030 18:48:52.840123  405809 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:48:52.840135  405809 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem -> /usr/share/ca-certificates/389144.pem
	I1030 18:48:52.840837  405809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 18:48:52.866241  405809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 18:48:52.889998  405809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 18:48:52.914242  405809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 18:48:52.937565  405809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1030 18:48:52.961504  405809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1030 18:48:52.985113  405809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 18:48:53.008692  405809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/ha-174833/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1030 18:48:53.032147  405809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /usr/share/ca-certificates/3891442.pem (1708 bytes)
	I1030 18:48:53.055678  405809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 18:48:53.078148  405809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem --> /usr/share/ca-certificates/389144.pem (1338 bytes)
	I1030 18:48:53.101424  405809 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1030 18:48:53.117631  405809 ssh_runner.go:195] Run: openssl version
	I1030 18:48:53.123504  405809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3891442.pem && ln -fs /usr/share/ca-certificates/3891442.pem /etc/ssl/certs/3891442.pem"
	I1030 18:48:53.134261  405809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3891442.pem
	I1030 18:48:53.138656  405809 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 18:48:53.138712  405809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3891442.pem
	I1030 18:48:53.144313  405809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3891442.pem /etc/ssl/certs/3ec20f2e.0"
	I1030 18:48:53.153623  405809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 18:48:53.165094  405809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:48:53.169642  405809 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:48:53.169695  405809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 18:48:53.175484  405809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 18:48:53.185258  405809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/389144.pem && ln -fs /usr/share/ca-certificates/389144.pem /etc/ssl/certs/389144.pem"
	I1030 18:48:53.196105  405809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/389144.pem
	I1030 18:48:53.200467  405809 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 18:48:53.200505  405809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/389144.pem
	I1030 18:48:53.205961  405809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/389144.pem /etc/ssl/certs/51391683.0"
	I1030 18:48:53.215513  405809 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 18:48:53.219863  405809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1030 18:48:53.225424  405809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1030 18:48:53.230960  405809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1030 18:48:53.236446  405809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1030 18:48:53.242187  405809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1030 18:48:53.247733  405809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1030 18:48:53.253171  405809 kubeadm.go:392] StartCluster: {Name:ha-174833 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-174833 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.123 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storage
class:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 18:48:53.253353  405809 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1030 18:48:53.253421  405809 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 18:48:53.294279  405809 cri.go:89] found id: "7849ab28d6c691374252169550d3d021bd1631d65174aa6a807f5ebc3396154c"
	I1030 18:48:53.294318  405809 cri.go:89] found id: "e3c793bf4653d359eff5aba90cb22c8adf85630cb953511e47067946527a1eac"
	I1030 18:48:53.294325  405809 cri.go:89] found id: "a468c79700aa34918090d87cf32ed72f1d49f5b75dae53935cb3982ce827f5d5"
	I1030 18:48:53.294329  405809 cri.go:89] found id: "07374565cf0faf4679e84e01467f01d341a24035c230d69813103d9a9d744ec5"
	I1030 18:48:53.294334  405809 cri.go:89] found id: "b50f8293a0eac5d428cb2cdd59140c816100bc3a83890343a8ecc1a0e0699009"
	I1030 18:48:53.294338  405809 cri.go:89] found id: "80919506252b4e3df3db12ac62352ffcfc65a784996bbef7adcefc782480a86f"
	I1030 18:48:53.294343  405809 cri.go:89] found id: "46301d1401a148dee81e2e167582dcebc8e9ce533e3150308e7eb51799e4f1ef"
	I1030 18:48:53.294353  405809 cri.go:89] found id: "634060e657ba25894ea35d4e8b8429b57de139396a52f53af88876e683de5740"
	I1030 18:48:53.294360  405809 cri.go:89] found id: "da8b9126272c4605fcadd37917b2874bb833efc9c16584c83ac4d1307f59c62a"
	I1030 18:48:53.294373  405809 cri.go:89] found id: "6f0fb508f1f86d4bc039adab40689367f34f5141a90043b092e43663b1f959a6"
	I1030 18:48:53.294380  405809 cri.go:89] found id: "db863ebdc17e0d81937b5738a5fe433285fd91846fa54c761cf0cd4cd4987b73"
	I1030 18:48:53.294385  405809 cri.go:89] found id: "381be95e92ca6d096c838a11caffeef70b5bdf29633660086db3a591af3efa3c"
	I1030 18:48:53.294391  405809 cri.go:89] found id: "661ed7108dbf576219f0d0850f1ff6e5884d78718a480882365b8f283ab57acb"
	I1030 18:48:53.294397  405809 cri.go:89] found id: ""
	I1030 18:48:53.294452  405809 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
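For context on the post-mortem log above: minikube validates the control-plane certificates by shelling out to "openssl x509 -noout -in <cert> -checkend 86400", which succeeds only if the certificate is still valid 24 hours from now. The following is a minimal, self-contained Go sketch of that same check, not minikube's actual implementation; the file path is a placeholder.

	// certcheck.go - sketch of what "openssl x509 -checkend 86400" verifies:
	// does the certificate expire within the next 24 hours?
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		// True when "now + d" is past NotAfter, i.e. the cert expires within d.
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		// Placeholder path; the log above checks files under /var/lib/minikube/certs/.
		soon, err := expiresWithin("apiserver.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if soon {
			fmt.Println("certificate expires within 24h")
		} else {
			fmt.Println("certificate valid for at least another 24h")
		}
	}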
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-174833 -n ha-174833
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-174833 -n ha-174833: exit status 2 (220.322865ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-174833" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (158.22s)
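A note on the "--format={{.APIServer}}" invocation in the post-mortem above: minikube renders status output through a Go text/template, so the flag selects a single field from the status structure. The sketch below only illustrates that mechanism; the struct and field names are assumptions for the example, not minikube's exact types.

	package main

	import (
		"os"
		"text/template"
	)

	// Status mimics the shape of a per-node status report; the real minikube
	// struct may differ - this only shows how "{{.APIServer}}" picks one field.
	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
		// Prints "Stopped", matching the captured output above.
		_ = tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"})
	}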

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (328.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-743795
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-743795
E1030 19:08:17.246137  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/functional-683899/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:08:21.783653  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-743795: exit status 82 (2m1.875788666s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-743795-m03"  ...
	* Stopping node "multinode-743795-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-743795" : exit status 82
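In this run, exit status 82 accompanies the GUEST_STOP_TIMEOUT error shown in the stderr block; the harness surfaces it by inspecting the process exit code of the minikube binary. Below is a minimal sketch of how a caller could do the same with os/exec (binary path and profile name copied from the log; this is not the test harness code itself).

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "stop", "-p", "multinode-743795")
		out, err := cmd.CombinedOutput()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// A failed stop (e.g. exit status 82) lands here with the captured output.
			fmt.Printf("minikube stop exited with status %d\n%s", exitErr.ExitCode(), out)
			return
		}
		if err != nil {
			fmt.Println("failed to run the command:", err)
			return
		}
		fmt.Println("minikube stop succeeded")
	}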
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-743795 --wait=true -v=8 --alsologtostderr
E1030 19:10:18.709860  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-743795 --wait=true -v=8 --alsologtostderr: (3m24.164662778s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-743795
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-743795 -n multinode-743795
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-743795 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-743795 logs -n 25: (2.079204445s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-743795 ssh -n                                                                 | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:06 UTC |
	|         | multinode-743795-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-743795 cp multinode-743795-m02:/home/docker/cp-test.txt                       | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:06 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1456195063/001/cp-test_multinode-743795-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-743795 ssh -n                                                                 | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:06 UTC |
	|         | multinode-743795-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-743795 cp multinode-743795-m02:/home/docker/cp-test.txt                       | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:06 UTC |
	|         | multinode-743795:/home/docker/cp-test_multinode-743795-m02_multinode-743795.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-743795 ssh -n                                                                 | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:06 UTC |
	|         | multinode-743795-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-743795 ssh -n multinode-743795 sudo cat                                       | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:06 UTC |
	|         | /home/docker/cp-test_multinode-743795-m02_multinode-743795.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-743795 cp multinode-743795-m02:/home/docker/cp-test.txt                       | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:06 UTC |
	|         | multinode-743795-m03:/home/docker/cp-test_multinode-743795-m02_multinode-743795-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-743795 ssh -n                                                                 | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:06 UTC |
	|         | multinode-743795-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-743795 ssh -n multinode-743795-m03 sudo cat                                   | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:06 UTC |
	|         | /home/docker/cp-test_multinode-743795-m02_multinode-743795-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-743795 cp testdata/cp-test.txt                                                | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:06 UTC |
	|         | multinode-743795-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-743795 ssh -n                                                                 | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:06 UTC |
	|         | multinode-743795-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-743795 cp multinode-743795-m03:/home/docker/cp-test.txt                       | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:06 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1456195063/001/cp-test_multinode-743795-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-743795 ssh -n                                                                 | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:06 UTC |
	|         | multinode-743795-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-743795 cp multinode-743795-m03:/home/docker/cp-test.txt                       | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:06 UTC |
	|         | multinode-743795:/home/docker/cp-test_multinode-743795-m03_multinode-743795.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-743795 ssh -n                                                                 | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:06 UTC |
	|         | multinode-743795-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-743795 ssh -n multinode-743795 sudo cat                                       | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:06 UTC |
	|         | /home/docker/cp-test_multinode-743795-m03_multinode-743795.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-743795 cp multinode-743795-m03:/home/docker/cp-test.txt                       | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:06 UTC |
	|         | multinode-743795-m02:/home/docker/cp-test_multinode-743795-m03_multinode-743795-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-743795 ssh -n                                                                 | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:06 UTC |
	|         | multinode-743795-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-743795 ssh -n multinode-743795-m02 sudo cat                                   | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:06 UTC |
	|         | /home/docker/cp-test_multinode-743795-m03_multinode-743795-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-743795 node stop m03                                                          | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:06 UTC |
	| node    | multinode-743795 node start                                                             | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:07 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-743795                                                                | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:07 UTC |                     |
	| stop    | -p multinode-743795                                                                     | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:07 UTC |                     |
	| start   | -p multinode-743795                                                                     | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:09 UTC | 30 Oct 24 19:12 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-743795                                                                | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:12 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/30 19:09:03
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1030 19:09:03.346400  417097 out.go:345] Setting OutFile to fd 1 ...
	I1030 19:09:03.346643  417097 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 19:09:03.346652  417097 out.go:358] Setting ErrFile to fd 2...
	I1030 19:09:03.346657  417097 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 19:09:03.346856  417097 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19883-381834/.minikube/bin
	I1030 19:09:03.347388  417097 out.go:352] Setting JSON to false
	I1030 19:09:03.348384  417097 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":10286,"bootTime":1730305057,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1030 19:09:03.348492  417097 start.go:139] virtualization: kvm guest
	I1030 19:09:03.351827  417097 out.go:177] * [multinode-743795] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1030 19:09:03.353718  417097 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 19:09:03.353738  417097 notify.go:220] Checking for updates...
	I1030 19:09:03.357035  417097 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 19:09:03.358683  417097 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 19:09:03.360103  417097 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 19:09:03.361495  417097 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1030 19:09:03.362932  417097 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 19:09:03.364573  417097 config.go:182] Loaded profile config "multinode-743795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:09:03.364694  417097 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 19:09:03.365214  417097 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 19:09:03.365276  417097 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:09:03.381510  417097 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41259
	I1030 19:09:03.382091  417097 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:09:03.382805  417097 main.go:141] libmachine: Using API Version  1
	I1030 19:09:03.382831  417097 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:09:03.383221  417097 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:09:03.383457  417097 main.go:141] libmachine: (multinode-743795) Calling .DriverName
	I1030 19:09:03.419252  417097 out.go:177] * Using the kvm2 driver based on existing profile
	I1030 19:09:03.420656  417097 start.go:297] selected driver: kvm2
	I1030 19:09:03.420672  417097 start.go:901] validating driver "kvm2" against &{Name:multinode-743795 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.2 ClusterName:multinode-743795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.115 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress
:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:dock
er BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:09:03.420863  417097 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 19:09:03.421296  417097 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 19:09:03.421386  417097 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19883-381834/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1030 19:09:03.436624  417097 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1030 19:09:03.437321  417097 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 19:09:03.437356  417097 cni.go:84] Creating CNI manager for ""
	I1030 19:09:03.437424  417097 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1030 19:09:03.437505  417097 start.go:340] cluster config:
	{Name:multinode-743795 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-743795 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.115 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisio
ner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:09:03.437649  417097 iso.go:125] acquiring lock: {Name:mk4a5115b605a7a5e9069193daa664d721189792 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 19:09:03.439589  417097 out.go:177] * Starting "multinode-743795" primary control-plane node in "multinode-743795" cluster
	I1030 19:09:03.440977  417097 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 19:09:03.441046  417097 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1030 19:09:03.441066  417097 cache.go:56] Caching tarball of preloaded images
	I1030 19:09:03.441153  417097 preload.go:172] Found /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1030 19:09:03.441191  417097 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1030 19:09:03.441330  417097 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/multinode-743795/config.json ...
	I1030 19:09:03.441589  417097 start.go:360] acquireMachinesLock for multinode-743795: {Name:mk35e25f53fa8cfadb39ca0ecdccfc2b3fbe845b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 19:09:03.441643  417097 start.go:364] duration metric: took 27.33µs to acquireMachinesLock for "multinode-743795"
	I1030 19:09:03.441657  417097 start.go:96] Skipping create...Using existing machine configuration
	I1030 19:09:03.441670  417097 fix.go:54] fixHost starting: 
	I1030 19:09:03.441918  417097 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 19:09:03.441956  417097 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:09:03.456427  417097 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37897
	I1030 19:09:03.456802  417097 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:09:03.457323  417097 main.go:141] libmachine: Using API Version  1
	I1030 19:09:03.457342  417097 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:09:03.457664  417097 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:09:03.457897  417097 main.go:141] libmachine: (multinode-743795) Calling .DriverName
	I1030 19:09:03.458087  417097 main.go:141] libmachine: (multinode-743795) Calling .GetState
	I1030 19:09:03.459585  417097 fix.go:112] recreateIfNeeded on multinode-743795: state=Running err=<nil>
	W1030 19:09:03.459614  417097 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 19:09:03.461462  417097 out.go:177] * Updating the running kvm2 "multinode-743795" VM ...
	I1030 19:09:03.462836  417097 machine.go:93] provisionDockerMachine start ...
	I1030 19:09:03.462854  417097 main.go:141] libmachine: (multinode-743795) Calling .DriverName
	I1030 19:09:03.463073  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHHostname
	I1030 19:09:03.465540  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:09:03.465944  417097 main.go:141] libmachine: (multinode-743795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:70:0e", ip: ""} in network mk-multinode-743795: {Iface:virbr1 ExpiryTime:2024-10-30 20:03:25 +0000 UTC Type:0 Mac:52:54:00:97:70:0e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-743795 Clientid:01:52:54:00:97:70:0e}
	I1030 19:09:03.465976  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined IP address 192.168.39.241 and MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:09:03.466108  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHPort
	I1030 19:09:03.466292  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHKeyPath
	I1030 19:09:03.466423  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHKeyPath
	I1030 19:09:03.466570  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHUsername
	I1030 19:09:03.466747  417097 main.go:141] libmachine: Using SSH client type: native
	I1030 19:09:03.467010  417097 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I1030 19:09:03.467028  417097 main.go:141] libmachine: About to run SSH command:
	hostname
	I1030 19:09:03.588207  417097 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-743795
	
	I1030 19:09:03.588245  417097 main.go:141] libmachine: (multinode-743795) Calling .GetMachineName
	I1030 19:09:03.588546  417097 buildroot.go:166] provisioning hostname "multinode-743795"
	I1030 19:09:03.588582  417097 main.go:141] libmachine: (multinode-743795) Calling .GetMachineName
	I1030 19:09:03.588762  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHHostname
	I1030 19:09:03.591619  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:09:03.591993  417097 main.go:141] libmachine: (multinode-743795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:70:0e", ip: ""} in network mk-multinode-743795: {Iface:virbr1 ExpiryTime:2024-10-30 20:03:25 +0000 UTC Type:0 Mac:52:54:00:97:70:0e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-743795 Clientid:01:52:54:00:97:70:0e}
	I1030 19:09:03.592030  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined IP address 192.168.39.241 and MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:09:03.592153  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHPort
	I1030 19:09:03.592324  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHKeyPath
	I1030 19:09:03.592671  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHKeyPath
	I1030 19:09:03.592823  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHUsername
	I1030 19:09:03.592967  417097 main.go:141] libmachine: Using SSH client type: native
	I1030 19:09:03.593156  417097 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I1030 19:09:03.593168  417097 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-743795 && echo "multinode-743795" | sudo tee /etc/hostname
	I1030 19:09:03.721845  417097 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-743795
	
	I1030 19:09:03.721881  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHHostname
	I1030 19:09:03.724764  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:09:03.725119  417097 main.go:141] libmachine: (multinode-743795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:70:0e", ip: ""} in network mk-multinode-743795: {Iface:virbr1 ExpiryTime:2024-10-30 20:03:25 +0000 UTC Type:0 Mac:52:54:00:97:70:0e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-743795 Clientid:01:52:54:00:97:70:0e}
	I1030 19:09:03.725137  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined IP address 192.168.39.241 and MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:09:03.725341  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHPort
	I1030 19:09:03.725509  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHKeyPath
	I1030 19:09:03.725662  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHKeyPath
	I1030 19:09:03.725829  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHUsername
	I1030 19:09:03.726094  417097 main.go:141] libmachine: Using SSH client type: native
	I1030 19:09:03.726316  417097 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I1030 19:09:03.726347  417097 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-743795' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-743795/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-743795' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 19:09:03.843414  417097 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 19:09:03.843448  417097 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 19:09:03.843481  417097 buildroot.go:174] setting up certificates
	I1030 19:09:03.843493  417097 provision.go:84] configureAuth start
	I1030 19:09:03.843505  417097 main.go:141] libmachine: (multinode-743795) Calling .GetMachineName
	I1030 19:09:03.843811  417097 main.go:141] libmachine: (multinode-743795) Calling .GetIP
	I1030 19:09:03.846465  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:09:03.846928  417097 main.go:141] libmachine: (multinode-743795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:70:0e", ip: ""} in network mk-multinode-743795: {Iface:virbr1 ExpiryTime:2024-10-30 20:03:25 +0000 UTC Type:0 Mac:52:54:00:97:70:0e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-743795 Clientid:01:52:54:00:97:70:0e}
	I1030 19:09:03.846955  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined IP address 192.168.39.241 and MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:09:03.847100  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHHostname
	I1030 19:09:03.849287  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:09:03.849621  417097 main.go:141] libmachine: (multinode-743795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:70:0e", ip: ""} in network mk-multinode-743795: {Iface:virbr1 ExpiryTime:2024-10-30 20:03:25 +0000 UTC Type:0 Mac:52:54:00:97:70:0e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-743795 Clientid:01:52:54:00:97:70:0e}
	I1030 19:09:03.849637  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined IP address 192.168.39.241 and MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:09:03.849771  417097 provision.go:143] copyHostCerts
	I1030 19:09:03.849800  417097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 19:09:03.849843  417097 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem, removing ...
	I1030 19:09:03.849858  417097 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 19:09:03.849924  417097 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 19:09:03.850021  417097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 19:09:03.850040  417097 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem, removing ...
	I1030 19:09:03.850047  417097 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 19:09:03.850072  417097 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 19:09:03.850130  417097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 19:09:03.850146  417097 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem, removing ...
	I1030 19:09:03.850153  417097 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 19:09:03.850173  417097 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 19:09:03.850233  417097 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.multinode-743795 san=[127.0.0.1 192.168.39.241 localhost minikube multinode-743795]
	I1030 19:09:04.095389  417097 provision.go:177] copyRemoteCerts
	I1030 19:09:04.095453  417097 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 19:09:04.095480  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHHostname
	I1030 19:09:04.098235  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:09:04.098721  417097 main.go:141] libmachine: (multinode-743795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:70:0e", ip: ""} in network mk-multinode-743795: {Iface:virbr1 ExpiryTime:2024-10-30 20:03:25 +0000 UTC Type:0 Mac:52:54:00:97:70:0e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-743795 Clientid:01:52:54:00:97:70:0e}
	I1030 19:09:04.098754  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined IP address 192.168.39.241 and MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:09:04.098937  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHPort
	I1030 19:09:04.099190  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHKeyPath
	I1030 19:09:04.099339  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHUsername
	I1030 19:09:04.099477  417097 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/multinode-743795/id_rsa Username:docker}
	I1030 19:09:04.190390  417097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1030 19:09:04.190460  417097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1030 19:09:04.215582  417097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1030 19:09:04.215649  417097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 19:09:04.239505  417097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1030 19:09:04.239579  417097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1030 19:09:04.264899  417097 provision.go:87] duration metric: took 421.391175ms to configureAuth
	I1030 19:09:04.264931  417097 buildroot.go:189] setting minikube options for container-runtime
	I1030 19:09:04.265222  417097 config.go:182] Loaded profile config "multinode-743795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:09:04.265314  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHHostname
	I1030 19:09:04.268310  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:09:04.268688  417097 main.go:141] libmachine: (multinode-743795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:70:0e", ip: ""} in network mk-multinode-743795: {Iface:virbr1 ExpiryTime:2024-10-30 20:03:25 +0000 UTC Type:0 Mac:52:54:00:97:70:0e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-743795 Clientid:01:52:54:00:97:70:0e}
	I1030 19:09:04.268711  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined IP address 192.168.39.241 and MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:09:04.268844  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHPort
	I1030 19:09:04.269124  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHKeyPath
	I1030 19:09:04.269269  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHKeyPath
	I1030 19:09:04.269421  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHUsername
	I1030 19:09:04.269564  417097 main.go:141] libmachine: Using SSH client type: native
	I1030 19:09:04.269737  417097 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I1030 19:09:04.269750  417097 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 19:10:34.945481  417097 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 19:10:34.945513  417097 machine.go:96] duration metric: took 1m31.482664425s to provisionDockerMachine
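The provisioning pass above spends almost all of its 1m31.48s inside the single SSH command that writes /etc/sysconfig/crio.minikube (adding --insecure-registry for the service CIDR) and then restarts CRI-O: the command is sent at 19:09:04 and only returns at 19:10:34, presumably because the crio restart blocks until the runtime is back up. A minimal standalone sketch of the same round trip, using golang.org/x/crypto/ssh rather than minikube's own sshutil/ssh_runner helpers; the address, user and key path are copied from the log lines above and error handling is trimmed:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key, user and address taken from the sshutil lines in the log above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19883-381834/.minikube/machines/multinode-743795/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	client, err := ssh.Dial("tcp", "192.168.39.241:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	})
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// The compound command from the log: write the sysconfig drop-in, then restart CRI-O.
	cmd := `sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
	out, err := sess.CombinedOutput(cmd)
	fmt.Print(string(out))
	if err != nil {
		log.Fatal(err)
	}
}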
	I1030 19:10:34.945534  417097 start.go:293] postStartSetup for "multinode-743795" (driver="kvm2")
	I1030 19:10:34.945558  417097 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 19:10:34.945585  417097 main.go:141] libmachine: (multinode-743795) Calling .DriverName
	I1030 19:10:34.945878  417097 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 19:10:34.945914  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHHostname
	I1030 19:10:34.949014  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:10:34.949476  417097 main.go:141] libmachine: (multinode-743795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:70:0e", ip: ""} in network mk-multinode-743795: {Iface:virbr1 ExpiryTime:2024-10-30 20:03:25 +0000 UTC Type:0 Mac:52:54:00:97:70:0e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-743795 Clientid:01:52:54:00:97:70:0e}
	I1030 19:10:34.949526  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined IP address 192.168.39.241 and MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:10:34.949649  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHPort
	I1030 19:10:34.949867  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHKeyPath
	I1030 19:10:34.950034  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHUsername
	I1030 19:10:34.950203  417097 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/multinode-743795/id_rsa Username:docker}
	I1030 19:10:35.038497  417097 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 19:10:35.042561  417097 command_runner.go:130] > NAME=Buildroot
	I1030 19:10:35.042577  417097 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1030 19:10:35.042582  417097 command_runner.go:130] > ID=buildroot
	I1030 19:10:35.042587  417097 command_runner.go:130] > VERSION_ID=2023.02.9
	I1030 19:10:35.042602  417097 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1030 19:10:35.042885  417097 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 19:10:35.042907  417097 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 19:10:35.042965  417097 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 19:10:35.043060  417097 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 19:10:35.043070  417097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> /etc/ssl/certs/3891442.pem
	I1030 19:10:35.043164  417097 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 19:10:35.052923  417097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:10:35.077304  417097 start.go:296] duration metric: took 131.745248ms for postStartSetup
	I1030 19:10:35.077369  417097 fix.go:56] duration metric: took 1m31.635698193s for fixHost
	I1030 19:10:35.077399  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHHostname
	I1030 19:10:35.080248  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:10:35.080717  417097 main.go:141] libmachine: (multinode-743795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:70:0e", ip: ""} in network mk-multinode-743795: {Iface:virbr1 ExpiryTime:2024-10-30 20:03:25 +0000 UTC Type:0 Mac:52:54:00:97:70:0e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-743795 Clientid:01:52:54:00:97:70:0e}
	I1030 19:10:35.080749  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined IP address 192.168.39.241 and MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:10:35.080936  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHPort
	I1030 19:10:35.081224  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHKeyPath
	I1030 19:10:35.081395  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHKeyPath
	I1030 19:10:35.081514  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHUsername
	I1030 19:10:35.081668  417097 main.go:141] libmachine: Using SSH client type: native
	I1030 19:10:35.081853  417097 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I1030 19:10:35.081866  417097 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 19:10:35.195307  417097 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730315435.164013525
	
	I1030 19:10:35.195343  417097 fix.go:216] guest clock: 1730315435.164013525
	I1030 19:10:35.195355  417097 fix.go:229] Guest: 2024-10-30 19:10:35.164013525 +0000 UTC Remote: 2024-10-30 19:10:35.077375603 +0000 UTC m=+91.772522355 (delta=86.637922ms)
	I1030 19:10:35.195387  417097 fix.go:200] guest clock delta is within tolerance: 86.637922ms
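The clock check above runs date +%s.%N on the guest, parses the seconds.nanoseconds value, and compares it with the host-side timestamp taken when the command returned; here the 86.6ms delta is accepted. A small parse-and-compare sketch using the value captured above (the 2s tolerance is an assumption; the log does not print the actual threshold):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts date +%s.%N output (e.g. "1730315435.164013525",
// the value captured in the log above) into a time.Time. It assumes the
// 9-digit nanosecond field that date prints.
func parseGuestClock(s string) (time.Time, error) {
	secStr, nsecStr, _ := strings.Cut(strings.TrimSpace(s), ".")
	sec, err := strconv.ParseInt(secStr, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if nsecStr != "" {
		nsec, err = strconv.ParseInt(nsecStr, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1730315435.164013525")
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(time.Now())
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold; the real value lives in minikube's fix.go
	if delta < tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would sync the guest clock\n", delta)
	}
}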
	I1030 19:10:35.195396  417097 start.go:83] releasing machines lock for "multinode-743795", held for 1m31.75374356s
	I1030 19:10:35.195426  417097 main.go:141] libmachine: (multinode-743795) Calling .DriverName
	I1030 19:10:35.195710  417097 main.go:141] libmachine: (multinode-743795) Calling .GetIP
	I1030 19:10:35.198527  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:10:35.198976  417097 main.go:141] libmachine: (multinode-743795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:70:0e", ip: ""} in network mk-multinode-743795: {Iface:virbr1 ExpiryTime:2024-10-30 20:03:25 +0000 UTC Type:0 Mac:52:54:00:97:70:0e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-743795 Clientid:01:52:54:00:97:70:0e}
	I1030 19:10:35.199008  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined IP address 192.168.39.241 and MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:10:35.199081  417097 main.go:141] libmachine: (multinode-743795) Calling .DriverName
	I1030 19:10:35.199704  417097 main.go:141] libmachine: (multinode-743795) Calling .DriverName
	I1030 19:10:35.199893  417097 main.go:141] libmachine: (multinode-743795) Calling .DriverName
	I1030 19:10:35.199991  417097 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 19:10:35.200047  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHHostname
	I1030 19:10:35.200092  417097 ssh_runner.go:195] Run: cat /version.json
	I1030 19:10:35.200120  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHHostname
	I1030 19:10:35.202729  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:10:35.202872  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:10:35.203137  417097 main.go:141] libmachine: (multinode-743795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:70:0e", ip: ""} in network mk-multinode-743795: {Iface:virbr1 ExpiryTime:2024-10-30 20:03:25 +0000 UTC Type:0 Mac:52:54:00:97:70:0e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-743795 Clientid:01:52:54:00:97:70:0e}
	I1030 19:10:35.203165  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined IP address 192.168.39.241 and MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:10:35.203274  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHPort
	I1030 19:10:35.203398  417097 main.go:141] libmachine: (multinode-743795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:70:0e", ip: ""} in network mk-multinode-743795: {Iface:virbr1 ExpiryTime:2024-10-30 20:03:25 +0000 UTC Type:0 Mac:52:54:00:97:70:0e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-743795 Clientid:01:52:54:00:97:70:0e}
	I1030 19:10:35.203418  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHKeyPath
	I1030 19:10:35.203418  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined IP address 192.168.39.241 and MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:10:35.203559  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHUsername
	I1030 19:10:35.203637  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHPort
	I1030 19:10:35.203830  417097 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/multinode-743795/id_rsa Username:docker}
	I1030 19:10:35.203846  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHKeyPath
	I1030 19:10:35.204002  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHUsername
	I1030 19:10:35.204147  417097 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/multinode-743795/id_rsa Username:docker}
	I1030 19:10:35.306131  417097 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1030 19:10:35.306227  417097 command_runner.go:130] > {"iso_version": "v1.34.0-1730282777-19883", "kicbase_version": "v0.0.45-1730110049-19872", "minikube_version": "v1.34.0", "commit": "7738213fbe7cb3f4867f3e3b534798700ea0e3fb"}
	I1030 19:10:35.306376  417097 ssh_runner.go:195] Run: systemctl --version
	I1030 19:10:35.312360  417097 command_runner.go:130] > systemd 252 (252)
	I1030 19:10:35.312420  417097 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1030 19:10:35.312532  417097 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 19:10:35.481134  417097 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1030 19:10:35.489226  417097 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1030 19:10:35.489576  417097 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 19:10:35.489644  417097 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 19:10:35.501005  417097 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
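The two checks above look for stray CNI configs under /etc/cni/net.d: the loopback stat finds nothing, and the find/mv pass would rename any bridge or podman configs to *.mk_disabled so only the CNI minikube manages stays active (this run found none to disable). A rough local sketch of that rename, assuming direct filesystem access rather than the remote sudo find:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, err := filepath.Glob(pattern)
		if err != nil {
			panic(err)
		}
		for _, path := range matches {
			if strings.HasSuffix(path, ".mk_disabled") {
				continue // already disabled on a previous pass
			}
			if err := os.Rename(path, path+".mk_disabled"); err != nil {
				fmt.Println("rename failed:", err)
				continue
			}
			fmt.Println("disabled", path)
		}
	}
}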
	I1030 19:10:35.501036  417097 start.go:495] detecting cgroup driver to use...
	I1030 19:10:35.501137  417097 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 19:10:35.517939  417097 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 19:10:35.531469  417097 docker.go:217] disabling cri-docker service (if available) ...
	I1030 19:10:35.531539  417097 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 19:10:35.544690  417097 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 19:10:35.558315  417097 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 19:10:35.714305  417097 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 19:10:35.870178  417097 docker.go:233] disabling docker service ...
	I1030 19:10:35.870267  417097 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 19:10:35.891238  417097 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 19:10:35.905412  417097 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 19:10:36.049787  417097 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 19:10:36.195346  417097 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
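Disabling the Docker runtime above is a fixed sequence of systemctl calls (stop the socket and service, disable the socket, mask the service) followed by an is-active probe. A local sketch of the same sequence via os/exec; it needs root, and unlike the test harness it does not run through sudo over SSH or tolerate missing units:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	steps := [][]string{
		{"systemctl", "stop", "-f", "docker.socket"},
		{"systemctl", "stop", "-f", "docker.service"},
		{"systemctl", "disable", "docker.socket"},
		{"systemctl", "mask", "docker.service"},
	}
	for _, args := range steps {
		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
		if err != nil {
			fmt.Printf("%v failed: %v (%s)\n", args, err, out)
			continue
		}
		fmt.Println("ok:", args)
	}
}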
	I1030 19:10:36.208762  417097 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 19:10:36.227419  417097 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
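The crictl.yaml step writes a single line that points crictl at the CRI-O socket, exactly the content echoed back above. A local sketch of that write (requires root on a real host; the run above does it with sudo tee over SSH):

package main

import (
	"log"
	"os"
)

func main() {
	// Same content the step above pipes through sudo tee.
	const crictlConfig = "runtime-endpoint: unix:///var/run/crio/crio.sock\n"
	if err := os.MkdirAll("/etc", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/crictl.yaml", []byte(crictlConfig), 0o644); err != nil {
		log.Fatal(err)
	}
}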
	I1030 19:10:36.227460  417097 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1030 19:10:36.227515  417097 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:10:36.238017  417097 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 19:10:36.238085  417097 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:10:36.248499  417097 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:10:36.258812  417097 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:10:36.268854  417097 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 19:10:36.279085  417097 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:10:36.289679  417097 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:10:36.300625  417097 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
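The sed pipeline above edits /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10, force the cgroupfs cgroup manager, set conmon_cgroup to pod, and make sure net.ipv4.ip_unprivileged_port_start=0 ends up in default_sysctls. A sketch of the first two substitutions using Go regexp; the starting config content is invented for illustration and file I/O is omitted:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
`
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)

	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}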
	I1030 19:10:36.311289  417097 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 19:10:36.321350  417097 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1030 19:10:36.321515  417097 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 19:10:36.330963  417097 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:10:36.467659  417097 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1030 19:10:36.662324  417097 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 19:10:36.662410  417097 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 19:10:36.667598  417097 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1030 19:10:36.667627  417097 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1030 19:10:36.667637  417097 command_runner.go:130] > Device: 0,22	Inode: 1265        Links: 1
	I1030 19:10:36.667648  417097 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1030 19:10:36.667656  417097 command_runner.go:130] > Access: 2024-10-30 19:10:36.529385148 +0000
	I1030 19:10:36.667666  417097 command_runner.go:130] > Modify: 2024-10-30 19:10:36.529385148 +0000
	I1030 19:10:36.667674  417097 command_runner.go:130] > Change: 2024-10-30 19:10:36.529385148 +0000
	I1030 19:10:36.667681  417097 command_runner.go:130] >  Birth: -
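The stat output above confirms the CRI-O socket came back well within the 60s the run is prepared to wait. A minimal polling loop with the same intent, checking locally rather than over SSH:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists as a unix socket or the timeout expires.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if info, err := os.Stat(path); err == nil && info.Mode()&os.ModeSocket != 0 {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println("crio.sock is ready")
}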
	I1030 19:10:36.668009  417097 start.go:563] Will wait 60s for crictl version
	I1030 19:10:36.668074  417097 ssh_runner.go:195] Run: which crictl
	I1030 19:10:36.671769  417097 command_runner.go:130] > /usr/bin/crictl
	I1030 19:10:36.671981  417097 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 19:10:36.709417  417097 command_runner.go:130] > Version:  0.1.0
	I1030 19:10:36.709446  417097 command_runner.go:130] > RuntimeName:  cri-o
	I1030 19:10:36.709453  417097 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1030 19:10:36.709460  417097 command_runner.go:130] > RuntimeApiVersion:  v1
	I1030 19:10:36.710680  417097 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
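The four fields above are what crictl version prints. Pulling them out is a one-pass scan over Key: value lines; a small sketch, independent of minikube's own parsing:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

func main() {
	// Sample output copied from the log above.
	out := `Version:  0.1.0
RuntimeName:  cri-o
RuntimeVersion:  1.29.1
RuntimeApiVersion:  v1
`
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		key, value, ok := strings.Cut(sc.Text(), ":")
		if !ok {
			continue
		}
		fields[strings.TrimSpace(key)] = strings.TrimSpace(value)
	}
	fmt.Printf("runtime %s %s (API %s)\n",
		fields["RuntimeName"], fields["RuntimeVersion"], fields["RuntimeApiVersion"])
}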
	I1030 19:10:36.710763  417097 ssh_runner.go:195] Run: crio --version
	I1030 19:10:36.737232  417097 command_runner.go:130] > crio version 1.29.1
	I1030 19:10:36.737253  417097 command_runner.go:130] > Version:        1.29.1
	I1030 19:10:36.737260  417097 command_runner.go:130] > GitCommit:      unknown
	I1030 19:10:36.737264  417097 command_runner.go:130] > GitCommitDate:  unknown
	I1030 19:10:36.737268  417097 command_runner.go:130] > GitTreeState:   clean
	I1030 19:10:36.737276  417097 command_runner.go:130] > BuildDate:      2024-10-30T14:24:06Z
	I1030 19:10:36.737282  417097 command_runner.go:130] > GoVersion:      go1.21.6
	I1030 19:10:36.737289  417097 command_runner.go:130] > Compiler:       gc
	I1030 19:10:36.737297  417097 command_runner.go:130] > Platform:       linux/amd64
	I1030 19:10:36.737303  417097 command_runner.go:130] > Linkmode:       dynamic
	I1030 19:10:36.737314  417097 command_runner.go:130] > BuildTags:      
	I1030 19:10:36.737320  417097 command_runner.go:130] >   containers_image_ostree_stub
	I1030 19:10:36.737326  417097 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1030 19:10:36.737330  417097 command_runner.go:130] >   btrfs_noversion
	I1030 19:10:36.737337  417097 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1030 19:10:36.737342  417097 command_runner.go:130] >   libdm_no_deferred_remove
	I1030 19:10:36.737347  417097 command_runner.go:130] >   seccomp
	I1030 19:10:36.737352  417097 command_runner.go:130] > LDFlags:          unknown
	I1030 19:10:36.737359  417097 command_runner.go:130] > SeccompEnabled:   true
	I1030 19:10:36.737363  417097 command_runner.go:130] > AppArmorEnabled:  false
	I1030 19:10:36.738360  417097 ssh_runner.go:195] Run: crio --version
	I1030 19:10:36.766906  417097 command_runner.go:130] > crio version 1.29.1
	I1030 19:10:36.766925  417097 command_runner.go:130] > Version:        1.29.1
	I1030 19:10:36.766930  417097 command_runner.go:130] > GitCommit:      unknown
	I1030 19:10:36.766934  417097 command_runner.go:130] > GitCommitDate:  unknown
	I1030 19:10:36.766938  417097 command_runner.go:130] > GitTreeState:   clean
	I1030 19:10:36.766944  417097 command_runner.go:130] > BuildDate:      2024-10-30T14:24:06Z
	I1030 19:10:36.766948  417097 command_runner.go:130] > GoVersion:      go1.21.6
	I1030 19:10:36.766952  417097 command_runner.go:130] > Compiler:       gc
	I1030 19:10:36.766957  417097 command_runner.go:130] > Platform:       linux/amd64
	I1030 19:10:36.766961  417097 command_runner.go:130] > Linkmode:       dynamic
	I1030 19:10:36.767001  417097 command_runner.go:130] > BuildTags:      
	I1030 19:10:36.767019  417097 command_runner.go:130] >   containers_image_ostree_stub
	I1030 19:10:36.767023  417097 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1030 19:10:36.767032  417097 command_runner.go:130] >   btrfs_noversion
	I1030 19:10:36.767045  417097 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1030 19:10:36.767049  417097 command_runner.go:130] >   libdm_no_deferred_remove
	I1030 19:10:36.767054  417097 command_runner.go:130] >   seccomp
	I1030 19:10:36.767058  417097 command_runner.go:130] > LDFlags:          unknown
	I1030 19:10:36.767063  417097 command_runner.go:130] > SeccompEnabled:   true
	I1030 19:10:36.767067  417097 command_runner.go:130] > AppArmorEnabled:  false
	I1030 19:10:36.770246  417097 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1030 19:10:36.771781  417097 main.go:141] libmachine: (multinode-743795) Calling .GetIP
	I1030 19:10:36.774310  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:10:36.774678  417097 main.go:141] libmachine: (multinode-743795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:70:0e", ip: ""} in network mk-multinode-743795: {Iface:virbr1 ExpiryTime:2024-10-30 20:03:25 +0000 UTC Type:0 Mac:52:54:00:97:70:0e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-743795 Clientid:01:52:54:00:97:70:0e}
	I1030 19:10:36.774707  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined IP address 192.168.39.241 and MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:10:36.774896  417097 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1030 19:10:36.779096  417097 command_runner.go:130] > 192.168.39.1	host.minikube.internal
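The grep above checks that the guest already resolves host.minikube.internal to the gateway 192.168.39.1; the entry is present, so nothing needs to be appended. A local sketch of the same check against /etc/hosts:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/hosts")
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "192.168.39.1" && fields[1] == "host.minikube.internal" {
			fmt.Println("entry present, nothing to add")
			return
		}
	}
	fmt.Println("entry missing, would append it to /etc/hosts")
}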
	I1030 19:10:36.779217  417097 kubeadm.go:883] updating cluster {Name:multinode-743795 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-743795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.115 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1030 19:10:36.779383  417097 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 19:10:36.779426  417097 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:10:36.820444  417097 command_runner.go:130] > {
	I1030 19:10:36.820477  417097 command_runner.go:130] >   "images": [
	I1030 19:10:36.820484  417097 command_runner.go:130] >     {
	I1030 19:10:36.820496  417097 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1030 19:10:36.820504  417097 command_runner.go:130] >       "repoTags": [
	I1030 19:10:36.820513  417097 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1030 19:10:36.820519  417097 command_runner.go:130] >       ],
	I1030 19:10:36.820525  417097 command_runner.go:130] >       "repoDigests": [
	I1030 19:10:36.820538  417097 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1030 19:10:36.820553  417097 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1030 19:10:36.820558  417097 command_runner.go:130] >       ],
	I1030 19:10:36.820567  417097 command_runner.go:130] >       "size": "94965812",
	I1030 19:10:36.820573  417097 command_runner.go:130] >       "uid": null,
	I1030 19:10:36.820583  417097 command_runner.go:130] >       "username": "",
	I1030 19:10:36.820593  417097 command_runner.go:130] >       "spec": null,
	I1030 19:10:36.820600  417097 command_runner.go:130] >       "pinned": false
	I1030 19:10:36.820609  417097 command_runner.go:130] >     },
	I1030 19:10:36.820614  417097 command_runner.go:130] >     {
	I1030 19:10:36.820624  417097 command_runner.go:130] >       "id": "9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5",
	I1030 19:10:36.820631  417097 command_runner.go:130] >       "repoTags": [
	I1030 19:10:36.820639  417097 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241023-a345ebe4"
	I1030 19:10:36.820648  417097 command_runner.go:130] >       ],
	I1030 19:10:36.820655  417097 command_runner.go:130] >       "repoDigests": [
	I1030 19:10:36.820670  417097 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16",
	I1030 19:10:36.820683  417097 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e39a44bd13d0b4532d0436a1c2fafdd1a8c57fb327770004098162f0bb96132d"
	I1030 19:10:36.820692  417097 command_runner.go:130] >       ],
	I1030 19:10:36.820699  417097 command_runner.go:130] >       "size": "94958644",
	I1030 19:10:36.820708  417097 command_runner.go:130] >       "uid": null,
	I1030 19:10:36.820729  417097 command_runner.go:130] >       "username": "",
	I1030 19:10:36.820740  417097 command_runner.go:130] >       "spec": null,
	I1030 19:10:36.820747  417097 command_runner.go:130] >       "pinned": false
	I1030 19:10:36.820760  417097 command_runner.go:130] >     },
	I1030 19:10:36.820768  417097 command_runner.go:130] >     {
	I1030 19:10:36.820781  417097 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1030 19:10:36.820790  417097 command_runner.go:130] >       "repoTags": [
	I1030 19:10:36.820802  417097 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1030 19:10:36.820811  417097 command_runner.go:130] >       ],
	I1030 19:10:36.820821  417097 command_runner.go:130] >       "repoDigests": [
	I1030 19:10:36.820835  417097 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1030 19:10:36.820849  417097 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1030 19:10:36.820858  417097 command_runner.go:130] >       ],
	I1030 19:10:36.820867  417097 command_runner.go:130] >       "size": "1363676",
	I1030 19:10:36.820876  417097 command_runner.go:130] >       "uid": null,
	I1030 19:10:36.820885  417097 command_runner.go:130] >       "username": "",
	I1030 19:10:36.820894  417097 command_runner.go:130] >       "spec": null,
	I1030 19:10:36.820903  417097 command_runner.go:130] >       "pinned": false
	I1030 19:10:36.820913  417097 command_runner.go:130] >     },
	I1030 19:10:36.820920  417097 command_runner.go:130] >     {
	I1030 19:10:36.820933  417097 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1030 19:10:36.820942  417097 command_runner.go:130] >       "repoTags": [
	I1030 19:10:36.820953  417097 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1030 19:10:36.820961  417097 command_runner.go:130] >       ],
	I1030 19:10:36.820967  417097 command_runner.go:130] >       "repoDigests": [
	I1030 19:10:36.820977  417097 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1030 19:10:36.820994  417097 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1030 19:10:36.821002  417097 command_runner.go:130] >       ],
	I1030 19:10:36.821008  417097 command_runner.go:130] >       "size": "31470524",
	I1030 19:10:36.821013  417097 command_runner.go:130] >       "uid": null,
	I1030 19:10:36.821019  417097 command_runner.go:130] >       "username": "",
	I1030 19:10:36.821023  417097 command_runner.go:130] >       "spec": null,
	I1030 19:10:36.821030  417097 command_runner.go:130] >       "pinned": false
	I1030 19:10:36.821033  417097 command_runner.go:130] >     },
	I1030 19:10:36.821037  417097 command_runner.go:130] >     {
	I1030 19:10:36.821042  417097 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1030 19:10:36.821053  417097 command_runner.go:130] >       "repoTags": [
	I1030 19:10:36.821060  417097 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1030 19:10:36.821064  417097 command_runner.go:130] >       ],
	I1030 19:10:36.821068  417097 command_runner.go:130] >       "repoDigests": [
	I1030 19:10:36.821077  417097 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1030 19:10:36.821086  417097 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1030 19:10:36.821092  417097 command_runner.go:130] >       ],
	I1030 19:10:36.821096  417097 command_runner.go:130] >       "size": "63273227",
	I1030 19:10:36.821100  417097 command_runner.go:130] >       "uid": null,
	I1030 19:10:36.821106  417097 command_runner.go:130] >       "username": "nonroot",
	I1030 19:10:36.821111  417097 command_runner.go:130] >       "spec": null,
	I1030 19:10:36.821117  417097 command_runner.go:130] >       "pinned": false
	I1030 19:10:36.821120  417097 command_runner.go:130] >     },
	I1030 19:10:36.821126  417097 command_runner.go:130] >     {
	I1030 19:10:36.821132  417097 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1030 19:10:36.821138  417097 command_runner.go:130] >       "repoTags": [
	I1030 19:10:36.821142  417097 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1030 19:10:36.821148  417097 command_runner.go:130] >       ],
	I1030 19:10:36.821153  417097 command_runner.go:130] >       "repoDigests": [
	I1030 19:10:36.821162  417097 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1030 19:10:36.821170  417097 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1030 19:10:36.821176  417097 command_runner.go:130] >       ],
	I1030 19:10:36.821182  417097 command_runner.go:130] >       "size": "149009664",
	I1030 19:10:36.821188  417097 command_runner.go:130] >       "uid": {
	I1030 19:10:36.821192  417097 command_runner.go:130] >         "value": "0"
	I1030 19:10:36.821198  417097 command_runner.go:130] >       },
	I1030 19:10:36.821202  417097 command_runner.go:130] >       "username": "",
	I1030 19:10:36.821208  417097 command_runner.go:130] >       "spec": null,
	I1030 19:10:36.821212  417097 command_runner.go:130] >       "pinned": false
	I1030 19:10:36.821218  417097 command_runner.go:130] >     },
	I1030 19:10:36.821221  417097 command_runner.go:130] >     {
	I1030 19:10:36.821228  417097 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1030 19:10:36.821234  417097 command_runner.go:130] >       "repoTags": [
	I1030 19:10:36.821244  417097 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1030 19:10:36.821271  417097 command_runner.go:130] >       ],
	I1030 19:10:36.821285  417097 command_runner.go:130] >       "repoDigests": [
	I1030 19:10:36.821292  417097 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1030 19:10:36.821299  417097 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1030 19:10:36.821308  417097 command_runner.go:130] >       ],
	I1030 19:10:36.821315  417097 command_runner.go:130] >       "size": "95274464",
	I1030 19:10:36.821319  417097 command_runner.go:130] >       "uid": {
	I1030 19:10:36.821325  417097 command_runner.go:130] >         "value": "0"
	I1030 19:10:36.821328  417097 command_runner.go:130] >       },
	I1030 19:10:36.821334  417097 command_runner.go:130] >       "username": "",
	I1030 19:10:36.821339  417097 command_runner.go:130] >       "spec": null,
	I1030 19:10:36.821345  417097 command_runner.go:130] >       "pinned": false
	I1030 19:10:36.821349  417097 command_runner.go:130] >     },
	I1030 19:10:36.821354  417097 command_runner.go:130] >     {
	I1030 19:10:36.821361  417097 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1030 19:10:36.821367  417097 command_runner.go:130] >       "repoTags": [
	I1030 19:10:36.821373  417097 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1030 19:10:36.821378  417097 command_runner.go:130] >       ],
	I1030 19:10:36.821382  417097 command_runner.go:130] >       "repoDigests": [
	I1030 19:10:36.821405  417097 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1030 19:10:36.821415  417097 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1030 19:10:36.821421  417097 command_runner.go:130] >       ],
	I1030 19:10:36.821425  417097 command_runner.go:130] >       "size": "89474374",
	I1030 19:10:36.821431  417097 command_runner.go:130] >       "uid": {
	I1030 19:10:36.821435  417097 command_runner.go:130] >         "value": "0"
	I1030 19:10:36.821440  417097 command_runner.go:130] >       },
	I1030 19:10:36.821445  417097 command_runner.go:130] >       "username": "",
	I1030 19:10:36.821448  417097 command_runner.go:130] >       "spec": null,
	I1030 19:10:36.821452  417097 command_runner.go:130] >       "pinned": false
	I1030 19:10:36.821456  417097 command_runner.go:130] >     },
	I1030 19:10:36.821459  417097 command_runner.go:130] >     {
	I1030 19:10:36.821465  417097 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1030 19:10:36.821473  417097 command_runner.go:130] >       "repoTags": [
	I1030 19:10:36.821478  417097 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1030 19:10:36.821482  417097 command_runner.go:130] >       ],
	I1030 19:10:36.821486  417097 command_runner.go:130] >       "repoDigests": [
	I1030 19:10:36.821492  417097 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1030 19:10:36.821501  417097 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1030 19:10:36.821507  417097 command_runner.go:130] >       ],
	I1030 19:10:36.821512  417097 command_runner.go:130] >       "size": "92783513",
	I1030 19:10:36.821517  417097 command_runner.go:130] >       "uid": null,
	I1030 19:10:36.821521  417097 command_runner.go:130] >       "username": "",
	I1030 19:10:36.821529  417097 command_runner.go:130] >       "spec": null,
	I1030 19:10:36.821533  417097 command_runner.go:130] >       "pinned": false
	I1030 19:10:36.821539  417097 command_runner.go:130] >     },
	I1030 19:10:36.821550  417097 command_runner.go:130] >     {
	I1030 19:10:36.821561  417097 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1030 19:10:36.821567  417097 command_runner.go:130] >       "repoTags": [
	I1030 19:10:36.821572  417097 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1030 19:10:36.821578  417097 command_runner.go:130] >       ],
	I1030 19:10:36.821582  417097 command_runner.go:130] >       "repoDigests": [
	I1030 19:10:36.821591  417097 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1030 19:10:36.821601  417097 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1030 19:10:36.821606  417097 command_runner.go:130] >       ],
	I1030 19:10:36.821611  417097 command_runner.go:130] >       "size": "68457798",
	I1030 19:10:36.821616  417097 command_runner.go:130] >       "uid": {
	I1030 19:10:36.821620  417097 command_runner.go:130] >         "value": "0"
	I1030 19:10:36.821626  417097 command_runner.go:130] >       },
	I1030 19:10:36.821629  417097 command_runner.go:130] >       "username": "",
	I1030 19:10:36.821633  417097 command_runner.go:130] >       "spec": null,
	I1030 19:10:36.821640  417097 command_runner.go:130] >       "pinned": false
	I1030 19:10:36.821643  417097 command_runner.go:130] >     },
	I1030 19:10:36.821647  417097 command_runner.go:130] >     {
	I1030 19:10:36.821653  417097 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1030 19:10:36.821659  417097 command_runner.go:130] >       "repoTags": [
	I1030 19:10:36.821668  417097 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1030 19:10:36.821675  417097 command_runner.go:130] >       ],
	I1030 19:10:36.821682  417097 command_runner.go:130] >       "repoDigests": [
	I1030 19:10:36.821695  417097 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1030 19:10:36.821709  417097 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1030 19:10:36.821717  417097 command_runner.go:130] >       ],
	I1030 19:10:36.821726  417097 command_runner.go:130] >       "size": "742080",
	I1030 19:10:36.821732  417097 command_runner.go:130] >       "uid": {
	I1030 19:10:36.821741  417097 command_runner.go:130] >         "value": "65535"
	I1030 19:10:36.821748  417097 command_runner.go:130] >       },
	I1030 19:10:36.821755  417097 command_runner.go:130] >       "username": "",
	I1030 19:10:36.821763  417097 command_runner.go:130] >       "spec": null,
	I1030 19:10:36.821772  417097 command_runner.go:130] >       "pinned": true
	I1030 19:10:36.821778  417097 command_runner.go:130] >     }
	I1030 19:10:36.821785  417097 command_runner.go:130] >   ]
	I1030 19:10:36.821789  417097 command_runner.go:130] > }
	I1030 19:10:36.821996  417097 crio.go:514] all images are preloaded for cri-o runtime.
	I1030 19:10:36.822008  417097 crio.go:433] Images already preloaded, skipping extraction
	I1030 19:10:36.822057  417097 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:10:36.855958  417097 command_runner.go:130] > {
	I1030 19:10:36.855984  417097 command_runner.go:130] >   "images": [
	I1030 19:10:36.855988  417097 command_runner.go:130] >     {
	I1030 19:10:36.855996  417097 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1030 19:10:36.856002  417097 command_runner.go:130] >       "repoTags": [
	I1030 19:10:36.856008  417097 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1030 19:10:36.856012  417097 command_runner.go:130] >       ],
	I1030 19:10:36.856018  417097 command_runner.go:130] >       "repoDigests": [
	I1030 19:10:36.856029  417097 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1030 19:10:36.856037  417097 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1030 19:10:36.856040  417097 command_runner.go:130] >       ],
	I1030 19:10:36.856044  417097 command_runner.go:130] >       "size": "94965812",
	I1030 19:10:36.856051  417097 command_runner.go:130] >       "uid": null,
	I1030 19:10:36.856055  417097 command_runner.go:130] >       "username": "",
	I1030 19:10:36.856064  417097 command_runner.go:130] >       "spec": null,
	I1030 19:10:36.856068  417097 command_runner.go:130] >       "pinned": false
	I1030 19:10:36.856074  417097 command_runner.go:130] >     },
	I1030 19:10:36.856077  417097 command_runner.go:130] >     {
	I1030 19:10:36.856083  417097 command_runner.go:130] >       "id": "9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5",
	I1030 19:10:36.856087  417097 command_runner.go:130] >       "repoTags": [
	I1030 19:10:36.856092  417097 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241023-a345ebe4"
	I1030 19:10:36.856098  417097 command_runner.go:130] >       ],
	I1030 19:10:36.856109  417097 command_runner.go:130] >       "repoDigests": [
	I1030 19:10:36.856119  417097 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16",
	I1030 19:10:36.856126  417097 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e39a44bd13d0b4532d0436a1c2fafdd1a8c57fb327770004098162f0bb96132d"
	I1030 19:10:36.856132  417097 command_runner.go:130] >       ],
	I1030 19:10:36.856137  417097 command_runner.go:130] >       "size": "94958644",
	I1030 19:10:36.856143  417097 command_runner.go:130] >       "uid": null,
	I1030 19:10:36.856153  417097 command_runner.go:130] >       "username": "",
	I1030 19:10:36.856158  417097 command_runner.go:130] >       "spec": null,
	I1030 19:10:36.856162  417097 command_runner.go:130] >       "pinned": false
	I1030 19:10:36.856168  417097 command_runner.go:130] >     },
	I1030 19:10:36.856171  417097 command_runner.go:130] >     {
	I1030 19:10:36.856177  417097 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1030 19:10:36.856181  417097 command_runner.go:130] >       "repoTags": [
	I1030 19:10:36.856187  417097 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1030 19:10:36.856193  417097 command_runner.go:130] >       ],
	I1030 19:10:36.856196  417097 command_runner.go:130] >       "repoDigests": [
	I1030 19:10:36.856203  417097 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1030 19:10:36.856210  417097 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1030 19:10:36.856216  417097 command_runner.go:130] >       ],
	I1030 19:10:36.856221  417097 command_runner.go:130] >       "size": "1363676",
	I1030 19:10:36.856225  417097 command_runner.go:130] >       "uid": null,
	I1030 19:10:36.856231  417097 command_runner.go:130] >       "username": "",
	I1030 19:10:36.856235  417097 command_runner.go:130] >       "spec": null,
	I1030 19:10:36.856239  417097 command_runner.go:130] >       "pinned": false
	I1030 19:10:36.856243  417097 command_runner.go:130] >     },
	I1030 19:10:36.856246  417097 command_runner.go:130] >     {
	I1030 19:10:36.856252  417097 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1030 19:10:36.856259  417097 command_runner.go:130] >       "repoTags": [
	I1030 19:10:36.856269  417097 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1030 19:10:36.856275  417097 command_runner.go:130] >       ],
	I1030 19:10:36.856279  417097 command_runner.go:130] >       "repoDigests": [
	I1030 19:10:36.856287  417097 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1030 19:10:36.856301  417097 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1030 19:10:36.856310  417097 command_runner.go:130] >       ],
	I1030 19:10:36.856315  417097 command_runner.go:130] >       "size": "31470524",
	I1030 19:10:36.856321  417097 command_runner.go:130] >       "uid": null,
	I1030 19:10:36.856325  417097 command_runner.go:130] >       "username": "",
	I1030 19:10:36.856332  417097 command_runner.go:130] >       "spec": null,
	I1030 19:10:36.856336  417097 command_runner.go:130] >       "pinned": false
	I1030 19:10:36.856342  417097 command_runner.go:130] >     },
	I1030 19:10:36.856345  417097 command_runner.go:130] >     {
	I1030 19:10:36.856351  417097 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1030 19:10:36.856357  417097 command_runner.go:130] >       "repoTags": [
	I1030 19:10:36.856362  417097 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1030 19:10:36.856368  417097 command_runner.go:130] >       ],
	I1030 19:10:36.856372  417097 command_runner.go:130] >       "repoDigests": [
	I1030 19:10:36.856380  417097 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1030 19:10:36.856391  417097 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1030 19:10:36.856396  417097 command_runner.go:130] >       ],
	I1030 19:10:36.856400  417097 command_runner.go:130] >       "size": "63273227",
	I1030 19:10:36.856404  417097 command_runner.go:130] >       "uid": null,
	I1030 19:10:36.856408  417097 command_runner.go:130] >       "username": "nonroot",
	I1030 19:10:36.856412  417097 command_runner.go:130] >       "spec": null,
	I1030 19:10:36.856418  417097 command_runner.go:130] >       "pinned": false
	I1030 19:10:36.856422  417097 command_runner.go:130] >     },
	I1030 19:10:36.856427  417097 command_runner.go:130] >     {
	I1030 19:10:36.856434  417097 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1030 19:10:36.856440  417097 command_runner.go:130] >       "repoTags": [
	I1030 19:10:36.856445  417097 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1030 19:10:36.856451  417097 command_runner.go:130] >       ],
	I1030 19:10:36.856455  417097 command_runner.go:130] >       "repoDigests": [
	I1030 19:10:36.856465  417097 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1030 19:10:36.856475  417097 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1030 19:10:36.856478  417097 command_runner.go:130] >       ],
	I1030 19:10:36.856483  417097 command_runner.go:130] >       "size": "149009664",
	I1030 19:10:36.856489  417097 command_runner.go:130] >       "uid": {
	I1030 19:10:36.856493  417097 command_runner.go:130] >         "value": "0"
	I1030 19:10:36.856498  417097 command_runner.go:130] >       },
	I1030 19:10:36.856502  417097 command_runner.go:130] >       "username": "",
	I1030 19:10:36.856507  417097 command_runner.go:130] >       "spec": null,
	I1030 19:10:36.856511  417097 command_runner.go:130] >       "pinned": false
	I1030 19:10:36.856514  417097 command_runner.go:130] >     },
	I1030 19:10:36.856518  417097 command_runner.go:130] >     {
	I1030 19:10:36.856524  417097 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1030 19:10:36.856528  417097 command_runner.go:130] >       "repoTags": [
	I1030 19:10:36.856533  417097 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1030 19:10:36.856537  417097 command_runner.go:130] >       ],
	I1030 19:10:36.856542  417097 command_runner.go:130] >       "repoDigests": [
	I1030 19:10:36.856550  417097 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1030 19:10:36.856559  417097 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1030 19:10:36.856562  417097 command_runner.go:130] >       ],
	I1030 19:10:36.856567  417097 command_runner.go:130] >       "size": "95274464",
	I1030 19:10:36.856571  417097 command_runner.go:130] >       "uid": {
	I1030 19:10:36.856575  417097 command_runner.go:130] >         "value": "0"
	I1030 19:10:36.856578  417097 command_runner.go:130] >       },
	I1030 19:10:36.856582  417097 command_runner.go:130] >       "username": "",
	I1030 19:10:36.856586  417097 command_runner.go:130] >       "spec": null,
	I1030 19:10:36.856590  417097 command_runner.go:130] >       "pinned": false
	I1030 19:10:36.856594  417097 command_runner.go:130] >     },
	I1030 19:10:36.856599  417097 command_runner.go:130] >     {
	I1030 19:10:36.856605  417097 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1030 19:10:36.856609  417097 command_runner.go:130] >       "repoTags": [
	I1030 19:10:36.856616  417097 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1030 19:10:36.856621  417097 command_runner.go:130] >       ],
	I1030 19:10:36.856625  417097 command_runner.go:130] >       "repoDigests": [
	I1030 19:10:36.856638  417097 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1030 19:10:36.856647  417097 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1030 19:10:36.856651  417097 command_runner.go:130] >       ],
	I1030 19:10:36.856657  417097 command_runner.go:130] >       "size": "89474374",
	I1030 19:10:36.856662  417097 command_runner.go:130] >       "uid": {
	I1030 19:10:36.856668  417097 command_runner.go:130] >         "value": "0"
	I1030 19:10:36.856671  417097 command_runner.go:130] >       },
	I1030 19:10:36.856677  417097 command_runner.go:130] >       "username": "",
	I1030 19:10:36.856681  417097 command_runner.go:130] >       "spec": null,
	I1030 19:10:36.856687  417097 command_runner.go:130] >       "pinned": false
	I1030 19:10:36.856690  417097 command_runner.go:130] >     },
	I1030 19:10:36.856694  417097 command_runner.go:130] >     {
	I1030 19:10:36.856700  417097 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1030 19:10:36.856706  417097 command_runner.go:130] >       "repoTags": [
	I1030 19:10:36.856711  417097 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1030 19:10:36.856714  417097 command_runner.go:130] >       ],
	I1030 19:10:36.856718  417097 command_runner.go:130] >       "repoDigests": [
	I1030 19:10:36.856725  417097 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1030 19:10:36.856734  417097 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1030 19:10:36.856738  417097 command_runner.go:130] >       ],
	I1030 19:10:36.856743  417097 command_runner.go:130] >       "size": "92783513",
	I1030 19:10:36.856749  417097 command_runner.go:130] >       "uid": null,
	I1030 19:10:36.856753  417097 command_runner.go:130] >       "username": "",
	I1030 19:10:36.856757  417097 command_runner.go:130] >       "spec": null,
	I1030 19:10:36.856763  417097 command_runner.go:130] >       "pinned": false
	I1030 19:10:36.856766  417097 command_runner.go:130] >     },
	I1030 19:10:36.856772  417097 command_runner.go:130] >     {
	I1030 19:10:36.856778  417097 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1030 19:10:36.856785  417097 command_runner.go:130] >       "repoTags": [
	I1030 19:10:36.856789  417097 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1030 19:10:36.856795  417097 command_runner.go:130] >       ],
	I1030 19:10:36.856799  417097 command_runner.go:130] >       "repoDigests": [
	I1030 19:10:36.856808  417097 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1030 19:10:36.856817  417097 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1030 19:10:36.856821  417097 command_runner.go:130] >       ],
	I1030 19:10:36.856825  417097 command_runner.go:130] >       "size": "68457798",
	I1030 19:10:36.856830  417097 command_runner.go:130] >       "uid": {
	I1030 19:10:36.856835  417097 command_runner.go:130] >         "value": "0"
	I1030 19:10:36.856841  417097 command_runner.go:130] >       },
	I1030 19:10:36.856845  417097 command_runner.go:130] >       "username": "",
	I1030 19:10:36.856849  417097 command_runner.go:130] >       "spec": null,
	I1030 19:10:36.856852  417097 command_runner.go:130] >       "pinned": false
	I1030 19:10:36.856856  417097 command_runner.go:130] >     },
	I1030 19:10:36.856860  417097 command_runner.go:130] >     {
	I1030 19:10:36.856866  417097 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1030 19:10:36.856873  417097 command_runner.go:130] >       "repoTags": [
	I1030 19:10:36.856877  417097 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1030 19:10:36.856881  417097 command_runner.go:130] >       ],
	I1030 19:10:36.856885  417097 command_runner.go:130] >       "repoDigests": [
	I1030 19:10:36.856891  417097 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1030 19:10:36.856899  417097 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1030 19:10:36.856905  417097 command_runner.go:130] >       ],
	I1030 19:10:36.856909  417097 command_runner.go:130] >       "size": "742080",
	I1030 19:10:36.856913  417097 command_runner.go:130] >       "uid": {
	I1030 19:10:36.856916  417097 command_runner.go:130] >         "value": "65535"
	I1030 19:10:36.856920  417097 command_runner.go:130] >       },
	I1030 19:10:36.856924  417097 command_runner.go:130] >       "username": "",
	I1030 19:10:36.856929  417097 command_runner.go:130] >       "spec": null,
	I1030 19:10:36.856933  417097 command_runner.go:130] >       "pinned": true
	I1030 19:10:36.856939  417097 command_runner.go:130] >     }
	I1030 19:10:36.856942  417097 command_runner.go:130] >   ]
	I1030 19:10:36.856945  417097 command_runner.go:130] > }
	I1030 19:10:36.857072  417097 crio.go:514] all images are preloaded for cri-o runtime.
	I1030 19:10:36.857084  417097 cache_images.go:84] Images are preloaded, skipping loading
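
For reference, a minimal sketch (not minikube's actual implementation) of how the image listing shown above could be parsed to confirm that the expected images are already present before skipping the load step. The top-level "images" key and the expected-tags list are assumptions for illustration; the per-image fields (id, repoTags, repoDigests, size, pinned) follow the JSON fields visible in the log.

package main

import (
	"encoding/json"
	"fmt"
)

// imageList mirrors the shape of the listing above: each entry carries an id,
// repoTags, repoDigests, a size string, and a pinned flag.
type imageList struct {
	Images []struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Pinned      bool     `json:"pinned"`
	} `json:"images"`
}

// preloaded reports whether every expected tag appears in the raw JSON listing.
func preloaded(raw []byte, expected []string) (bool, error) {
	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range expected {
		if !have[want] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	// Truncated sample in the same shape as the log output above.
	raw := []byte(`{"images":[{"id":"505d571f5fd5","repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"repoDigests":[],"size":"92783513","pinned":false}]}`)
	ok, err := preloaded(raw, []string{"registry.k8s.io/kube-proxy:v1.31.2"})
	fmt.Println(ok, err) // true <nil>
}
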
	I1030 19:10:36.857092  417097 kubeadm.go:934] updating node { 192.168.39.241 8443 v1.31.2 crio true true} ...
	I1030 19:10:36.857223  417097 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-743795 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.241
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:multinode-743795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
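
For context, a minimal sketch, not minikube's actual code, of how a kubelet systemd drop-in like the one logged above could be rendered from a template. The NodeConfig type and its field names are hypothetical; the Kubernetes version, hostname override, and node IP are taken from the values in the log.

package main

import (
	"os"
	"text/template"
)

// NodeConfig is a hypothetical holder for the values substituted into the unit.
type NodeConfig struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
}

const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
	cfg := NodeConfig{
		KubernetesVersion: "v1.31.2",
		NodeName:          "multinode-743795",
		NodeIP:            "192.168.39.241",
	}
	// Render the drop-in to stdout; a real caller would write it under
	// /etc/systemd/system/kubelet.service.d/ and reload systemd.
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
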
	I1030 19:10:36.857332  417097 ssh_runner.go:195] Run: crio config
	I1030 19:10:36.898351  417097 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1030 19:10:36.898398  417097 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1030 19:10:36.898409  417097 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1030 19:10:36.898414  417097 command_runner.go:130] > #
	I1030 19:10:36.898425  417097 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1030 19:10:36.898435  417097 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1030 19:10:36.898448  417097 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1030 19:10:36.898466  417097 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1030 19:10:36.898477  417097 command_runner.go:130] > # reload'.
	I1030 19:10:36.898500  417097 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1030 19:10:36.898518  417097 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1030 19:10:36.898529  417097 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1030 19:10:36.898544  417097 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1030 19:10:36.898554  417097 command_runner.go:130] > [crio]
	I1030 19:10:36.898566  417097 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1030 19:10:36.898576  417097 command_runner.go:130] > # containers images, in this directory.
	I1030 19:10:36.898583  417097 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1030 19:10:36.898597  417097 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1030 19:10:36.898610  417097 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1030 19:10:36.898625  417097 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1030 19:10:36.898635  417097 command_runner.go:130] > # imagestore = ""
	I1030 19:10:36.898646  417097 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1030 19:10:36.898659  417097 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1030 19:10:36.898670  417097 command_runner.go:130] > storage_driver = "overlay"
	I1030 19:10:36.898683  417097 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1030 19:10:36.898694  417097 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1030 19:10:36.898701  417097 command_runner.go:130] > storage_option = [
	I1030 19:10:36.898712  417097 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1030 19:10:36.898719  417097 command_runner.go:130] > ]
	I1030 19:10:36.898731  417097 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1030 19:10:36.898745  417097 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1030 19:10:36.898755  417097 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1030 19:10:36.898766  417097 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1030 19:10:36.898779  417097 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1030 19:10:36.898790  417097 command_runner.go:130] > # always happen on a node reboot
	I1030 19:10:36.898801  417097 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1030 19:10:36.898821  417097 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1030 19:10:36.898831  417097 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1030 19:10:36.898838  417097 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1030 19:10:36.898847  417097 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1030 19:10:36.898861  417097 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1030 19:10:36.898877  417097 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1030 19:10:36.898887  417097 command_runner.go:130] > # internal_wipe = true
	I1030 19:10:36.898900  417097 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1030 19:10:36.898913  417097 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1030 19:10:36.898923  417097 command_runner.go:130] > # internal_repair = false
	I1030 19:10:36.898931  417097 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1030 19:10:36.898946  417097 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1030 19:10:36.898957  417097 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1030 19:10:36.898968  417097 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1030 19:10:36.898981  417097 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1030 19:10:36.898990  417097 command_runner.go:130] > [crio.api]
	I1030 19:10:36.898999  417097 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1030 19:10:36.899011  417097 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1030 19:10:36.899026  417097 command_runner.go:130] > # IP address on which the stream server will listen.
	I1030 19:10:36.899036  417097 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1030 19:10:36.899049  417097 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1030 19:10:36.899058  417097 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1030 19:10:36.899062  417097 command_runner.go:130] > # stream_port = "0"
	I1030 19:10:36.899073  417097 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1030 19:10:36.899083  417097 command_runner.go:130] > # stream_enable_tls = false
	I1030 19:10:36.899093  417097 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1030 19:10:36.899105  417097 command_runner.go:130] > # stream_idle_timeout = ""
	I1030 19:10:36.899116  417097 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1030 19:10:36.899130  417097 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1030 19:10:36.899138  417097 command_runner.go:130] > # minutes.
	I1030 19:10:36.899145  417097 command_runner.go:130] > # stream_tls_cert = ""
	I1030 19:10:36.899157  417097 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1030 19:10:36.899171  417097 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1030 19:10:36.899182  417097 command_runner.go:130] > # stream_tls_key = ""
	I1030 19:10:36.899192  417097 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1030 19:10:36.899205  417097 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1030 19:10:36.899224  417097 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1030 19:10:36.899233  417097 command_runner.go:130] > # stream_tls_ca = ""
	I1030 19:10:36.899245  417097 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1030 19:10:36.899254  417097 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1030 19:10:36.899263  417097 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1030 19:10:36.899274  417097 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1030 19:10:36.899288  417097 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1030 19:10:36.899297  417097 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1030 19:10:36.899308  417097 command_runner.go:130] > [crio.runtime]
	I1030 19:10:36.899320  417097 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1030 19:10:36.899331  417097 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1030 19:10:36.899339  417097 command_runner.go:130] > # "nofile=1024:2048"
	I1030 19:10:36.899346  417097 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1030 19:10:36.899355  417097 command_runner.go:130] > # default_ulimits = [
	I1030 19:10:36.899369  417097 command_runner.go:130] > # ]
	I1030 19:10:36.899380  417097 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1030 19:10:36.899390  417097 command_runner.go:130] > # no_pivot = false
	I1030 19:10:36.899399  417097 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1030 19:10:36.899413  417097 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1030 19:10:36.899424  417097 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1030 19:10:36.899437  417097 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1030 19:10:36.899454  417097 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1030 19:10:36.899468  417097 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1030 19:10:36.899480  417097 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1030 19:10:36.899489  417097 command_runner.go:130] > # Cgroup setting for conmon
	I1030 19:10:36.899501  417097 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1030 19:10:36.899510  417097 command_runner.go:130] > conmon_cgroup = "pod"
	I1030 19:10:36.899520  417097 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1030 19:10:36.899533  417097 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1030 19:10:36.899544  417097 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1030 19:10:36.899553  417097 command_runner.go:130] > conmon_env = [
	I1030 19:10:36.899565  417097 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1030 19:10:36.899577  417097 command_runner.go:130] > ]
	I1030 19:10:36.899588  417097 command_runner.go:130] > # Additional environment variables to set for all the
	I1030 19:10:36.899600  417097 command_runner.go:130] > # containers. These are overridden if set in the
	I1030 19:10:36.899612  417097 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1030 19:10:36.899621  417097 command_runner.go:130] > # default_env = [
	I1030 19:10:36.899626  417097 command_runner.go:130] > # ]
	I1030 19:10:36.899640  417097 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1030 19:10:36.899653  417097 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1030 19:10:36.899663  417097 command_runner.go:130] > # selinux = false
	I1030 19:10:36.899674  417097 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1030 19:10:36.899688  417097 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1030 19:10:36.899699  417097 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1030 19:10:36.899707  417097 command_runner.go:130] > # seccomp_profile = ""
	I1030 19:10:36.899718  417097 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1030 19:10:36.899732  417097 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1030 19:10:36.899744  417097 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1030 19:10:36.899754  417097 command_runner.go:130] > # which might increase security.
	I1030 19:10:36.899765  417097 command_runner.go:130] > # This option is currently deprecated,
	I1030 19:10:36.899777  417097 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1030 19:10:36.899787  417097 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1030 19:10:36.899801  417097 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1030 19:10:36.899814  417097 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1030 19:10:36.899828  417097 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1030 19:10:36.899841  417097 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1030 19:10:36.899852  417097 command_runner.go:130] > # This option supports live configuration reload.
	I1030 19:10:36.899862  417097 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1030 19:10:36.899871  417097 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1030 19:10:36.899881  417097 command_runner.go:130] > # the cgroup blockio controller.
	I1030 19:10:36.899888  417097 command_runner.go:130] > # blockio_config_file = ""
	I1030 19:10:36.899901  417097 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1030 19:10:36.899909  417097 command_runner.go:130] > # blockio parameters.
	I1030 19:10:36.899919  417097 command_runner.go:130] > # blockio_reload = false
	I1030 19:10:36.899929  417097 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1030 19:10:36.899939  417097 command_runner.go:130] > # irqbalance daemon.
	I1030 19:10:36.899947  417097 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1030 19:10:36.899960  417097 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1030 19:10:36.899974  417097 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1030 19:10:36.899986  417097 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1030 19:10:36.899998  417097 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1030 19:10:36.900011  417097 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1030 19:10:36.900023  417097 command_runner.go:130] > # This option supports live configuration reload.
	I1030 19:10:36.900032  417097 command_runner.go:130] > # rdt_config_file = ""
	I1030 19:10:36.900041  417097 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1030 19:10:36.900051  417097 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1030 19:10:36.900071  417097 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1030 19:10:36.900083  417097 command_runner.go:130] > # separate_pull_cgroup = ""
	I1030 19:10:36.900094  417097 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1030 19:10:36.900106  417097 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1030 19:10:36.900116  417097 command_runner.go:130] > # will be added.
	I1030 19:10:36.900123  417097 command_runner.go:130] > # default_capabilities = [
	I1030 19:10:36.900135  417097 command_runner.go:130] > # 	"CHOWN",
	I1030 19:10:36.900141  417097 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1030 19:10:36.900147  417097 command_runner.go:130] > # 	"FSETID",
	I1030 19:10:36.900156  417097 command_runner.go:130] > # 	"FOWNER",
	I1030 19:10:36.900163  417097 command_runner.go:130] > # 	"SETGID",
	I1030 19:10:36.900171  417097 command_runner.go:130] > # 	"SETUID",
	I1030 19:10:36.900175  417097 command_runner.go:130] > # 	"SETPCAP",
	I1030 19:10:36.900182  417097 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1030 19:10:36.900186  417097 command_runner.go:130] > # 	"KILL",
	I1030 19:10:36.900189  417097 command_runner.go:130] > # ]
	I1030 19:10:36.900198  417097 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1030 19:10:36.900211  417097 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1030 19:10:36.900222  417097 command_runner.go:130] > # add_inheritable_capabilities = false
	I1030 19:10:36.900234  417097 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1030 19:10:36.900247  417097 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1030 19:10:36.900257  417097 command_runner.go:130] > default_sysctls = [
	I1030 19:10:36.900265  417097 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1030 19:10:36.900272  417097 command_runner.go:130] > ]
	I1030 19:10:36.900280  417097 command_runner.go:130] > # List of devices on the host that a
	I1030 19:10:36.900292  417097 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1030 19:10:36.900299  417097 command_runner.go:130] > # allowed_devices = [
	I1030 19:10:36.900303  417097 command_runner.go:130] > # 	"/dev/fuse",
	I1030 19:10:36.900309  417097 command_runner.go:130] > # ]
	I1030 19:10:36.900317  417097 command_runner.go:130] > # List of additional devices. specified as
	I1030 19:10:36.900332  417097 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1030 19:10:36.900345  417097 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1030 19:10:36.900361  417097 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1030 19:10:36.900372  417097 command_runner.go:130] > # additional_devices = [
	I1030 19:10:36.900376  417097 command_runner.go:130] > # ]
	I1030 19:10:36.900384  417097 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1030 19:10:36.900394  417097 command_runner.go:130] > # cdi_spec_dirs = [
	I1030 19:10:36.900401  417097 command_runner.go:130] > # 	"/etc/cdi",
	I1030 19:10:36.900410  417097 command_runner.go:130] > # 	"/var/run/cdi",
	I1030 19:10:36.900416  417097 command_runner.go:130] > # ]
	I1030 19:10:36.900429  417097 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1030 19:10:36.900442  417097 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1030 19:10:36.900453  417097 command_runner.go:130] > # Defaults to false.
	I1030 19:10:36.900464  417097 command_runner.go:130] > # device_ownership_from_security_context = false
	I1030 19:10:36.900478  417097 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1030 19:10:36.900488  417097 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1030 19:10:36.900497  417097 command_runner.go:130] > # hooks_dir = [
	I1030 19:10:36.900505  417097 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1030 19:10:36.900513  417097 command_runner.go:130] > # ]
	I1030 19:10:36.900522  417097 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1030 19:10:36.900535  417097 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1030 19:10:36.900545  417097 command_runner.go:130] > # its default mounts from the following two files:
	I1030 19:10:36.900549  417097 command_runner.go:130] > #
	I1030 19:10:36.900557  417097 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1030 19:10:36.900593  417097 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1030 19:10:36.900607  417097 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1030 19:10:36.900612  417097 command_runner.go:130] > #
	I1030 19:10:36.900622  417097 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1030 19:10:36.900632  417097 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1030 19:10:36.900641  417097 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1030 19:10:36.900650  417097 command_runner.go:130] > #      only add mounts it finds in this file.
	I1030 19:10:36.900659  417097 command_runner.go:130] > #
	I1030 19:10:36.900667  417097 command_runner.go:130] > # default_mounts_file = ""
	I1030 19:10:36.900679  417097 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1030 19:10:36.900694  417097 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1030 19:10:36.900703  417097 command_runner.go:130] > pids_limit = 1024
	I1030 19:10:36.900713  417097 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1030 19:10:36.900727  417097 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1030 19:10:36.900740  417097 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1030 19:10:36.900757  417097 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1030 19:10:36.900766  417097 command_runner.go:130] > # log_size_max = -1
	I1030 19:10:36.900777  417097 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1030 19:10:36.900787  417097 command_runner.go:130] > # log_to_journald = false
	I1030 19:10:36.900800  417097 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1030 19:10:36.900807  417097 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1030 19:10:36.900815  417097 command_runner.go:130] > # Path to directory for container attach sockets.
	I1030 19:10:36.900827  417097 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1030 19:10:36.900839  417097 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1030 19:10:36.900848  417097 command_runner.go:130] > # bind_mount_prefix = ""
	I1030 19:10:36.900857  417097 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1030 19:10:36.900867  417097 command_runner.go:130] > # read_only = false
	I1030 19:10:36.900877  417097 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1030 19:10:36.900889  417097 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1030 19:10:36.900896  417097 command_runner.go:130] > # live configuration reload.
	I1030 19:10:36.900907  417097 command_runner.go:130] > # log_level = "info"
	I1030 19:10:36.900916  417097 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1030 19:10:36.900928  417097 command_runner.go:130] > # This option supports live configuration reload.
	I1030 19:10:36.900934  417097 command_runner.go:130] > # log_filter = ""
	I1030 19:10:36.900944  417097 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1030 19:10:36.900955  417097 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1030 19:10:36.900965  417097 command_runner.go:130] > # separated by comma.
	I1030 19:10:36.900981  417097 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1030 19:10:36.900991  417097 command_runner.go:130] > # uid_mappings = ""
	I1030 19:10:36.901001  417097 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1030 19:10:36.901013  417097 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1030 19:10:36.901024  417097 command_runner.go:130] > # separated by comma.
	I1030 19:10:36.901038  417097 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1030 19:10:36.901047  417097 command_runner.go:130] > # gid_mappings = ""
	I1030 19:10:36.901060  417097 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1030 19:10:36.901073  417097 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1030 19:10:36.901087  417097 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1030 19:10:36.901103  417097 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1030 19:10:36.901114  417097 command_runner.go:130] > # minimum_mappable_uid = -1
	I1030 19:10:36.901127  417097 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1030 19:10:36.901139  417097 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1030 19:10:36.901159  417097 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1030 19:10:36.901179  417097 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1030 19:10:36.901190  417097 command_runner.go:130] > # minimum_mappable_gid = -1
	I1030 19:10:36.901200  417097 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1030 19:10:36.901214  417097 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1030 19:10:36.901226  417097 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1030 19:10:36.901235  417097 command_runner.go:130] > # ctr_stop_timeout = 30
	I1030 19:10:36.901245  417097 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1030 19:10:36.901255  417097 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1030 19:10:36.901260  417097 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1030 19:10:36.901270  417097 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1030 19:10:36.901274  417097 command_runner.go:130] > drop_infra_ctr = false
	I1030 19:10:36.901281  417097 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1030 19:10:36.901286  417097 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1030 19:10:36.901294  417097 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1030 19:10:36.901298  417097 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1030 19:10:36.901307  417097 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1030 19:10:36.901312  417097 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1030 19:10:36.901318  417097 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1030 19:10:36.901325  417097 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1030 19:10:36.901331  417097 command_runner.go:130] > # shared_cpuset = ""
	I1030 19:10:36.901337  417097 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1030 19:10:36.901344  417097 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1030 19:10:36.901349  417097 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1030 19:10:36.901361  417097 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1030 19:10:36.901369  417097 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1030 19:10:36.901375  417097 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1030 19:10:36.901383  417097 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1030 19:10:36.901387  417097 command_runner.go:130] > # enable_criu_support = false
	I1030 19:10:36.901394  417097 command_runner.go:130] > # Enable/disable the generation of the container,
	I1030 19:10:36.901400  417097 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1030 19:10:36.901405  417097 command_runner.go:130] > # enable_pod_events = false
	I1030 19:10:36.901411  417097 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1030 19:10:36.901425  417097 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1030 19:10:36.901431  417097 command_runner.go:130] > # default_runtime = "runc"
	I1030 19:10:36.901436  417097 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1030 19:10:36.901443  417097 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1030 19:10:36.901454  417097 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1030 19:10:36.901461  417097 command_runner.go:130] > # creation as a file is not desired either.
	I1030 19:10:36.901470  417097 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1030 19:10:36.901477  417097 command_runner.go:130] > # the hostname is being managed dynamically.
	I1030 19:10:36.901482  417097 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1030 19:10:36.901485  417097 command_runner.go:130] > # ]
	I1030 19:10:36.901491  417097 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1030 19:10:36.901500  417097 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1030 19:10:36.901506  417097 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1030 19:10:36.901513  417097 command_runner.go:130] > # Each entry in the table should follow the format:
	I1030 19:10:36.901516  417097 command_runner.go:130] > #
	I1030 19:10:36.901521  417097 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1030 19:10:36.901525  417097 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1030 19:10:36.901551  417097 command_runner.go:130] > # runtime_type = "oci"
	I1030 19:10:36.901558  417097 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1030 19:10:36.901562  417097 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1030 19:10:36.901569  417097 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1030 19:10:36.901574  417097 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1030 19:10:36.901578  417097 command_runner.go:130] > # monitor_env = []
	I1030 19:10:36.901583  417097 command_runner.go:130] > # privileged_without_host_devices = false
	I1030 19:10:36.901588  417097 command_runner.go:130] > # allowed_annotations = []
	I1030 19:10:36.901599  417097 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1030 19:10:36.901605  417097 command_runner.go:130] > # Where:
	I1030 19:10:36.901611  417097 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1030 19:10:36.901619  417097 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1030 19:10:36.901625  417097 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1030 19:10:36.901633  417097 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1030 19:10:36.901637  417097 command_runner.go:130] > #   in $PATH.
	I1030 19:10:36.901645  417097 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1030 19:10:36.901650  417097 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1030 19:10:36.901659  417097 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1030 19:10:36.901662  417097 command_runner.go:130] > #   state.
	I1030 19:10:36.901668  417097 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1030 19:10:36.901675  417097 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1030 19:10:36.901683  417097 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1030 19:10:36.901688  417097 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1030 19:10:36.901694  417097 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1030 19:10:36.901701  417097 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1030 19:10:36.901708  417097 command_runner.go:130] > #   The currently recognized values are:
	I1030 19:10:36.901714  417097 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1030 19:10:36.901723  417097 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1030 19:10:36.901729  417097 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1030 19:10:36.901738  417097 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1030 19:10:36.901745  417097 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1030 19:10:36.901753  417097 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1030 19:10:36.901759  417097 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1030 19:10:36.901768  417097 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1030 19:10:36.901775  417097 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1030 19:10:36.901783  417097 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1030 19:10:36.901787  417097 command_runner.go:130] > #   deprecated option "conmon".
	I1030 19:10:36.901794  417097 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1030 19:10:36.901801  417097 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1030 19:10:36.901808  417097 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1030 19:10:36.901815  417097 command_runner.go:130] > #   should be moved to the container's cgroup
	I1030 19:10:36.901821  417097 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1030 19:10:36.901828  417097 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1030 19:10:36.901834  417097 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1030 19:10:36.901841  417097 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1030 19:10:36.901844  417097 command_runner.go:130] > #
	I1030 19:10:36.901849  417097 command_runner.go:130] > # Using the seccomp notifier feature:
	I1030 19:10:36.901855  417097 command_runner.go:130] > #
	I1030 19:10:36.901860  417097 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1030 19:10:36.901868  417097 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1030 19:10:36.901872  417097 command_runner.go:130] > #
	I1030 19:10:36.901877  417097 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1030 19:10:36.901885  417097 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1030 19:10:36.901888  417097 command_runner.go:130] > #
	I1030 19:10:36.901896  417097 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1030 19:10:36.901900  417097 command_runner.go:130] > # feature.
	I1030 19:10:36.901903  417097 command_runner.go:130] > #
	I1030 19:10:36.901909  417097 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1030 19:10:36.901917  417097 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1030 19:10:36.901923  417097 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1030 19:10:36.901931  417097 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1030 19:10:36.901937  417097 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1030 19:10:36.901940  417097 command_runner.go:130] > #
	I1030 19:10:36.901945  417097 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1030 19:10:36.901952  417097 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1030 19:10:36.901956  417097 command_runner.go:130] > #
	I1030 19:10:36.901962  417097 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1030 19:10:36.901969  417097 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1030 19:10:36.901972  417097 command_runner.go:130] > #
	I1030 19:10:36.901978  417097 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1030 19:10:36.901985  417097 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1030 19:10:36.901989  417097 command_runner.go:130] > # limitation.
	I1030 19:10:36.901996  417097 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1030 19:10:36.902001  417097 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1030 19:10:36.902007  417097 command_runner.go:130] > runtime_type = "oci"
	I1030 19:10:36.902011  417097 command_runner.go:130] > runtime_root = "/run/runc"
	I1030 19:10:36.902015  417097 command_runner.go:130] > runtime_config_path = ""
	I1030 19:10:36.902022  417097 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1030 19:10:36.902026  417097 command_runner.go:130] > monitor_cgroup = "pod"
	I1030 19:10:36.902032  417097 command_runner.go:130] > monitor_exec_cgroup = ""
	I1030 19:10:36.902035  417097 command_runner.go:130] > monitor_env = [
	I1030 19:10:36.902042  417097 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1030 19:10:36.902048  417097 command_runner.go:130] > ]
	I1030 19:10:36.902052  417097 command_runner.go:130] > privileged_without_host_devices = false
	I1030 19:10:36.902058  417097 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1030 19:10:36.902064  417097 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1030 19:10:36.902093  417097 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1030 19:10:36.902106  417097 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1030 19:10:36.902117  417097 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1030 19:10:36.902125  417097 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1030 19:10:36.902134  417097 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1030 19:10:36.902144  417097 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1030 19:10:36.902149  417097 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1030 19:10:36.902158  417097 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1030 19:10:36.902162  417097 command_runner.go:130] > # Example:
	I1030 19:10:36.902167  417097 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1030 19:10:36.902172  417097 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1030 19:10:36.902176  417097 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1030 19:10:36.902181  417097 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1030 19:10:36.902185  417097 command_runner.go:130] > # cpuset = 0
	I1030 19:10:36.902188  417097 command_runner.go:130] > # cpushares = "0-1"
	I1030 19:10:36.902191  417097 command_runner.go:130] > # Where:
	I1030 19:10:36.902196  417097 command_runner.go:130] > # The workload name is workload-type.
	I1030 19:10:36.902204  417097 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1030 19:10:36.902209  417097 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1030 19:10:36.902215  417097 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1030 19:10:36.902223  417097 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1030 19:10:36.902228  417097 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1030 19:10:36.902233  417097 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1030 19:10:36.902240  417097 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1030 19:10:36.902244  417097 command_runner.go:130] > # Default value is set to true
	I1030 19:10:36.902248  417097 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1030 19:10:36.902253  417097 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1030 19:10:36.902258  417097 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1030 19:10:36.902262  417097 command_runner.go:130] > # Default value is set to 'false'
	I1030 19:10:36.902266  417097 command_runner.go:130] > # disable_hostport_mapping = false
	I1030 19:10:36.902272  417097 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1030 19:10:36.902275  417097 command_runner.go:130] > #
	I1030 19:10:36.902280  417097 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1030 19:10:36.902285  417097 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1030 19:10:36.902291  417097 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1030 19:10:36.902297  417097 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1030 19:10:36.902302  417097 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1030 19:10:36.902306  417097 command_runner.go:130] > [crio.image]
	I1030 19:10:36.902311  417097 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1030 19:10:36.902316  417097 command_runner.go:130] > # default_transport = "docker://"
	I1030 19:10:36.902321  417097 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1030 19:10:36.902328  417097 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1030 19:10:36.902332  417097 command_runner.go:130] > # global_auth_file = ""
	I1030 19:10:36.902337  417097 command_runner.go:130] > # The image used to instantiate infra containers.
	I1030 19:10:36.902341  417097 command_runner.go:130] > # This option supports live configuration reload.
	I1030 19:10:36.902348  417097 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1030 19:10:36.902354  417097 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1030 19:10:36.902362  417097 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1030 19:10:36.902367  417097 command_runner.go:130] > # This option supports live configuration reload.
	I1030 19:10:36.902370  417097 command_runner.go:130] > # pause_image_auth_file = ""
	I1030 19:10:36.902376  417097 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1030 19:10:36.902384  417097 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1030 19:10:36.902390  417097 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1030 19:10:36.902399  417097 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1030 19:10:36.902403  417097 command_runner.go:130] > # pause_command = "/pause"
	I1030 19:10:36.902409  417097 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1030 19:10:36.902415  417097 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1030 19:10:36.902421  417097 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1030 19:10:36.902427  417097 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1030 19:10:36.902433  417097 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1030 19:10:36.902440  417097 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1030 19:10:36.902444  417097 command_runner.go:130] > # pinned_images = [
	I1030 19:10:36.902447  417097 command_runner.go:130] > # ]
	I1030 19:10:36.902453  417097 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1030 19:10:36.902462  417097 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1030 19:10:36.902470  417097 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1030 19:10:36.902476  417097 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1030 19:10:36.902494  417097 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1030 19:10:36.902502  417097 command_runner.go:130] > # signature_policy = ""
	I1030 19:10:36.902513  417097 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1030 19:10:36.902526  417097 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1030 19:10:36.902534  417097 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1030 19:10:36.902540  417097 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1030 19:10:36.902549  417097 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1030 19:10:36.902554  417097 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1030 19:10:36.902563  417097 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1030 19:10:36.902569  417097 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1030 19:10:36.902575  417097 command_runner.go:130] > # changing them here.
	I1030 19:10:36.902579  417097 command_runner.go:130] > # insecure_registries = [
	I1030 19:10:36.902583  417097 command_runner.go:130] > # ]
	I1030 19:10:36.902588  417097 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1030 19:10:36.902594  417097 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1030 19:10:36.902599  417097 command_runner.go:130] > # image_volumes = "mkdir"
	I1030 19:10:36.902604  417097 command_runner.go:130] > # Temporary directory to use for storing big files
	I1030 19:10:36.902610  417097 command_runner.go:130] > # big_files_temporary_dir = ""
	I1030 19:10:36.902616  417097 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1030 19:10:36.902623  417097 command_runner.go:130] > # CNI plugins.
	I1030 19:10:36.902627  417097 command_runner.go:130] > [crio.network]
	I1030 19:10:36.902637  417097 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1030 19:10:36.902644  417097 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1030 19:10:36.902649  417097 command_runner.go:130] > # cni_default_network = ""
	I1030 19:10:36.902655  417097 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1030 19:10:36.902660  417097 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1030 19:10:36.902665  417097 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1030 19:10:36.902671  417097 command_runner.go:130] > # plugin_dirs = [
	I1030 19:10:36.902675  417097 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1030 19:10:36.902678  417097 command_runner.go:130] > # ]
	I1030 19:10:36.902684  417097 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1030 19:10:36.902690  417097 command_runner.go:130] > [crio.metrics]
	I1030 19:10:36.902695  417097 command_runner.go:130] > # Globally enable or disable metrics support.
	I1030 19:10:36.902701  417097 command_runner.go:130] > enable_metrics = true
	I1030 19:10:36.902705  417097 command_runner.go:130] > # Specify enabled metrics collectors.
	I1030 19:10:36.902710  417097 command_runner.go:130] > # Per default all metrics are enabled.
	I1030 19:10:36.902718  417097 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1030 19:10:36.902724  417097 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1030 19:10:36.902731  417097 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1030 19:10:36.902735  417097 command_runner.go:130] > # metrics_collectors = [
	I1030 19:10:36.902741  417097 command_runner.go:130] > # 	"operations",
	I1030 19:10:36.902747  417097 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1030 19:10:36.902754  417097 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1030 19:10:36.902758  417097 command_runner.go:130] > # 	"operations_errors",
	I1030 19:10:36.902763  417097 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1030 19:10:36.902773  417097 command_runner.go:130] > # 	"image_pulls_by_name",
	I1030 19:10:36.902780  417097 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1030 19:10:36.902790  417097 command_runner.go:130] > # 	"image_pulls_failures",
	I1030 19:10:36.902795  417097 command_runner.go:130] > # 	"image_pulls_successes",
	I1030 19:10:36.902799  417097 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1030 19:10:36.902804  417097 command_runner.go:130] > # 	"image_layer_reuse",
	I1030 19:10:36.902810  417097 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1030 19:10:36.902815  417097 command_runner.go:130] > # 	"containers_oom_total",
	I1030 19:10:36.902821  417097 command_runner.go:130] > # 	"containers_oom",
	I1030 19:10:36.902826  417097 command_runner.go:130] > # 	"processes_defunct",
	I1030 19:10:36.902833  417097 command_runner.go:130] > # 	"operations_total",
	I1030 19:10:36.902841  417097 command_runner.go:130] > # 	"operations_latency_seconds",
	I1030 19:10:36.902852  417097 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1030 19:10:36.902862  417097 command_runner.go:130] > # 	"operations_errors_total",
	I1030 19:10:36.902872  417097 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1030 19:10:36.902883  417097 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1030 19:10:36.902892  417097 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1030 19:10:36.902897  417097 command_runner.go:130] > # 	"image_pulls_success_total",
	I1030 19:10:36.902903  417097 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1030 19:10:36.902907  417097 command_runner.go:130] > # 	"containers_oom_count_total",
	I1030 19:10:36.902916  417097 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1030 19:10:36.902923  417097 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1030 19:10:36.902931  417097 command_runner.go:130] > # ]
	I1030 19:10:36.902941  417097 command_runner.go:130] > # The port on which the metrics server will listen.
	I1030 19:10:36.902950  417097 command_runner.go:130] > # metrics_port = 9090
	I1030 19:10:36.902959  417097 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1030 19:10:36.902969  417097 command_runner.go:130] > # metrics_socket = ""
	I1030 19:10:36.902977  417097 command_runner.go:130] > # The certificate for the secure metrics server.
	I1030 19:10:36.902990  417097 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1030 19:10:36.902999  417097 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1030 19:10:36.903004  417097 command_runner.go:130] > # certificate on any modification event.
	I1030 19:10:36.903010  417097 command_runner.go:130] > # metrics_cert = ""
	I1030 19:10:36.903015  417097 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1030 19:10:36.903022  417097 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1030 19:10:36.903026  417097 command_runner.go:130] > # metrics_key = ""
	I1030 19:10:36.903031  417097 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1030 19:10:36.903040  417097 command_runner.go:130] > [crio.tracing]
	I1030 19:10:36.903049  417097 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1030 19:10:36.903060  417097 command_runner.go:130] > # enable_tracing = false
	I1030 19:10:36.903069  417097 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1030 19:10:36.903080  417097 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1030 19:10:36.903094  417097 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1030 19:10:36.903104  417097 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1030 19:10:36.903114  417097 command_runner.go:130] > # CRI-O NRI configuration.
	I1030 19:10:36.903121  417097 command_runner.go:130] > [crio.nri]
	I1030 19:10:36.903127  417097 command_runner.go:130] > # Globally enable or disable NRI.
	I1030 19:10:36.903132  417097 command_runner.go:130] > # enable_nri = false
	I1030 19:10:36.903137  417097 command_runner.go:130] > # NRI socket to listen on.
	I1030 19:10:36.903143  417097 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1030 19:10:36.903152  417097 command_runner.go:130] > # NRI plugin directory to use.
	I1030 19:10:36.903161  417097 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1030 19:10:36.903172  417097 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1030 19:10:36.903183  417097 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1030 19:10:36.903193  417097 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1030 19:10:36.903202  417097 command_runner.go:130] > # nri_disable_connections = false
	I1030 19:10:36.903213  417097 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1030 19:10:36.903221  417097 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1030 19:10:36.903231  417097 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1030 19:10:36.903241  417097 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1030 19:10:36.903255  417097 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1030 19:10:36.903264  417097 command_runner.go:130] > [crio.stats]
	I1030 19:10:36.903276  417097 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1030 19:10:36.903288  417097 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1030 19:10:36.903297  417097 command_runner.go:130] > # stats_collection_period = 0
	I1030 19:10:36.903349  417097 command_runner.go:130] ! time="2024-10-30 19:10:36.858760615Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1030 19:10:36.903379  417097 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1030 19:10:36.903463  417097 cni.go:84] Creating CNI manager for ""
	I1030 19:10:36.903481  417097 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1030 19:10:36.903498  417097 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1030 19:10:36.903530  417097 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.241 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-743795 NodeName:multinode-743795 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.241"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.241 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1030 19:10:36.903701  417097 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.241
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-743795"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.241"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.241"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1030 19:10:36.903780  417097 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1030 19:10:36.913914  417097 command_runner.go:130] > kubeadm
	I1030 19:10:36.913936  417097 command_runner.go:130] > kubectl
	I1030 19:10:36.913943  417097 command_runner.go:130] > kubelet
	I1030 19:10:36.914042  417097 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 19:10:36.914094  417097 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1030 19:10:36.923576  417097 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1030 19:10:36.940310  417097 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 19:10:36.956331  417097 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2296 bytes)
	I1030 19:10:36.972514  417097 ssh_runner.go:195] Run: grep 192.168.39.241	control-plane.minikube.internal$ /etc/hosts
	I1030 19:10:36.976343  417097 command_runner.go:130] > 192.168.39.241	control-plane.minikube.internal
	I1030 19:10:36.976438  417097 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:10:37.115593  417097 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:10:37.130317  417097 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/multinode-743795 for IP: 192.168.39.241
	I1030 19:10:37.130340  417097 certs.go:194] generating shared ca certs ...
	I1030 19:10:37.130358  417097 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:10:37.130557  417097 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 19:10:37.130619  417097 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 19:10:37.130635  417097 certs.go:256] generating profile certs ...
	I1030 19:10:37.130736  417097 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/multinode-743795/client.key
	I1030 19:10:37.130817  417097 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/multinode-743795/apiserver.key.dc4f52b7
	I1030 19:10:37.130873  417097 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/multinode-743795/proxy-client.key
	I1030 19:10:37.130892  417097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1030 19:10:37.130914  417097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1030 19:10:37.130933  417097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1030 19:10:37.130952  417097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1030 19:10:37.130970  417097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/multinode-743795/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1030 19:10:37.130989  417097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/multinode-743795/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1030 19:10:37.131010  417097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/multinode-743795/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1030 19:10:37.131028  417097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/multinode-743795/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1030 19:10:37.131094  417097 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem (1338 bytes)
	W1030 19:10:37.131136  417097 certs.go:480] ignoring /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144_empty.pem, impossibly tiny 0 bytes
	I1030 19:10:37.131150  417097 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 19:10:37.131196  417097 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 19:10:37.131231  417097 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 19:10:37.131267  417097 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 19:10:37.131328  417097 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:10:37.131371  417097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> /usr/share/ca-certificates/3891442.pem
	I1030 19:10:37.131392  417097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:10:37.131411  417097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem -> /usr/share/ca-certificates/389144.pem
	I1030 19:10:37.132054  417097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 19:10:37.157354  417097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 19:10:37.182104  417097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 19:10:37.206124  417097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 19:10:37.229331  417097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/multinode-743795/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1030 19:10:37.252523  417097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/multinode-743795/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1030 19:10:37.276962  417097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/multinode-743795/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 19:10:37.302386  417097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/multinode-743795/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1030 19:10:37.326001  417097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /usr/share/ca-certificates/3891442.pem (1708 bytes)
	I1030 19:10:37.349312  417097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 19:10:37.372631  417097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem --> /usr/share/ca-certificates/389144.pem (1338 bytes)
	I1030 19:10:37.395470  417097 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1030 19:10:37.411500  417097 ssh_runner.go:195] Run: openssl version
	I1030 19:10:37.417185  417097 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1030 19:10:37.417502  417097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3891442.pem && ln -fs /usr/share/ca-certificates/3891442.pem /etc/ssl/certs/3891442.pem"
	I1030 19:10:37.427822  417097 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3891442.pem
	I1030 19:10:37.432091  417097 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 19:10:37.432200  417097 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 19:10:37.432244  417097 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3891442.pem
	I1030 19:10:37.437624  417097 command_runner.go:130] > 3ec20f2e
	I1030 19:10:37.437793  417097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3891442.pem /etc/ssl/certs/3ec20f2e.0"
	I1030 19:10:37.446512  417097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 19:10:37.457156  417097 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:10:37.461446  417097 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:10:37.461631  417097 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:10:37.461674  417097 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:10:37.467148  417097 command_runner.go:130] > b5213941
	I1030 19:10:37.467208  417097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 19:10:37.476014  417097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/389144.pem && ln -fs /usr/share/ca-certificates/389144.pem /etc/ssl/certs/389144.pem"
	I1030 19:10:37.489181  417097 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/389144.pem
	I1030 19:10:37.493699  417097 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 19:10:37.493910  417097 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 19:10:37.493952  417097 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/389144.pem
	I1030 19:10:37.499353  417097 command_runner.go:130] > 51391683
	I1030 19:10:37.499433  417097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/389144.pem /etc/ssl/certs/51391683.0"
	I1030 19:10:37.508636  417097 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 19:10:37.513153  417097 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 19:10:37.513172  417097 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1030 19:10:37.513178  417097 command_runner.go:130] > Device: 253,1	Inode: 9432622     Links: 1
	I1030 19:10:37.513184  417097 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1030 19:10:37.513191  417097 command_runner.go:130] > Access: 2024-10-30 19:03:38.557833291 +0000
	I1030 19:10:37.513199  417097 command_runner.go:130] > Modify: 2024-10-30 19:03:38.557833291 +0000
	I1030 19:10:37.513204  417097 command_runner.go:130] > Change: 2024-10-30 19:03:38.557833291 +0000
	I1030 19:10:37.513211  417097 command_runner.go:130] >  Birth: 2024-10-30 19:03:38.557833291 +0000
	I1030 19:10:37.513254  417097 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1030 19:10:37.518621  417097 command_runner.go:130] > Certificate will not expire
	I1030 19:10:37.518795  417097 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1030 19:10:37.524058  417097 command_runner.go:130] > Certificate will not expire
	I1030 19:10:37.524222  417097 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1030 19:10:37.529621  417097 command_runner.go:130] > Certificate will not expire
	I1030 19:10:37.529676  417097 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1030 19:10:37.535002  417097 command_runner.go:130] > Certificate will not expire
	I1030 19:10:37.535047  417097 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1030 19:10:37.540275  417097 command_runner.go:130] > Certificate will not expire
	I1030 19:10:37.540337  417097 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1030 19:10:37.545435  417097 command_runner.go:130] > Certificate will not expire
	I1030 19:10:37.545637  417097 kubeadm.go:392] StartCluster: {Name:multinode-743795 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
2 ClusterName:multinode-743795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.115 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress
-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:10:37.545767  417097 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1030 19:10:37.545833  417097 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:10:37.582395  417097 command_runner.go:130] > d7ea51256edb36e89400017a915aa39e52bc2edb76f0d7e2ce71d2a2409dcdf4
	I1030 19:10:37.582429  417097 command_runner.go:130] > c015d554fd62507a43f20864108e4d45332f484a315a410839993edfd140f747
	I1030 19:10:37.582437  417097 command_runner.go:130] > 76044db673948f4d099ea5546b6aecc8d2bc9689f6622bc97ef8d8be31651687
	I1030 19:10:37.582447  417097 command_runner.go:130] > 21cab845b533c8720ff4411c06a91fe69a928684f4f0863a6063c6f41c268291
	I1030 19:10:37.582456  417097 command_runner.go:130] > 264f7e0c37ee81544595fc9dd70dce40503b741f0e5043ad55f1c1a23554f78d
	I1030 19:10:37.582465  417097 command_runner.go:130] > c0d05b91dab5b163ce79a278da80f7a0d70f3e267a1ca686b99bff1f77f7761d
	I1030 19:10:37.582478  417097 command_runner.go:130] > 7958cff51a7a63767038880bf9546bb5bbc44c8c92de409213d2841a70aa64da
	I1030 19:10:37.582506  417097 command_runner.go:130] > 0a43a1830349e1340a3ffc7d129797e44386b56973110596507497fb62727406
	I1030 19:10:37.582535  417097 cri.go:89] found id: "d7ea51256edb36e89400017a915aa39e52bc2edb76f0d7e2ce71d2a2409dcdf4"
	I1030 19:10:37.582546  417097 cri.go:89] found id: "c015d554fd62507a43f20864108e4d45332f484a315a410839993edfd140f747"
	I1030 19:10:37.582554  417097 cri.go:89] found id: "76044db673948f4d099ea5546b6aecc8d2bc9689f6622bc97ef8d8be31651687"
	I1030 19:10:37.582559  417097 cri.go:89] found id: "21cab845b533c8720ff4411c06a91fe69a928684f4f0863a6063c6f41c268291"
	I1030 19:10:37.582565  417097 cri.go:89] found id: "264f7e0c37ee81544595fc9dd70dce40503b741f0e5043ad55f1c1a23554f78d"
	I1030 19:10:37.582575  417097 cri.go:89] found id: "c0d05b91dab5b163ce79a278da80f7a0d70f3e267a1ca686b99bff1f77f7761d"
	I1030 19:10:37.582580  417097 cri.go:89] found id: "7958cff51a7a63767038880bf9546bb5bbc44c8c92de409213d2841a70aa64da"
	I1030 19:10:37.582583  417097 cri.go:89] found id: "0a43a1830349e1340a3ffc7d129797e44386b56973110596507497fb62727406"
	I1030 19:10:37.582585  417097 cri.go:89] found id: ""
	I1030 19:10:37.582627  417097 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
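The tail of the restart log above shows minikube re-validating the existing control-plane certificates before bringing the cluster back up, by running "openssl x509 -noout -in <cert> -checkend 86400" for each certificate. The Go sketch below reproduces that check standalone; it assumes openssl is on PATH and that the certificate paths copied from the log exist on the machine where it runs (it is illustrative, not part of the minikube codebase).

	package main

	import (
		"fmt"
		"os/exec"
	)

	// checkCert shells out to openssl exactly as the log above does: a zero
	// exit status means the certificate is still valid 24h (86400s) from now.
	func checkCert(path string) error {
		out, err := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400").CombinedOutput()
		if err != nil {
			return fmt.Errorf("%s: %v: %s", path, err, out)
		}
		return nil
	}

	func main() {
		// Certificate paths copied from the restart log above; assumed to exist
		// (e.g. when run inside the minikube VM).
		certs := []string{
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
			"/var/lib/minikube/certs/front-proxy-client.crt",
		}
		for _, c := range certs {
			if err := checkCert(c); err != nil {
				fmt.Println("certificate check failed:", err)
				continue
			}
			fmt.Println(c, "will not expire within 24h")
		}
	}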
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-743795 -n multinode-743795
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-743795 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (328.84s)
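Before the post-mortem above was captured, the restart path had just enumerated the existing kube-system containers through the CRI ("crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system") and was inspecting runtime state with "runc list". A minimal sketch of the same enumeration, assuming crictl is installed and talking to the default CRI-O socket (illustrative only, not minikube code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// List all kube-system container IDs (including stopped ones), matching
		// the crictl invocation shown in the restart log above.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		ids := strings.Fields(string(out))
		fmt.Printf("found %d kube-system containers\n", len(ids))
		for _, id := range ids {
			fmt.Println(id)
		}
	}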

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (145.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-743795 stop
E1030 19:13:17.245806  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/functional-683899/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-743795 stop: exit status 82 (2m0.476860763s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-743795-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-743795 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-743795 status
multinode_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p multinode-743795 status: (18.693311145s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-743795 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-linux-amd64 -p multinode-743795 status --alsologtostderr: (3.392048704s)
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-linux-amd64 -p multinode-743795 status --alsologtostderr": 
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-linux-amd64 -p multinode-743795 status --alsologtostderr": 
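The two assertion failures above count how many hosts and kubelets report "Stopped" in the minikube status output after the timed-out stop. A hypothetical sketch of that kind of count follows; it assumes the plain-text status output contains per-node "host:" and "kubelet:" fields as printed by minikube status, and the countStopped helper is illustrative rather than the test's actual implementation (which lives in multinode_test.go).

	package main

	import (
		"bufio"
		"fmt"
		"os/exec"
		"strings"
	)

	// countStopped runs "minikube -p <profile> status" and counts the host: and
	// kubelet: lines that report Stopped. The binary path matches the one used
	// throughout this report; status exits non-zero when nodes are stopped, so
	// its error is deliberately ignored and only the captured output is parsed.
	func countStopped(profile string) (hosts, kubelets int, err error) {
		out, _ := exec.Command("out/minikube-linux-amd64", "-p", profile, "status").CombinedOutput()
		sc := bufio.NewScanner(strings.NewReader(string(out)))
		for sc.Scan() {
			line := strings.ToLower(strings.TrimSpace(sc.Text()))
			switch {
			case strings.HasPrefix(line, "host:") && strings.Contains(line, "stopped"):
				hosts++
			case strings.HasPrefix(line, "kubelet:") && strings.Contains(line, "stopped"):
				kubelets++
			}
		}
		return hosts, kubelets, sc.Err()
	}

	func main() {
		h, k, err := countStopped("multinode-743795")
		if err != nil {
			fmt.Println("scan error:", err)
			return
		}
		fmt.Printf("stopped hosts: %d, stopped kubelets: %d\n", h, k)
	}

With only one node fully stopped before the GUEST_STOP_TIMEOUT (exit status 82), counts like these stay below the expected node total, which is exactly what the assertions report.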
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-743795 -n multinode-743795
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-743795 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-743795 logs -n 25: (2.061438518s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-743795 ssh -n                                                                 | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:06 UTC |
	|         | multinode-743795-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-743795 cp multinode-743795-m02:/home/docker/cp-test.txt                       | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:06 UTC |
	|         | multinode-743795:/home/docker/cp-test_multinode-743795-m02_multinode-743795.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-743795 ssh -n                                                                 | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:06 UTC |
	|         | multinode-743795-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-743795 ssh -n multinode-743795 sudo cat                                       | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:06 UTC |
	|         | /home/docker/cp-test_multinode-743795-m02_multinode-743795.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-743795 cp multinode-743795-m02:/home/docker/cp-test.txt                       | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:06 UTC |
	|         | multinode-743795-m03:/home/docker/cp-test_multinode-743795-m02_multinode-743795-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-743795 ssh -n                                                                 | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:06 UTC |
	|         | multinode-743795-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-743795 ssh -n multinode-743795-m03 sudo cat                                   | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:06 UTC |
	|         | /home/docker/cp-test_multinode-743795-m02_multinode-743795-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-743795 cp testdata/cp-test.txt                                                | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:06 UTC |
	|         | multinode-743795-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-743795 ssh -n                                                                 | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:06 UTC |
	|         | multinode-743795-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-743795 cp multinode-743795-m03:/home/docker/cp-test.txt                       | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:06 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1456195063/001/cp-test_multinode-743795-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-743795 ssh -n                                                                 | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:06 UTC |
	|         | multinode-743795-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-743795 cp multinode-743795-m03:/home/docker/cp-test.txt                       | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:06 UTC |
	|         | multinode-743795:/home/docker/cp-test_multinode-743795-m03_multinode-743795.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-743795 ssh -n                                                                 | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:06 UTC |
	|         | multinode-743795-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-743795 ssh -n multinode-743795 sudo cat                                       | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:06 UTC |
	|         | /home/docker/cp-test_multinode-743795-m03_multinode-743795.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-743795 cp multinode-743795-m03:/home/docker/cp-test.txt                       | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:06 UTC |
	|         | multinode-743795-m02:/home/docker/cp-test_multinode-743795-m03_multinode-743795-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-743795 ssh -n                                                                 | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:06 UTC |
	|         | multinode-743795-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-743795 ssh -n multinode-743795-m02 sudo cat                                   | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:06 UTC |
	|         | /home/docker/cp-test_multinode-743795-m03_multinode-743795-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-743795 node stop m03                                                          | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:06 UTC |
	| node    | multinode-743795 node start                                                             | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:07 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-743795                                                                | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:07 UTC |                     |
	| stop    | -p multinode-743795                                                                     | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:07 UTC |                     |
	| start   | -p multinode-743795                                                                     | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:09 UTC | 30 Oct 24 19:12 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-743795                                                                | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:12 UTC |                     |
	| node    | multinode-743795 node delete                                                            | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:12 UTC | 30 Oct 24 19:12 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-743795 stop                                                                   | multinode-743795 | jenkins | v1.34.0 | 30 Oct 24 19:12 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/30 19:09:03
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1030 19:09:03.346400  417097 out.go:345] Setting OutFile to fd 1 ...
	I1030 19:09:03.346643  417097 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 19:09:03.346652  417097 out.go:358] Setting ErrFile to fd 2...
	I1030 19:09:03.346657  417097 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 19:09:03.346856  417097 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19883-381834/.minikube/bin
	I1030 19:09:03.347388  417097 out.go:352] Setting JSON to false
	I1030 19:09:03.348384  417097 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":10286,"bootTime":1730305057,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1030 19:09:03.348492  417097 start.go:139] virtualization: kvm guest
	I1030 19:09:03.351827  417097 out.go:177] * [multinode-743795] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1030 19:09:03.353718  417097 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 19:09:03.353738  417097 notify.go:220] Checking for updates...
	I1030 19:09:03.357035  417097 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 19:09:03.358683  417097 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 19:09:03.360103  417097 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 19:09:03.361495  417097 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1030 19:09:03.362932  417097 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 19:09:03.364573  417097 config.go:182] Loaded profile config "multinode-743795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:09:03.364694  417097 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 19:09:03.365214  417097 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 19:09:03.365276  417097 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:09:03.381510  417097 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41259
	I1030 19:09:03.382091  417097 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:09:03.382805  417097 main.go:141] libmachine: Using API Version  1
	I1030 19:09:03.382831  417097 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:09:03.383221  417097 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:09:03.383457  417097 main.go:141] libmachine: (multinode-743795) Calling .DriverName
	I1030 19:09:03.419252  417097 out.go:177] * Using the kvm2 driver based on existing profile
	I1030 19:09:03.420656  417097 start.go:297] selected driver: kvm2
	I1030 19:09:03.420672  417097 start.go:901] validating driver "kvm2" against &{Name:multinode-743795 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.2 ClusterName:multinode-743795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.115 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress
:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:dock
er BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:09:03.420863  417097 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 19:09:03.421296  417097 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 19:09:03.421386  417097 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19883-381834/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1030 19:09:03.436624  417097 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1030 19:09:03.437321  417097 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 19:09:03.437356  417097 cni.go:84] Creating CNI manager for ""
	I1030 19:09:03.437424  417097 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1030 19:09:03.437505  417097 start.go:340] cluster config:
	{Name:multinode-743795 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-743795 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.115 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisio
ner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:09:03.437649  417097 iso.go:125] acquiring lock: {Name:mk4a5115b605a7a5e9069193daa664d721189792 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 19:09:03.439589  417097 out.go:177] * Starting "multinode-743795" primary control-plane node in "multinode-743795" cluster
	I1030 19:09:03.440977  417097 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 19:09:03.441046  417097 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1030 19:09:03.441066  417097 cache.go:56] Caching tarball of preloaded images
	I1030 19:09:03.441153  417097 preload.go:172] Found /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1030 19:09:03.441191  417097 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1030 19:09:03.441330  417097 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/multinode-743795/config.json ...
	I1030 19:09:03.441589  417097 start.go:360] acquireMachinesLock for multinode-743795: {Name:mk35e25f53fa8cfadb39ca0ecdccfc2b3fbe845b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 19:09:03.441643  417097 start.go:364] duration metric: took 27.33µs to acquireMachinesLock for "multinode-743795"
	I1030 19:09:03.441657  417097 start.go:96] Skipping create...Using existing machine configuration
	I1030 19:09:03.441670  417097 fix.go:54] fixHost starting: 
	I1030 19:09:03.441918  417097 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 19:09:03.441956  417097 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:09:03.456427  417097 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37897
	I1030 19:09:03.456802  417097 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:09:03.457323  417097 main.go:141] libmachine: Using API Version  1
	I1030 19:09:03.457342  417097 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:09:03.457664  417097 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:09:03.457897  417097 main.go:141] libmachine: (multinode-743795) Calling .DriverName
	I1030 19:09:03.458087  417097 main.go:141] libmachine: (multinode-743795) Calling .GetState
	I1030 19:09:03.459585  417097 fix.go:112] recreateIfNeeded on multinode-743795: state=Running err=<nil>
	W1030 19:09:03.459614  417097 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 19:09:03.461462  417097 out.go:177] * Updating the running kvm2 "multinode-743795" VM ...
	I1030 19:09:03.462836  417097 machine.go:93] provisionDockerMachine start ...
	I1030 19:09:03.462854  417097 main.go:141] libmachine: (multinode-743795) Calling .DriverName
	I1030 19:09:03.463073  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHHostname
	I1030 19:09:03.465540  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:09:03.465944  417097 main.go:141] libmachine: (multinode-743795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:70:0e", ip: ""} in network mk-multinode-743795: {Iface:virbr1 ExpiryTime:2024-10-30 20:03:25 +0000 UTC Type:0 Mac:52:54:00:97:70:0e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-743795 Clientid:01:52:54:00:97:70:0e}
	I1030 19:09:03.465976  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined IP address 192.168.39.241 and MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:09:03.466108  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHPort
	I1030 19:09:03.466292  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHKeyPath
	I1030 19:09:03.466423  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHKeyPath
	I1030 19:09:03.466570  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHUsername
	I1030 19:09:03.466747  417097 main.go:141] libmachine: Using SSH client type: native
	I1030 19:09:03.467010  417097 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I1030 19:09:03.467028  417097 main.go:141] libmachine: About to run SSH command:
	hostname
	I1030 19:09:03.588207  417097 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-743795
	
	I1030 19:09:03.588245  417097 main.go:141] libmachine: (multinode-743795) Calling .GetMachineName
	I1030 19:09:03.588546  417097 buildroot.go:166] provisioning hostname "multinode-743795"
	I1030 19:09:03.588582  417097 main.go:141] libmachine: (multinode-743795) Calling .GetMachineName
	I1030 19:09:03.588762  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHHostname
	I1030 19:09:03.591619  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:09:03.591993  417097 main.go:141] libmachine: (multinode-743795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:70:0e", ip: ""} in network mk-multinode-743795: {Iface:virbr1 ExpiryTime:2024-10-30 20:03:25 +0000 UTC Type:0 Mac:52:54:00:97:70:0e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-743795 Clientid:01:52:54:00:97:70:0e}
	I1030 19:09:03.592030  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined IP address 192.168.39.241 and MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:09:03.592153  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHPort
	I1030 19:09:03.592324  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHKeyPath
	I1030 19:09:03.592671  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHKeyPath
	I1030 19:09:03.592823  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHUsername
	I1030 19:09:03.592967  417097 main.go:141] libmachine: Using SSH client type: native
	I1030 19:09:03.593156  417097 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I1030 19:09:03.593168  417097 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-743795 && echo "multinode-743795" | sudo tee /etc/hostname
	I1030 19:09:03.721845  417097 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-743795
	
	I1030 19:09:03.721881  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHHostname
	I1030 19:09:03.724764  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:09:03.725119  417097 main.go:141] libmachine: (multinode-743795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:70:0e", ip: ""} in network mk-multinode-743795: {Iface:virbr1 ExpiryTime:2024-10-30 20:03:25 +0000 UTC Type:0 Mac:52:54:00:97:70:0e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-743795 Clientid:01:52:54:00:97:70:0e}
	I1030 19:09:03.725137  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined IP address 192.168.39.241 and MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:09:03.725341  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHPort
	I1030 19:09:03.725509  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHKeyPath
	I1030 19:09:03.725662  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHKeyPath
	I1030 19:09:03.725829  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHUsername
	I1030 19:09:03.726094  417097 main.go:141] libmachine: Using SSH client type: native
	I1030 19:09:03.726316  417097 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I1030 19:09:03.726347  417097 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-743795' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-743795/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-743795' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 19:09:03.843414  417097 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 19:09:03.843448  417097 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 19:09:03.843481  417097 buildroot.go:174] setting up certificates
	I1030 19:09:03.843493  417097 provision.go:84] configureAuth start
	I1030 19:09:03.843505  417097 main.go:141] libmachine: (multinode-743795) Calling .GetMachineName
	I1030 19:09:03.843811  417097 main.go:141] libmachine: (multinode-743795) Calling .GetIP
	I1030 19:09:03.846465  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:09:03.846928  417097 main.go:141] libmachine: (multinode-743795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:70:0e", ip: ""} in network mk-multinode-743795: {Iface:virbr1 ExpiryTime:2024-10-30 20:03:25 +0000 UTC Type:0 Mac:52:54:00:97:70:0e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-743795 Clientid:01:52:54:00:97:70:0e}
	I1030 19:09:03.846955  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined IP address 192.168.39.241 and MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:09:03.847100  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHHostname
	I1030 19:09:03.849287  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:09:03.849621  417097 main.go:141] libmachine: (multinode-743795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:70:0e", ip: ""} in network mk-multinode-743795: {Iface:virbr1 ExpiryTime:2024-10-30 20:03:25 +0000 UTC Type:0 Mac:52:54:00:97:70:0e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-743795 Clientid:01:52:54:00:97:70:0e}
	I1030 19:09:03.849637  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined IP address 192.168.39.241 and MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:09:03.849771  417097 provision.go:143] copyHostCerts
	I1030 19:09:03.849800  417097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 19:09:03.849843  417097 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem, removing ...
	I1030 19:09:03.849858  417097 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 19:09:03.849924  417097 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 19:09:03.850021  417097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 19:09:03.850040  417097 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem, removing ...
	I1030 19:09:03.850047  417097 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 19:09:03.850072  417097 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 19:09:03.850130  417097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 19:09:03.850146  417097 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem, removing ...
	I1030 19:09:03.850153  417097 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 19:09:03.850173  417097 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 19:09:03.850233  417097 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.multinode-743795 san=[127.0.0.1 192.168.39.241 localhost minikube multinode-743795]
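(The server certificate generated in the step above carries the SANs listed there: 127.0.0.1, 192.168.39.241, localhost, minikube, multinode-743795. Not part of the test run, but a quick way to confirm which SANs actually ended up in the generated server.pem would be an OpenSSL 1.1.1+ inspection on the host, e.g.:

    openssl x509 -in /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem -noout -ext subjectAltName
)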
	I1030 19:09:04.095389  417097 provision.go:177] copyRemoteCerts
	I1030 19:09:04.095453  417097 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 19:09:04.095480  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHHostname
	I1030 19:09:04.098235  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:09:04.098721  417097 main.go:141] libmachine: (multinode-743795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:70:0e", ip: ""} in network mk-multinode-743795: {Iface:virbr1 ExpiryTime:2024-10-30 20:03:25 +0000 UTC Type:0 Mac:52:54:00:97:70:0e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-743795 Clientid:01:52:54:00:97:70:0e}
	I1030 19:09:04.098754  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined IP address 192.168.39.241 and MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:09:04.098937  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHPort
	I1030 19:09:04.099190  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHKeyPath
	I1030 19:09:04.099339  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHUsername
	I1030 19:09:04.099477  417097 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/multinode-743795/id_rsa Username:docker}
	I1030 19:09:04.190390  417097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1030 19:09:04.190460  417097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1030 19:09:04.215582  417097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1030 19:09:04.215649  417097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 19:09:04.239505  417097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1030 19:09:04.239579  417097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1030 19:09:04.264899  417097 provision.go:87] duration metric: took 421.391175ms to configureAuth
	I1030 19:09:04.264931  417097 buildroot.go:189] setting minikube options for container-runtime
	I1030 19:09:04.265222  417097 config.go:182] Loaded profile config "multinode-743795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:09:04.265314  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHHostname
	I1030 19:09:04.268310  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:09:04.268688  417097 main.go:141] libmachine: (multinode-743795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:70:0e", ip: ""} in network mk-multinode-743795: {Iface:virbr1 ExpiryTime:2024-10-30 20:03:25 +0000 UTC Type:0 Mac:52:54:00:97:70:0e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-743795 Clientid:01:52:54:00:97:70:0e}
	I1030 19:09:04.268711  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined IP address 192.168.39.241 and MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:09:04.268844  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHPort
	I1030 19:09:04.269124  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHKeyPath
	I1030 19:09:04.269269  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHKeyPath
	I1030 19:09:04.269421  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHUsername
	I1030 19:09:04.269564  417097 main.go:141] libmachine: Using SSH client type: native
	I1030 19:09:04.269737  417097 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I1030 19:09:04.269750  417097 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 19:10:34.945481  417097 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 19:10:34.945513  417097 machine.go:96] duration metric: took 1m31.482664425s to provisionDockerMachine
	I1030 19:10:34.945534  417097 start.go:293] postStartSetup for "multinode-743795" (driver="kvm2")
	I1030 19:10:34.945558  417097 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 19:10:34.945585  417097 main.go:141] libmachine: (multinode-743795) Calling .DriverName
	I1030 19:10:34.945878  417097 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 19:10:34.945914  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHHostname
	I1030 19:10:34.949014  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:10:34.949476  417097 main.go:141] libmachine: (multinode-743795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:70:0e", ip: ""} in network mk-multinode-743795: {Iface:virbr1 ExpiryTime:2024-10-30 20:03:25 +0000 UTC Type:0 Mac:52:54:00:97:70:0e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-743795 Clientid:01:52:54:00:97:70:0e}
	I1030 19:10:34.949526  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined IP address 192.168.39.241 and MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:10:34.949649  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHPort
	I1030 19:10:34.949867  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHKeyPath
	I1030 19:10:34.950034  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHUsername
	I1030 19:10:34.950203  417097 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/multinode-743795/id_rsa Username:docker}
	I1030 19:10:35.038497  417097 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 19:10:35.042561  417097 command_runner.go:130] > NAME=Buildroot
	I1030 19:10:35.042577  417097 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1030 19:10:35.042582  417097 command_runner.go:130] > ID=buildroot
	I1030 19:10:35.042587  417097 command_runner.go:130] > VERSION_ID=2023.02.9
	I1030 19:10:35.042602  417097 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1030 19:10:35.042885  417097 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 19:10:35.042907  417097 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 19:10:35.042965  417097 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 19:10:35.043060  417097 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 19:10:35.043070  417097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> /etc/ssl/certs/3891442.pem
	I1030 19:10:35.043164  417097 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 19:10:35.052923  417097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:10:35.077304  417097 start.go:296] duration metric: took 131.745248ms for postStartSetup
	I1030 19:10:35.077369  417097 fix.go:56] duration metric: took 1m31.635698193s for fixHost
	I1030 19:10:35.077399  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHHostname
	I1030 19:10:35.080248  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:10:35.080717  417097 main.go:141] libmachine: (multinode-743795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:70:0e", ip: ""} in network mk-multinode-743795: {Iface:virbr1 ExpiryTime:2024-10-30 20:03:25 +0000 UTC Type:0 Mac:52:54:00:97:70:0e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-743795 Clientid:01:52:54:00:97:70:0e}
	I1030 19:10:35.080749  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined IP address 192.168.39.241 and MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:10:35.080936  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHPort
	I1030 19:10:35.081224  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHKeyPath
	I1030 19:10:35.081395  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHKeyPath
	I1030 19:10:35.081514  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHUsername
	I1030 19:10:35.081668  417097 main.go:141] libmachine: Using SSH client type: native
	I1030 19:10:35.081853  417097 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I1030 19:10:35.081866  417097 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 19:10:35.195307  417097 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730315435.164013525
	
	I1030 19:10:35.195343  417097 fix.go:216] guest clock: 1730315435.164013525
	I1030 19:10:35.195355  417097 fix.go:229] Guest: 2024-10-30 19:10:35.164013525 +0000 UTC Remote: 2024-10-30 19:10:35.077375603 +0000 UTC m=+91.772522355 (delta=86.637922ms)
	I1030 19:10:35.195387  417097 fix.go:200] guest clock delta is within tolerance: 86.637922ms
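(The reported delta is simply guest minus host: 1730315435.164013525 s − 1730315435.077375603 s = 0.086637922 s ≈ 86.64 ms, hence the "within tolerance" result above.)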
	I1030 19:10:35.195396  417097 start.go:83] releasing machines lock for "multinode-743795", held for 1m31.75374356s
	I1030 19:10:35.195426  417097 main.go:141] libmachine: (multinode-743795) Calling .DriverName
	I1030 19:10:35.195710  417097 main.go:141] libmachine: (multinode-743795) Calling .GetIP
	I1030 19:10:35.198527  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:10:35.198976  417097 main.go:141] libmachine: (multinode-743795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:70:0e", ip: ""} in network mk-multinode-743795: {Iface:virbr1 ExpiryTime:2024-10-30 20:03:25 +0000 UTC Type:0 Mac:52:54:00:97:70:0e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-743795 Clientid:01:52:54:00:97:70:0e}
	I1030 19:10:35.199008  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined IP address 192.168.39.241 and MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:10:35.199081  417097 main.go:141] libmachine: (multinode-743795) Calling .DriverName
	I1030 19:10:35.199704  417097 main.go:141] libmachine: (multinode-743795) Calling .DriverName
	I1030 19:10:35.199893  417097 main.go:141] libmachine: (multinode-743795) Calling .DriverName
	I1030 19:10:35.199991  417097 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 19:10:35.200047  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHHostname
	I1030 19:10:35.200092  417097 ssh_runner.go:195] Run: cat /version.json
	I1030 19:10:35.200120  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHHostname
	I1030 19:10:35.202729  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:10:35.202872  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:10:35.203137  417097 main.go:141] libmachine: (multinode-743795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:70:0e", ip: ""} in network mk-multinode-743795: {Iface:virbr1 ExpiryTime:2024-10-30 20:03:25 +0000 UTC Type:0 Mac:52:54:00:97:70:0e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-743795 Clientid:01:52:54:00:97:70:0e}
	I1030 19:10:35.203165  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined IP address 192.168.39.241 and MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:10:35.203274  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHPort
	I1030 19:10:35.203398  417097 main.go:141] libmachine: (multinode-743795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:70:0e", ip: ""} in network mk-multinode-743795: {Iface:virbr1 ExpiryTime:2024-10-30 20:03:25 +0000 UTC Type:0 Mac:52:54:00:97:70:0e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-743795 Clientid:01:52:54:00:97:70:0e}
	I1030 19:10:35.203418  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHKeyPath
	I1030 19:10:35.203418  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined IP address 192.168.39.241 and MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:10:35.203559  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHUsername
	I1030 19:10:35.203637  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHPort
	I1030 19:10:35.203830  417097 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/multinode-743795/id_rsa Username:docker}
	I1030 19:10:35.203846  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHKeyPath
	I1030 19:10:35.204002  417097 main.go:141] libmachine: (multinode-743795) Calling .GetSSHUsername
	I1030 19:10:35.204147  417097 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/multinode-743795/id_rsa Username:docker}
	I1030 19:10:35.306131  417097 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1030 19:10:35.306227  417097 command_runner.go:130] > {"iso_version": "v1.34.0-1730282777-19883", "kicbase_version": "v0.0.45-1730110049-19872", "minikube_version": "v1.34.0", "commit": "7738213fbe7cb3f4867f3e3b534798700ea0e3fb"}
	I1030 19:10:35.306376  417097 ssh_runner.go:195] Run: systemctl --version
	I1030 19:10:35.312360  417097 command_runner.go:130] > systemd 252 (252)
	I1030 19:10:35.312420  417097 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1030 19:10:35.312532  417097 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 19:10:35.481134  417097 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1030 19:10:35.489226  417097 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1030 19:10:35.489576  417097 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 19:10:35.489644  417097 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 19:10:35.501005  417097 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1030 19:10:35.501036  417097 start.go:495] detecting cgroup driver to use...
	I1030 19:10:35.501137  417097 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 19:10:35.517939  417097 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 19:10:35.531469  417097 docker.go:217] disabling cri-docker service (if available) ...
	I1030 19:10:35.531539  417097 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 19:10:35.544690  417097 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 19:10:35.558315  417097 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 19:10:35.714305  417097 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 19:10:35.870178  417097 docker.go:233] disabling docker service ...
	I1030 19:10:35.870267  417097 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 19:10:35.891238  417097 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 19:10:35.905412  417097 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 19:10:36.049787  417097 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 19:10:36.195346  417097 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 19:10:36.208762  417097 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 19:10:36.227419  417097 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
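(With /etc/crictl.yaml written as above, the later crictl invocations in this log, e.g. "sudo crictl images --output json" further down, resolve the CRI-O socket from that file rather than needing an explicit --runtime-endpoint flag. As created here the file contains the single line

    runtime-endpoint: unix:///var/run/crio/crio.sock

and crictl falls back to its documented defaults for the remaining settings such as image-endpoint, timeout and debug.)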
	I1030 19:10:36.227460  417097 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1030 19:10:36.227515  417097 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:10:36.238017  417097 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 19:10:36.238085  417097 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:10:36.248499  417097 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:10:36.258812  417097 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:10:36.268854  417097 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 19:10:36.279085  417097 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:10:36.289679  417097 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:10:36.300625  417097 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:10:36.311289  417097 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 19:10:36.321350  417097 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1030 19:10:36.321515  417097 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 19:10:36.330963  417097 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:10:36.467659  417097 ssh_runner.go:195] Run: sudo systemctl restart crio
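(The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place. The file's exact contents are not captured in the log, but as a rough sketch of their net effect, with section names following CRI-O's TOML layout and everything not shown in the log assumed:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

i.e. the pause image and cgroupfs driver requested for this profile, with unprivileged low ports re-enabled, after which crio is restarted to pick the changes up.)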
	I1030 19:10:36.662324  417097 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 19:10:36.662410  417097 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 19:10:36.667598  417097 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1030 19:10:36.667627  417097 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1030 19:10:36.667637  417097 command_runner.go:130] > Device: 0,22	Inode: 1265        Links: 1
	I1030 19:10:36.667648  417097 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1030 19:10:36.667656  417097 command_runner.go:130] > Access: 2024-10-30 19:10:36.529385148 +0000
	I1030 19:10:36.667666  417097 command_runner.go:130] > Modify: 2024-10-30 19:10:36.529385148 +0000
	I1030 19:10:36.667674  417097 command_runner.go:130] > Change: 2024-10-30 19:10:36.529385148 +0000
	I1030 19:10:36.667681  417097 command_runner.go:130] >  Birth: -
	I1030 19:10:36.668009  417097 start.go:563] Will wait 60s for crictl version
	I1030 19:10:36.668074  417097 ssh_runner.go:195] Run: which crictl
	I1030 19:10:36.671769  417097 command_runner.go:130] > /usr/bin/crictl
	I1030 19:10:36.671981  417097 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 19:10:36.709417  417097 command_runner.go:130] > Version:  0.1.0
	I1030 19:10:36.709446  417097 command_runner.go:130] > RuntimeName:  cri-o
	I1030 19:10:36.709453  417097 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1030 19:10:36.709460  417097 command_runner.go:130] > RuntimeApiVersion:  v1
	I1030 19:10:36.710680  417097 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1030 19:10:36.710763  417097 ssh_runner.go:195] Run: crio --version
	I1030 19:10:36.737232  417097 command_runner.go:130] > crio version 1.29.1
	I1030 19:10:36.737253  417097 command_runner.go:130] > Version:        1.29.1
	I1030 19:10:36.737260  417097 command_runner.go:130] > GitCommit:      unknown
	I1030 19:10:36.737264  417097 command_runner.go:130] > GitCommitDate:  unknown
	I1030 19:10:36.737268  417097 command_runner.go:130] > GitTreeState:   clean
	I1030 19:10:36.737276  417097 command_runner.go:130] > BuildDate:      2024-10-30T14:24:06Z
	I1030 19:10:36.737282  417097 command_runner.go:130] > GoVersion:      go1.21.6
	I1030 19:10:36.737289  417097 command_runner.go:130] > Compiler:       gc
	I1030 19:10:36.737297  417097 command_runner.go:130] > Platform:       linux/amd64
	I1030 19:10:36.737303  417097 command_runner.go:130] > Linkmode:       dynamic
	I1030 19:10:36.737314  417097 command_runner.go:130] > BuildTags:      
	I1030 19:10:36.737320  417097 command_runner.go:130] >   containers_image_ostree_stub
	I1030 19:10:36.737326  417097 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1030 19:10:36.737330  417097 command_runner.go:130] >   btrfs_noversion
	I1030 19:10:36.737337  417097 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1030 19:10:36.737342  417097 command_runner.go:130] >   libdm_no_deferred_remove
	I1030 19:10:36.737347  417097 command_runner.go:130] >   seccomp
	I1030 19:10:36.737352  417097 command_runner.go:130] > LDFlags:          unknown
	I1030 19:10:36.737359  417097 command_runner.go:130] > SeccompEnabled:   true
	I1030 19:10:36.737363  417097 command_runner.go:130] > AppArmorEnabled:  false
	I1030 19:10:36.738360  417097 ssh_runner.go:195] Run: crio --version
	I1030 19:10:36.766906  417097 command_runner.go:130] > crio version 1.29.1
	I1030 19:10:36.766925  417097 command_runner.go:130] > Version:        1.29.1
	I1030 19:10:36.766930  417097 command_runner.go:130] > GitCommit:      unknown
	I1030 19:10:36.766934  417097 command_runner.go:130] > GitCommitDate:  unknown
	I1030 19:10:36.766938  417097 command_runner.go:130] > GitTreeState:   clean
	I1030 19:10:36.766944  417097 command_runner.go:130] > BuildDate:      2024-10-30T14:24:06Z
	I1030 19:10:36.766948  417097 command_runner.go:130] > GoVersion:      go1.21.6
	I1030 19:10:36.766952  417097 command_runner.go:130] > Compiler:       gc
	I1030 19:10:36.766957  417097 command_runner.go:130] > Platform:       linux/amd64
	I1030 19:10:36.766961  417097 command_runner.go:130] > Linkmode:       dynamic
	I1030 19:10:36.767001  417097 command_runner.go:130] > BuildTags:      
	I1030 19:10:36.767019  417097 command_runner.go:130] >   containers_image_ostree_stub
	I1030 19:10:36.767023  417097 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1030 19:10:36.767032  417097 command_runner.go:130] >   btrfs_noversion
	I1030 19:10:36.767045  417097 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1030 19:10:36.767049  417097 command_runner.go:130] >   libdm_no_deferred_remove
	I1030 19:10:36.767054  417097 command_runner.go:130] >   seccomp
	I1030 19:10:36.767058  417097 command_runner.go:130] > LDFlags:          unknown
	I1030 19:10:36.767063  417097 command_runner.go:130] > SeccompEnabled:   true
	I1030 19:10:36.767067  417097 command_runner.go:130] > AppArmorEnabled:  false
	I1030 19:10:36.770246  417097 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1030 19:10:36.771781  417097 main.go:141] libmachine: (multinode-743795) Calling .GetIP
	I1030 19:10:36.774310  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:10:36.774678  417097 main.go:141] libmachine: (multinode-743795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:70:0e", ip: ""} in network mk-multinode-743795: {Iface:virbr1 ExpiryTime:2024-10-30 20:03:25 +0000 UTC Type:0 Mac:52:54:00:97:70:0e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-743795 Clientid:01:52:54:00:97:70:0e}
	I1030 19:10:36.774707  417097 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined IP address 192.168.39.241 and MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:10:36.774896  417097 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1030 19:10:36.779096  417097 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1030 19:10:36.779217  417097 kubeadm.go:883] updating cluster {Name:multinode-743795 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
31.2 ClusterName:multinode-743795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.115 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingr
ess-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMi
rror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1030 19:10:36.779383  417097 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 19:10:36.779426  417097 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:10:36.820444  417097 command_runner.go:130] > {
	I1030 19:10:36.820477  417097 command_runner.go:130] >   "images": [
	I1030 19:10:36.820484  417097 command_runner.go:130] >     {
	I1030 19:10:36.820496  417097 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1030 19:10:36.820504  417097 command_runner.go:130] >       "repoTags": [
	I1030 19:10:36.820513  417097 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1030 19:10:36.820519  417097 command_runner.go:130] >       ],
	I1030 19:10:36.820525  417097 command_runner.go:130] >       "repoDigests": [
	I1030 19:10:36.820538  417097 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1030 19:10:36.820553  417097 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1030 19:10:36.820558  417097 command_runner.go:130] >       ],
	I1030 19:10:36.820567  417097 command_runner.go:130] >       "size": "94965812",
	I1030 19:10:36.820573  417097 command_runner.go:130] >       "uid": null,
	I1030 19:10:36.820583  417097 command_runner.go:130] >       "username": "",
	I1030 19:10:36.820593  417097 command_runner.go:130] >       "spec": null,
	I1030 19:10:36.820600  417097 command_runner.go:130] >       "pinned": false
	I1030 19:10:36.820609  417097 command_runner.go:130] >     },
	I1030 19:10:36.820614  417097 command_runner.go:130] >     {
	I1030 19:10:36.820624  417097 command_runner.go:130] >       "id": "9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5",
	I1030 19:10:36.820631  417097 command_runner.go:130] >       "repoTags": [
	I1030 19:10:36.820639  417097 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241023-a345ebe4"
	I1030 19:10:36.820648  417097 command_runner.go:130] >       ],
	I1030 19:10:36.820655  417097 command_runner.go:130] >       "repoDigests": [
	I1030 19:10:36.820670  417097 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16",
	I1030 19:10:36.820683  417097 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e39a44bd13d0b4532d0436a1c2fafdd1a8c57fb327770004098162f0bb96132d"
	I1030 19:10:36.820692  417097 command_runner.go:130] >       ],
	I1030 19:10:36.820699  417097 command_runner.go:130] >       "size": "94958644",
	I1030 19:10:36.820708  417097 command_runner.go:130] >       "uid": null,
	I1030 19:10:36.820729  417097 command_runner.go:130] >       "username": "",
	I1030 19:10:36.820740  417097 command_runner.go:130] >       "spec": null,
	I1030 19:10:36.820747  417097 command_runner.go:130] >       "pinned": false
	I1030 19:10:36.820760  417097 command_runner.go:130] >     },
	I1030 19:10:36.820768  417097 command_runner.go:130] >     {
	I1030 19:10:36.820781  417097 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1030 19:10:36.820790  417097 command_runner.go:130] >       "repoTags": [
	I1030 19:10:36.820802  417097 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1030 19:10:36.820811  417097 command_runner.go:130] >       ],
	I1030 19:10:36.820821  417097 command_runner.go:130] >       "repoDigests": [
	I1030 19:10:36.820835  417097 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1030 19:10:36.820849  417097 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1030 19:10:36.820858  417097 command_runner.go:130] >       ],
	I1030 19:10:36.820867  417097 command_runner.go:130] >       "size": "1363676",
	I1030 19:10:36.820876  417097 command_runner.go:130] >       "uid": null,
	I1030 19:10:36.820885  417097 command_runner.go:130] >       "username": "",
	I1030 19:10:36.820894  417097 command_runner.go:130] >       "spec": null,
	I1030 19:10:36.820903  417097 command_runner.go:130] >       "pinned": false
	I1030 19:10:36.820913  417097 command_runner.go:130] >     },
	I1030 19:10:36.820920  417097 command_runner.go:130] >     {
	I1030 19:10:36.820933  417097 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1030 19:10:36.820942  417097 command_runner.go:130] >       "repoTags": [
	I1030 19:10:36.820953  417097 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1030 19:10:36.820961  417097 command_runner.go:130] >       ],
	I1030 19:10:36.820967  417097 command_runner.go:130] >       "repoDigests": [
	I1030 19:10:36.820977  417097 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1030 19:10:36.820994  417097 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1030 19:10:36.821002  417097 command_runner.go:130] >       ],
	I1030 19:10:36.821008  417097 command_runner.go:130] >       "size": "31470524",
	I1030 19:10:36.821013  417097 command_runner.go:130] >       "uid": null,
	I1030 19:10:36.821019  417097 command_runner.go:130] >       "username": "",
	I1030 19:10:36.821023  417097 command_runner.go:130] >       "spec": null,
	I1030 19:10:36.821030  417097 command_runner.go:130] >       "pinned": false
	I1030 19:10:36.821033  417097 command_runner.go:130] >     },
	I1030 19:10:36.821037  417097 command_runner.go:130] >     {
	I1030 19:10:36.821042  417097 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1030 19:10:36.821053  417097 command_runner.go:130] >       "repoTags": [
	I1030 19:10:36.821060  417097 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1030 19:10:36.821064  417097 command_runner.go:130] >       ],
	I1030 19:10:36.821068  417097 command_runner.go:130] >       "repoDigests": [
	I1030 19:10:36.821077  417097 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1030 19:10:36.821086  417097 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1030 19:10:36.821092  417097 command_runner.go:130] >       ],
	I1030 19:10:36.821096  417097 command_runner.go:130] >       "size": "63273227",
	I1030 19:10:36.821100  417097 command_runner.go:130] >       "uid": null,
	I1030 19:10:36.821106  417097 command_runner.go:130] >       "username": "nonroot",
	I1030 19:10:36.821111  417097 command_runner.go:130] >       "spec": null,
	I1030 19:10:36.821117  417097 command_runner.go:130] >       "pinned": false
	I1030 19:10:36.821120  417097 command_runner.go:130] >     },
	I1030 19:10:36.821126  417097 command_runner.go:130] >     {
	I1030 19:10:36.821132  417097 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1030 19:10:36.821138  417097 command_runner.go:130] >       "repoTags": [
	I1030 19:10:36.821142  417097 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1030 19:10:36.821148  417097 command_runner.go:130] >       ],
	I1030 19:10:36.821153  417097 command_runner.go:130] >       "repoDigests": [
	I1030 19:10:36.821162  417097 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1030 19:10:36.821170  417097 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1030 19:10:36.821176  417097 command_runner.go:130] >       ],
	I1030 19:10:36.821182  417097 command_runner.go:130] >       "size": "149009664",
	I1030 19:10:36.821188  417097 command_runner.go:130] >       "uid": {
	I1030 19:10:36.821192  417097 command_runner.go:130] >         "value": "0"
	I1030 19:10:36.821198  417097 command_runner.go:130] >       },
	I1030 19:10:36.821202  417097 command_runner.go:130] >       "username": "",
	I1030 19:10:36.821208  417097 command_runner.go:130] >       "spec": null,
	I1030 19:10:36.821212  417097 command_runner.go:130] >       "pinned": false
	I1030 19:10:36.821218  417097 command_runner.go:130] >     },
	I1030 19:10:36.821221  417097 command_runner.go:130] >     {
	I1030 19:10:36.821228  417097 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1030 19:10:36.821234  417097 command_runner.go:130] >       "repoTags": [
	I1030 19:10:36.821244  417097 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1030 19:10:36.821271  417097 command_runner.go:130] >       ],
	I1030 19:10:36.821285  417097 command_runner.go:130] >       "repoDigests": [
	I1030 19:10:36.821292  417097 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1030 19:10:36.821299  417097 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1030 19:10:36.821308  417097 command_runner.go:130] >       ],
	I1030 19:10:36.821315  417097 command_runner.go:130] >       "size": "95274464",
	I1030 19:10:36.821319  417097 command_runner.go:130] >       "uid": {
	I1030 19:10:36.821325  417097 command_runner.go:130] >         "value": "0"
	I1030 19:10:36.821328  417097 command_runner.go:130] >       },
	I1030 19:10:36.821334  417097 command_runner.go:130] >       "username": "",
	I1030 19:10:36.821339  417097 command_runner.go:130] >       "spec": null,
	I1030 19:10:36.821345  417097 command_runner.go:130] >       "pinned": false
	I1030 19:10:36.821349  417097 command_runner.go:130] >     },
	I1030 19:10:36.821354  417097 command_runner.go:130] >     {
	I1030 19:10:36.821361  417097 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1030 19:10:36.821367  417097 command_runner.go:130] >       "repoTags": [
	I1030 19:10:36.821373  417097 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1030 19:10:36.821378  417097 command_runner.go:130] >       ],
	I1030 19:10:36.821382  417097 command_runner.go:130] >       "repoDigests": [
	I1030 19:10:36.821405  417097 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1030 19:10:36.821415  417097 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1030 19:10:36.821421  417097 command_runner.go:130] >       ],
	I1030 19:10:36.821425  417097 command_runner.go:130] >       "size": "89474374",
	I1030 19:10:36.821431  417097 command_runner.go:130] >       "uid": {
	I1030 19:10:36.821435  417097 command_runner.go:130] >         "value": "0"
	I1030 19:10:36.821440  417097 command_runner.go:130] >       },
	I1030 19:10:36.821445  417097 command_runner.go:130] >       "username": "",
	I1030 19:10:36.821448  417097 command_runner.go:130] >       "spec": null,
	I1030 19:10:36.821452  417097 command_runner.go:130] >       "pinned": false
	I1030 19:10:36.821456  417097 command_runner.go:130] >     },
	I1030 19:10:36.821459  417097 command_runner.go:130] >     {
	I1030 19:10:36.821465  417097 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1030 19:10:36.821473  417097 command_runner.go:130] >       "repoTags": [
	I1030 19:10:36.821478  417097 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1030 19:10:36.821482  417097 command_runner.go:130] >       ],
	I1030 19:10:36.821486  417097 command_runner.go:130] >       "repoDigests": [
	I1030 19:10:36.821492  417097 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1030 19:10:36.821501  417097 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1030 19:10:36.821507  417097 command_runner.go:130] >       ],
	I1030 19:10:36.821512  417097 command_runner.go:130] >       "size": "92783513",
	I1030 19:10:36.821517  417097 command_runner.go:130] >       "uid": null,
	I1030 19:10:36.821521  417097 command_runner.go:130] >       "username": "",
	I1030 19:10:36.821529  417097 command_runner.go:130] >       "spec": null,
	I1030 19:10:36.821533  417097 command_runner.go:130] >       "pinned": false
	I1030 19:10:36.821539  417097 command_runner.go:130] >     },
	I1030 19:10:36.821550  417097 command_runner.go:130] >     {
	I1030 19:10:36.821561  417097 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1030 19:10:36.821567  417097 command_runner.go:130] >       "repoTags": [
	I1030 19:10:36.821572  417097 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1030 19:10:36.821578  417097 command_runner.go:130] >       ],
	I1030 19:10:36.821582  417097 command_runner.go:130] >       "repoDigests": [
	I1030 19:10:36.821591  417097 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1030 19:10:36.821601  417097 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1030 19:10:36.821606  417097 command_runner.go:130] >       ],
	I1030 19:10:36.821611  417097 command_runner.go:130] >       "size": "68457798",
	I1030 19:10:36.821616  417097 command_runner.go:130] >       "uid": {
	I1030 19:10:36.821620  417097 command_runner.go:130] >         "value": "0"
	I1030 19:10:36.821626  417097 command_runner.go:130] >       },
	I1030 19:10:36.821629  417097 command_runner.go:130] >       "username": "",
	I1030 19:10:36.821633  417097 command_runner.go:130] >       "spec": null,
	I1030 19:10:36.821640  417097 command_runner.go:130] >       "pinned": false
	I1030 19:10:36.821643  417097 command_runner.go:130] >     },
	I1030 19:10:36.821647  417097 command_runner.go:130] >     {
	I1030 19:10:36.821653  417097 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1030 19:10:36.821659  417097 command_runner.go:130] >       "repoTags": [
	I1030 19:10:36.821668  417097 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1030 19:10:36.821675  417097 command_runner.go:130] >       ],
	I1030 19:10:36.821682  417097 command_runner.go:130] >       "repoDigests": [
	I1030 19:10:36.821695  417097 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1030 19:10:36.821709  417097 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1030 19:10:36.821717  417097 command_runner.go:130] >       ],
	I1030 19:10:36.821726  417097 command_runner.go:130] >       "size": "742080",
	I1030 19:10:36.821732  417097 command_runner.go:130] >       "uid": {
	I1030 19:10:36.821741  417097 command_runner.go:130] >         "value": "65535"
	I1030 19:10:36.821748  417097 command_runner.go:130] >       },
	I1030 19:10:36.821755  417097 command_runner.go:130] >       "username": "",
	I1030 19:10:36.821763  417097 command_runner.go:130] >       "spec": null,
	I1030 19:10:36.821772  417097 command_runner.go:130] >       "pinned": true
	I1030 19:10:36.821778  417097 command_runner.go:130] >     }
	I1030 19:10:36.821785  417097 command_runner.go:130] >   ]
	I1030 19:10:36.821789  417097 command_runner.go:130] > }
	I1030 19:10:36.821996  417097 crio.go:514] all images are preloaded for cri-o runtime.
	I1030 19:10:36.822008  417097 crio.go:433] Images already preloaded, skipping extraction
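The two crio.go messages above reflect minikube's preload check: the JSON emitted by "sudo crictl images --output json" is parsed and the repo tags are compared against the images expected for the requested Kubernetes version, and only if something is missing does extraction of the preload tarball happen. A minimal standalone Go sketch of that kind of check is below; the struct fields mirror the JSON shown in the log, while the function and variable names are illustrative assumptions, not minikube's actual API.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList mirrors the JSON printed by `crictl images --output json` in the
// log above: a top-level "images" array whose entries carry "repoTags".
type imageList struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// allImagesPresent is an illustrative check (not minikube's real code): it
// returns true when every required tag appears in crictl's output.
func allImagesPresent(required []string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range required {
		if !have[want] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	// Tags taken from the image list captured above.
	ok, err := allImagesPresent([]string{
		"registry.k8s.io/kube-apiserver:v1.31.2",
		"registry.k8s.io/kube-proxy:v1.31.2",
		"registry.k8s.io/pause:3.10",
	})
	fmt.Println(ok, err)
}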
	I1030 19:10:36.822057  417097 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:10:36.855958  417097 command_runner.go:130] > {
	I1030 19:10:36.855984  417097 command_runner.go:130] >   "images": [
	I1030 19:10:36.855988  417097 command_runner.go:130] >     {
	I1030 19:10:36.855996  417097 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1030 19:10:36.856002  417097 command_runner.go:130] >       "repoTags": [
	I1030 19:10:36.856008  417097 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1030 19:10:36.856012  417097 command_runner.go:130] >       ],
	I1030 19:10:36.856018  417097 command_runner.go:130] >       "repoDigests": [
	I1030 19:10:36.856029  417097 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1030 19:10:36.856037  417097 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1030 19:10:36.856040  417097 command_runner.go:130] >       ],
	I1030 19:10:36.856044  417097 command_runner.go:130] >       "size": "94965812",
	I1030 19:10:36.856051  417097 command_runner.go:130] >       "uid": null,
	I1030 19:10:36.856055  417097 command_runner.go:130] >       "username": "",
	I1030 19:10:36.856064  417097 command_runner.go:130] >       "spec": null,
	I1030 19:10:36.856068  417097 command_runner.go:130] >       "pinned": false
	I1030 19:10:36.856074  417097 command_runner.go:130] >     },
	I1030 19:10:36.856077  417097 command_runner.go:130] >     {
	I1030 19:10:36.856083  417097 command_runner.go:130] >       "id": "9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5",
	I1030 19:10:36.856087  417097 command_runner.go:130] >       "repoTags": [
	I1030 19:10:36.856092  417097 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241023-a345ebe4"
	I1030 19:10:36.856098  417097 command_runner.go:130] >       ],
	I1030 19:10:36.856109  417097 command_runner.go:130] >       "repoDigests": [
	I1030 19:10:36.856119  417097 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16",
	I1030 19:10:36.856126  417097 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e39a44bd13d0b4532d0436a1c2fafdd1a8c57fb327770004098162f0bb96132d"
	I1030 19:10:36.856132  417097 command_runner.go:130] >       ],
	I1030 19:10:36.856137  417097 command_runner.go:130] >       "size": "94958644",
	I1030 19:10:36.856143  417097 command_runner.go:130] >       "uid": null,
	I1030 19:10:36.856153  417097 command_runner.go:130] >       "username": "",
	I1030 19:10:36.856158  417097 command_runner.go:130] >       "spec": null,
	I1030 19:10:36.856162  417097 command_runner.go:130] >       "pinned": false
	I1030 19:10:36.856168  417097 command_runner.go:130] >     },
	I1030 19:10:36.856171  417097 command_runner.go:130] >     {
	I1030 19:10:36.856177  417097 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1030 19:10:36.856181  417097 command_runner.go:130] >       "repoTags": [
	I1030 19:10:36.856187  417097 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1030 19:10:36.856193  417097 command_runner.go:130] >       ],
	I1030 19:10:36.856196  417097 command_runner.go:130] >       "repoDigests": [
	I1030 19:10:36.856203  417097 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1030 19:10:36.856210  417097 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1030 19:10:36.856216  417097 command_runner.go:130] >       ],
	I1030 19:10:36.856221  417097 command_runner.go:130] >       "size": "1363676",
	I1030 19:10:36.856225  417097 command_runner.go:130] >       "uid": null,
	I1030 19:10:36.856231  417097 command_runner.go:130] >       "username": "",
	I1030 19:10:36.856235  417097 command_runner.go:130] >       "spec": null,
	I1030 19:10:36.856239  417097 command_runner.go:130] >       "pinned": false
	I1030 19:10:36.856243  417097 command_runner.go:130] >     },
	I1030 19:10:36.856246  417097 command_runner.go:130] >     {
	I1030 19:10:36.856252  417097 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1030 19:10:36.856259  417097 command_runner.go:130] >       "repoTags": [
	I1030 19:10:36.856269  417097 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1030 19:10:36.856275  417097 command_runner.go:130] >       ],
	I1030 19:10:36.856279  417097 command_runner.go:130] >       "repoDigests": [
	I1030 19:10:36.856287  417097 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1030 19:10:36.856301  417097 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1030 19:10:36.856310  417097 command_runner.go:130] >       ],
	I1030 19:10:36.856315  417097 command_runner.go:130] >       "size": "31470524",
	I1030 19:10:36.856321  417097 command_runner.go:130] >       "uid": null,
	I1030 19:10:36.856325  417097 command_runner.go:130] >       "username": "",
	I1030 19:10:36.856332  417097 command_runner.go:130] >       "spec": null,
	I1030 19:10:36.856336  417097 command_runner.go:130] >       "pinned": false
	I1030 19:10:36.856342  417097 command_runner.go:130] >     },
	I1030 19:10:36.856345  417097 command_runner.go:130] >     {
	I1030 19:10:36.856351  417097 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1030 19:10:36.856357  417097 command_runner.go:130] >       "repoTags": [
	I1030 19:10:36.856362  417097 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1030 19:10:36.856368  417097 command_runner.go:130] >       ],
	I1030 19:10:36.856372  417097 command_runner.go:130] >       "repoDigests": [
	I1030 19:10:36.856380  417097 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1030 19:10:36.856391  417097 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1030 19:10:36.856396  417097 command_runner.go:130] >       ],
	I1030 19:10:36.856400  417097 command_runner.go:130] >       "size": "63273227",
	I1030 19:10:36.856404  417097 command_runner.go:130] >       "uid": null,
	I1030 19:10:36.856408  417097 command_runner.go:130] >       "username": "nonroot",
	I1030 19:10:36.856412  417097 command_runner.go:130] >       "spec": null,
	I1030 19:10:36.856418  417097 command_runner.go:130] >       "pinned": false
	I1030 19:10:36.856422  417097 command_runner.go:130] >     },
	I1030 19:10:36.856427  417097 command_runner.go:130] >     {
	I1030 19:10:36.856434  417097 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1030 19:10:36.856440  417097 command_runner.go:130] >       "repoTags": [
	I1030 19:10:36.856445  417097 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1030 19:10:36.856451  417097 command_runner.go:130] >       ],
	I1030 19:10:36.856455  417097 command_runner.go:130] >       "repoDigests": [
	I1030 19:10:36.856465  417097 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1030 19:10:36.856475  417097 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1030 19:10:36.856478  417097 command_runner.go:130] >       ],
	I1030 19:10:36.856483  417097 command_runner.go:130] >       "size": "149009664",
	I1030 19:10:36.856489  417097 command_runner.go:130] >       "uid": {
	I1030 19:10:36.856493  417097 command_runner.go:130] >         "value": "0"
	I1030 19:10:36.856498  417097 command_runner.go:130] >       },
	I1030 19:10:36.856502  417097 command_runner.go:130] >       "username": "",
	I1030 19:10:36.856507  417097 command_runner.go:130] >       "spec": null,
	I1030 19:10:36.856511  417097 command_runner.go:130] >       "pinned": false
	I1030 19:10:36.856514  417097 command_runner.go:130] >     },
	I1030 19:10:36.856518  417097 command_runner.go:130] >     {
	I1030 19:10:36.856524  417097 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1030 19:10:36.856528  417097 command_runner.go:130] >       "repoTags": [
	I1030 19:10:36.856533  417097 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1030 19:10:36.856537  417097 command_runner.go:130] >       ],
	I1030 19:10:36.856542  417097 command_runner.go:130] >       "repoDigests": [
	I1030 19:10:36.856550  417097 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1030 19:10:36.856559  417097 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1030 19:10:36.856562  417097 command_runner.go:130] >       ],
	I1030 19:10:36.856567  417097 command_runner.go:130] >       "size": "95274464",
	I1030 19:10:36.856571  417097 command_runner.go:130] >       "uid": {
	I1030 19:10:36.856575  417097 command_runner.go:130] >         "value": "0"
	I1030 19:10:36.856578  417097 command_runner.go:130] >       },
	I1030 19:10:36.856582  417097 command_runner.go:130] >       "username": "",
	I1030 19:10:36.856586  417097 command_runner.go:130] >       "spec": null,
	I1030 19:10:36.856590  417097 command_runner.go:130] >       "pinned": false
	I1030 19:10:36.856594  417097 command_runner.go:130] >     },
	I1030 19:10:36.856599  417097 command_runner.go:130] >     {
	I1030 19:10:36.856605  417097 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1030 19:10:36.856609  417097 command_runner.go:130] >       "repoTags": [
	I1030 19:10:36.856616  417097 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1030 19:10:36.856621  417097 command_runner.go:130] >       ],
	I1030 19:10:36.856625  417097 command_runner.go:130] >       "repoDigests": [
	I1030 19:10:36.856638  417097 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1030 19:10:36.856647  417097 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1030 19:10:36.856651  417097 command_runner.go:130] >       ],
	I1030 19:10:36.856657  417097 command_runner.go:130] >       "size": "89474374",
	I1030 19:10:36.856662  417097 command_runner.go:130] >       "uid": {
	I1030 19:10:36.856668  417097 command_runner.go:130] >         "value": "0"
	I1030 19:10:36.856671  417097 command_runner.go:130] >       },
	I1030 19:10:36.856677  417097 command_runner.go:130] >       "username": "",
	I1030 19:10:36.856681  417097 command_runner.go:130] >       "spec": null,
	I1030 19:10:36.856687  417097 command_runner.go:130] >       "pinned": false
	I1030 19:10:36.856690  417097 command_runner.go:130] >     },
	I1030 19:10:36.856694  417097 command_runner.go:130] >     {
	I1030 19:10:36.856700  417097 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1030 19:10:36.856706  417097 command_runner.go:130] >       "repoTags": [
	I1030 19:10:36.856711  417097 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1030 19:10:36.856714  417097 command_runner.go:130] >       ],
	I1030 19:10:36.856718  417097 command_runner.go:130] >       "repoDigests": [
	I1030 19:10:36.856725  417097 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1030 19:10:36.856734  417097 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1030 19:10:36.856738  417097 command_runner.go:130] >       ],
	I1030 19:10:36.856743  417097 command_runner.go:130] >       "size": "92783513",
	I1030 19:10:36.856749  417097 command_runner.go:130] >       "uid": null,
	I1030 19:10:36.856753  417097 command_runner.go:130] >       "username": "",
	I1030 19:10:36.856757  417097 command_runner.go:130] >       "spec": null,
	I1030 19:10:36.856763  417097 command_runner.go:130] >       "pinned": false
	I1030 19:10:36.856766  417097 command_runner.go:130] >     },
	I1030 19:10:36.856772  417097 command_runner.go:130] >     {
	I1030 19:10:36.856778  417097 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1030 19:10:36.856785  417097 command_runner.go:130] >       "repoTags": [
	I1030 19:10:36.856789  417097 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1030 19:10:36.856795  417097 command_runner.go:130] >       ],
	I1030 19:10:36.856799  417097 command_runner.go:130] >       "repoDigests": [
	I1030 19:10:36.856808  417097 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1030 19:10:36.856817  417097 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1030 19:10:36.856821  417097 command_runner.go:130] >       ],
	I1030 19:10:36.856825  417097 command_runner.go:130] >       "size": "68457798",
	I1030 19:10:36.856830  417097 command_runner.go:130] >       "uid": {
	I1030 19:10:36.856835  417097 command_runner.go:130] >         "value": "0"
	I1030 19:10:36.856841  417097 command_runner.go:130] >       },
	I1030 19:10:36.856845  417097 command_runner.go:130] >       "username": "",
	I1030 19:10:36.856849  417097 command_runner.go:130] >       "spec": null,
	I1030 19:10:36.856852  417097 command_runner.go:130] >       "pinned": false
	I1030 19:10:36.856856  417097 command_runner.go:130] >     },
	I1030 19:10:36.856860  417097 command_runner.go:130] >     {
	I1030 19:10:36.856866  417097 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1030 19:10:36.856873  417097 command_runner.go:130] >       "repoTags": [
	I1030 19:10:36.856877  417097 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1030 19:10:36.856881  417097 command_runner.go:130] >       ],
	I1030 19:10:36.856885  417097 command_runner.go:130] >       "repoDigests": [
	I1030 19:10:36.856891  417097 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1030 19:10:36.856899  417097 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1030 19:10:36.856905  417097 command_runner.go:130] >       ],
	I1030 19:10:36.856909  417097 command_runner.go:130] >       "size": "742080",
	I1030 19:10:36.856913  417097 command_runner.go:130] >       "uid": {
	I1030 19:10:36.856916  417097 command_runner.go:130] >         "value": "65535"
	I1030 19:10:36.856920  417097 command_runner.go:130] >       },
	I1030 19:10:36.856924  417097 command_runner.go:130] >       "username": "",
	I1030 19:10:36.856929  417097 command_runner.go:130] >       "spec": null,
	I1030 19:10:36.856933  417097 command_runner.go:130] >       "pinned": true
	I1030 19:10:36.856939  417097 command_runner.go:130] >     }
	I1030 19:10:36.856942  417097 command_runner.go:130] >   ]
	I1030 19:10:36.856945  417097 command_runner.go:130] > }
	I1030 19:10:36.857072  417097 crio.go:514] all images are preloaded for cri-o runtime.
	I1030 19:10:36.857084  417097 cache_images.go:84] Images are preloaded, skipping loading
	I1030 19:10:36.857092  417097 kubeadm.go:934] updating node { 192.168.39.241 8443 v1.31.2 crio true true} ...
	I1030 19:10:36.857223  417097 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-743795 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.241
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:multinode-743795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
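The kubeadm.go:946 block above is the kubelet systemd override minikube generates for this node: ExecStart is cleared and re-set so the kubelet runs with the bootstrap kubeconfig, the node's config.yaml, and a --hostname-override/--node-ip pair matching the KVM guest (192.168.39.241). A rough sketch of how such a drop-in could be rendered from those fields with Go's text/template follows; the template text and struct are illustrative assumptions based on the log line, not minikube's actual source.

package main

import (
	"os"
	"text/template"
)

// nodeFlags holds the values substituted into the kubelet override shown in
// the log: Kubernetes version, node name, and node IP. Names are illustrative.
type nodeFlags struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
}

// dropIn approximates the override captured above; only flags that appear in
// the logged ExecStart line are included.
const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	// Values taken from the log above.
	_ = t.Execute(os.Stdout, nodeFlags{
		KubernetesVersion: "v1.31.2",
		NodeName:          "multinode-743795",
		NodeIP:            "192.168.39.241",
	})
}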
	I1030 19:10:36.857332  417097 ssh_runner.go:195] Run: crio config
	I1030 19:10:36.898351  417097 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1030 19:10:36.898398  417097 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1030 19:10:36.898409  417097 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1030 19:10:36.898414  417097 command_runner.go:130] > #
	I1030 19:10:36.898425  417097 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1030 19:10:36.898435  417097 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1030 19:10:36.898448  417097 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1030 19:10:36.898466  417097 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1030 19:10:36.898477  417097 command_runner.go:130] > # reload'.
	I1030 19:10:36.898500  417097 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1030 19:10:36.898518  417097 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1030 19:10:36.898529  417097 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1030 19:10:36.898544  417097 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1030 19:10:36.898554  417097 command_runner.go:130] > [crio]
	I1030 19:10:36.898566  417097 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1030 19:10:36.898576  417097 command_runner.go:130] > # containers images, in this directory.
	I1030 19:10:36.898583  417097 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1030 19:10:36.898597  417097 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1030 19:10:36.898610  417097 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1030 19:10:36.898625  417097 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1030 19:10:36.898635  417097 command_runner.go:130] > # imagestore = ""
	I1030 19:10:36.898646  417097 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1030 19:10:36.898659  417097 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1030 19:10:36.898670  417097 command_runner.go:130] > storage_driver = "overlay"
	I1030 19:10:36.898683  417097 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1030 19:10:36.898694  417097 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1030 19:10:36.898701  417097 command_runner.go:130] > storage_option = [
	I1030 19:10:36.898712  417097 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1030 19:10:36.898719  417097 command_runner.go:130] > ]
	I1030 19:10:36.898731  417097 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1030 19:10:36.898745  417097 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1030 19:10:36.898755  417097 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1030 19:10:36.898766  417097 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1030 19:10:36.898779  417097 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1030 19:10:36.898790  417097 command_runner.go:130] > # always happen on a node reboot
	I1030 19:10:36.898801  417097 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1030 19:10:36.898821  417097 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1030 19:10:36.898831  417097 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1030 19:10:36.898838  417097 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1030 19:10:36.898847  417097 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1030 19:10:36.898861  417097 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1030 19:10:36.898877  417097 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1030 19:10:36.898887  417097 command_runner.go:130] > # internal_wipe = true
	I1030 19:10:36.898900  417097 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1030 19:10:36.898913  417097 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1030 19:10:36.898923  417097 command_runner.go:130] > # internal_repair = false
	I1030 19:10:36.898931  417097 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1030 19:10:36.898946  417097 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1030 19:10:36.898957  417097 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1030 19:10:36.898968  417097 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1030 19:10:36.898981  417097 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1030 19:10:36.898990  417097 command_runner.go:130] > [crio.api]
	I1030 19:10:36.898999  417097 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1030 19:10:36.899011  417097 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1030 19:10:36.899026  417097 command_runner.go:130] > # IP address on which the stream server will listen.
	I1030 19:10:36.899036  417097 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1030 19:10:36.899049  417097 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1030 19:10:36.899058  417097 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1030 19:10:36.899062  417097 command_runner.go:130] > # stream_port = "0"
	I1030 19:10:36.899073  417097 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1030 19:10:36.899083  417097 command_runner.go:130] > # stream_enable_tls = false
	I1030 19:10:36.899093  417097 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1030 19:10:36.899105  417097 command_runner.go:130] > # stream_idle_timeout = ""
	I1030 19:10:36.899116  417097 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1030 19:10:36.899130  417097 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1030 19:10:36.899138  417097 command_runner.go:130] > # minutes.
	I1030 19:10:36.899145  417097 command_runner.go:130] > # stream_tls_cert = ""
	I1030 19:10:36.899157  417097 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1030 19:10:36.899171  417097 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1030 19:10:36.899182  417097 command_runner.go:130] > # stream_tls_key = ""
	I1030 19:10:36.899192  417097 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1030 19:10:36.899205  417097 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1030 19:10:36.899224  417097 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1030 19:10:36.899233  417097 command_runner.go:130] > # stream_tls_ca = ""
	I1030 19:10:36.899245  417097 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1030 19:10:36.899254  417097 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1030 19:10:36.899263  417097 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1030 19:10:36.899274  417097 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1030 19:10:36.899288  417097 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1030 19:10:36.899297  417097 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1030 19:10:36.899308  417097 command_runner.go:130] > [crio.runtime]
	I1030 19:10:36.899320  417097 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1030 19:10:36.899331  417097 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1030 19:10:36.899339  417097 command_runner.go:130] > # "nofile=1024:2048"
	I1030 19:10:36.899346  417097 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1030 19:10:36.899355  417097 command_runner.go:130] > # default_ulimits = [
	I1030 19:10:36.899369  417097 command_runner.go:130] > # ]
	I1030 19:10:36.899380  417097 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1030 19:10:36.899390  417097 command_runner.go:130] > # no_pivot = false
	I1030 19:10:36.899399  417097 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1030 19:10:36.899413  417097 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1030 19:10:36.899424  417097 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1030 19:10:36.899437  417097 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1030 19:10:36.899454  417097 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1030 19:10:36.899468  417097 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1030 19:10:36.899480  417097 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1030 19:10:36.899489  417097 command_runner.go:130] > # Cgroup setting for conmon
	I1030 19:10:36.899501  417097 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1030 19:10:36.899510  417097 command_runner.go:130] > conmon_cgroup = "pod"
	I1030 19:10:36.899520  417097 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1030 19:10:36.899533  417097 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1030 19:10:36.899544  417097 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1030 19:10:36.899553  417097 command_runner.go:130] > conmon_env = [
	I1030 19:10:36.899565  417097 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1030 19:10:36.899577  417097 command_runner.go:130] > ]
	I1030 19:10:36.899588  417097 command_runner.go:130] > # Additional environment variables to set for all the
	I1030 19:10:36.899600  417097 command_runner.go:130] > # containers. These are overridden if set in the
	I1030 19:10:36.899612  417097 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1030 19:10:36.899621  417097 command_runner.go:130] > # default_env = [
	I1030 19:10:36.899626  417097 command_runner.go:130] > # ]
	I1030 19:10:36.899640  417097 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1030 19:10:36.899653  417097 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1030 19:10:36.899663  417097 command_runner.go:130] > # selinux = false
	I1030 19:10:36.899674  417097 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1030 19:10:36.899688  417097 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1030 19:10:36.899699  417097 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1030 19:10:36.899707  417097 command_runner.go:130] > # seccomp_profile = ""
	I1030 19:10:36.899718  417097 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1030 19:10:36.899732  417097 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1030 19:10:36.899744  417097 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1030 19:10:36.899754  417097 command_runner.go:130] > # which might increase security.
	I1030 19:10:36.899765  417097 command_runner.go:130] > # This option is currently deprecated,
	I1030 19:10:36.899777  417097 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1030 19:10:36.899787  417097 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1030 19:10:36.899801  417097 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1030 19:10:36.899814  417097 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1030 19:10:36.899828  417097 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1030 19:10:36.899841  417097 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1030 19:10:36.899852  417097 command_runner.go:130] > # This option supports live configuration reload.
	I1030 19:10:36.899862  417097 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1030 19:10:36.899871  417097 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1030 19:10:36.899881  417097 command_runner.go:130] > # the cgroup blockio controller.
	I1030 19:10:36.899888  417097 command_runner.go:130] > # blockio_config_file = ""
	I1030 19:10:36.899901  417097 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1030 19:10:36.899909  417097 command_runner.go:130] > # blockio parameters.
	I1030 19:10:36.899919  417097 command_runner.go:130] > # blockio_reload = false
	I1030 19:10:36.899929  417097 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1030 19:10:36.899939  417097 command_runner.go:130] > # irqbalance daemon.
	I1030 19:10:36.899947  417097 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1030 19:10:36.899960  417097 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1030 19:10:36.899974  417097 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1030 19:10:36.899986  417097 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1030 19:10:36.899998  417097 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1030 19:10:36.900011  417097 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1030 19:10:36.900023  417097 command_runner.go:130] > # This option supports live configuration reload.
	I1030 19:10:36.900032  417097 command_runner.go:130] > # rdt_config_file = ""
	I1030 19:10:36.900041  417097 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1030 19:10:36.900051  417097 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1030 19:10:36.900071  417097 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1030 19:10:36.900083  417097 command_runner.go:130] > # separate_pull_cgroup = ""
	I1030 19:10:36.900094  417097 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1030 19:10:36.900106  417097 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1030 19:10:36.900116  417097 command_runner.go:130] > # will be added.
	I1030 19:10:36.900123  417097 command_runner.go:130] > # default_capabilities = [
	I1030 19:10:36.900135  417097 command_runner.go:130] > # 	"CHOWN",
	I1030 19:10:36.900141  417097 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1030 19:10:36.900147  417097 command_runner.go:130] > # 	"FSETID",
	I1030 19:10:36.900156  417097 command_runner.go:130] > # 	"FOWNER",
	I1030 19:10:36.900163  417097 command_runner.go:130] > # 	"SETGID",
	I1030 19:10:36.900171  417097 command_runner.go:130] > # 	"SETUID",
	I1030 19:10:36.900175  417097 command_runner.go:130] > # 	"SETPCAP",
	I1030 19:10:36.900182  417097 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1030 19:10:36.900186  417097 command_runner.go:130] > # 	"KILL",
	I1030 19:10:36.900189  417097 command_runner.go:130] > # ]
	I1030 19:10:36.900198  417097 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1030 19:10:36.900211  417097 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1030 19:10:36.900222  417097 command_runner.go:130] > # add_inheritable_capabilities = false
	I1030 19:10:36.900234  417097 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1030 19:10:36.900247  417097 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1030 19:10:36.900257  417097 command_runner.go:130] > default_sysctls = [
	I1030 19:10:36.900265  417097 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1030 19:10:36.900272  417097 command_runner.go:130] > ]
	I1030 19:10:36.900280  417097 command_runner.go:130] > # List of devices on the host that a
	I1030 19:10:36.900292  417097 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1030 19:10:36.900299  417097 command_runner.go:130] > # allowed_devices = [
	I1030 19:10:36.900303  417097 command_runner.go:130] > # 	"/dev/fuse",
	I1030 19:10:36.900309  417097 command_runner.go:130] > # ]
	I1030 19:10:36.900317  417097 command_runner.go:130] > # List of additional devices. specified as
	I1030 19:10:36.900332  417097 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1030 19:10:36.900345  417097 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1030 19:10:36.900361  417097 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1030 19:10:36.900372  417097 command_runner.go:130] > # additional_devices = [
	I1030 19:10:36.900376  417097 command_runner.go:130] > # ]
	I1030 19:10:36.900384  417097 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1030 19:10:36.900394  417097 command_runner.go:130] > # cdi_spec_dirs = [
	I1030 19:10:36.900401  417097 command_runner.go:130] > # 	"/etc/cdi",
	I1030 19:10:36.900410  417097 command_runner.go:130] > # 	"/var/run/cdi",
	I1030 19:10:36.900416  417097 command_runner.go:130] > # ]
	I1030 19:10:36.900429  417097 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1030 19:10:36.900442  417097 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1030 19:10:36.900453  417097 command_runner.go:130] > # Defaults to false.
	I1030 19:10:36.900464  417097 command_runner.go:130] > # device_ownership_from_security_context = false
	I1030 19:10:36.900478  417097 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1030 19:10:36.900488  417097 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1030 19:10:36.900497  417097 command_runner.go:130] > # hooks_dir = [
	I1030 19:10:36.900505  417097 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1030 19:10:36.900513  417097 command_runner.go:130] > # ]
	I1030 19:10:36.900522  417097 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1030 19:10:36.900535  417097 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1030 19:10:36.900545  417097 command_runner.go:130] > # its default mounts from the following two files:
	I1030 19:10:36.900549  417097 command_runner.go:130] > #
	I1030 19:10:36.900557  417097 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1030 19:10:36.900593  417097 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1030 19:10:36.900607  417097 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1030 19:10:36.900612  417097 command_runner.go:130] > #
	I1030 19:10:36.900622  417097 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1030 19:10:36.900632  417097 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1030 19:10:36.900641  417097 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1030 19:10:36.900650  417097 command_runner.go:130] > #      only add mounts it finds in this file.
	I1030 19:10:36.900659  417097 command_runner.go:130] > #
	I1030 19:10:36.900667  417097 command_runner.go:130] > # default_mounts_file = ""
	I1030 19:10:36.900679  417097 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1030 19:10:36.900694  417097 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1030 19:10:36.900703  417097 command_runner.go:130] > pids_limit = 1024
	I1030 19:10:36.900713  417097 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1030 19:10:36.900727  417097 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1030 19:10:36.900740  417097 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1030 19:10:36.900757  417097 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1030 19:10:36.900766  417097 command_runner.go:130] > # log_size_max = -1
	I1030 19:10:36.900777  417097 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1030 19:10:36.900787  417097 command_runner.go:130] > # log_to_journald = false
	I1030 19:10:36.900800  417097 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1030 19:10:36.900807  417097 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1030 19:10:36.900815  417097 command_runner.go:130] > # Path to directory for container attach sockets.
	I1030 19:10:36.900827  417097 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1030 19:10:36.900839  417097 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1030 19:10:36.900848  417097 command_runner.go:130] > # bind_mount_prefix = ""
	I1030 19:10:36.900857  417097 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1030 19:10:36.900867  417097 command_runner.go:130] > # read_only = false
	I1030 19:10:36.900877  417097 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1030 19:10:36.900889  417097 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1030 19:10:36.900896  417097 command_runner.go:130] > # live configuration reload.
	I1030 19:10:36.900907  417097 command_runner.go:130] > # log_level = "info"
	I1030 19:10:36.900916  417097 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1030 19:10:36.900928  417097 command_runner.go:130] > # This option supports live configuration reload.
	I1030 19:10:36.900934  417097 command_runner.go:130] > # log_filter = ""
	I1030 19:10:36.900944  417097 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1030 19:10:36.900955  417097 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1030 19:10:36.900965  417097 command_runner.go:130] > # separated by comma.
	I1030 19:10:36.900981  417097 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1030 19:10:36.900991  417097 command_runner.go:130] > # uid_mappings = ""
	I1030 19:10:36.901001  417097 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1030 19:10:36.901013  417097 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1030 19:10:36.901024  417097 command_runner.go:130] > # separated by comma.
	I1030 19:10:36.901038  417097 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1030 19:10:36.901047  417097 command_runner.go:130] > # gid_mappings = ""
	I1030 19:10:36.901060  417097 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1030 19:10:36.901073  417097 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1030 19:10:36.901087  417097 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1030 19:10:36.901103  417097 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1030 19:10:36.901114  417097 command_runner.go:130] > # minimum_mappable_uid = -1
	I1030 19:10:36.901127  417097 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1030 19:10:36.901139  417097 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1030 19:10:36.901159  417097 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1030 19:10:36.901179  417097 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1030 19:10:36.901190  417097 command_runner.go:130] > # minimum_mappable_gid = -1
	I1030 19:10:36.901200  417097 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1030 19:10:36.901214  417097 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1030 19:10:36.901226  417097 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1030 19:10:36.901235  417097 command_runner.go:130] > # ctr_stop_timeout = 30
	I1030 19:10:36.901245  417097 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1030 19:10:36.901255  417097 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1030 19:10:36.901260  417097 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1030 19:10:36.901270  417097 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1030 19:10:36.901274  417097 command_runner.go:130] > drop_infra_ctr = false
	I1030 19:10:36.901281  417097 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1030 19:10:36.901286  417097 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1030 19:10:36.901294  417097 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1030 19:10:36.901298  417097 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1030 19:10:36.901307  417097 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1030 19:10:36.901312  417097 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1030 19:10:36.901318  417097 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1030 19:10:36.901325  417097 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1030 19:10:36.901331  417097 command_runner.go:130] > # shared_cpuset = ""
	I1030 19:10:36.901337  417097 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1030 19:10:36.901344  417097 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1030 19:10:36.901349  417097 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1030 19:10:36.901361  417097 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1030 19:10:36.901369  417097 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1030 19:10:36.901375  417097 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1030 19:10:36.901383  417097 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1030 19:10:36.901387  417097 command_runner.go:130] > # enable_criu_support = false
	I1030 19:10:36.901394  417097 command_runner.go:130] > # Enable/disable the generation of the container,
	I1030 19:10:36.901400  417097 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1030 19:10:36.901405  417097 command_runner.go:130] > # enable_pod_events = false
	I1030 19:10:36.901411  417097 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1030 19:10:36.901420  417097 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1030 19:10:36.901425  417097 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1030 19:10:36.901431  417097 command_runner.go:130] > # default_runtime = "runc"
	I1030 19:10:36.901436  417097 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1030 19:10:36.901443  417097 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1030 19:10:36.901454  417097 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1030 19:10:36.901461  417097 command_runner.go:130] > # creation as a file is not desired either.
	I1030 19:10:36.901470  417097 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1030 19:10:36.901477  417097 command_runner.go:130] > # the hostname is being managed dynamically.
	I1030 19:10:36.901482  417097 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1030 19:10:36.901485  417097 command_runner.go:130] > # ]
	I1030 19:10:36.901491  417097 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1030 19:10:36.901500  417097 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1030 19:10:36.901506  417097 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1030 19:10:36.901513  417097 command_runner.go:130] > # Each entry in the table should follow the format:
	I1030 19:10:36.901516  417097 command_runner.go:130] > #
	I1030 19:10:36.901521  417097 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1030 19:10:36.901525  417097 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1030 19:10:36.901551  417097 command_runner.go:130] > # runtime_type = "oci"
	I1030 19:10:36.901558  417097 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1030 19:10:36.901562  417097 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1030 19:10:36.901569  417097 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1030 19:10:36.901574  417097 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1030 19:10:36.901578  417097 command_runner.go:130] > # monitor_env = []
	I1030 19:10:36.901583  417097 command_runner.go:130] > # privileged_without_host_devices = false
	I1030 19:10:36.901588  417097 command_runner.go:130] > # allowed_annotations = []
	I1030 19:10:36.901599  417097 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1030 19:10:36.901605  417097 command_runner.go:130] > # Where:
	I1030 19:10:36.901611  417097 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1030 19:10:36.901619  417097 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1030 19:10:36.901625  417097 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1030 19:10:36.901633  417097 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1030 19:10:36.901637  417097 command_runner.go:130] > #   in $PATH.
	I1030 19:10:36.901645  417097 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1030 19:10:36.901650  417097 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1030 19:10:36.901659  417097 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1030 19:10:36.901662  417097 command_runner.go:130] > #   state.
	I1030 19:10:36.901668  417097 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1030 19:10:36.901675  417097 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1030 19:10:36.901683  417097 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1030 19:10:36.901688  417097 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1030 19:10:36.901694  417097 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1030 19:10:36.901701  417097 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1030 19:10:36.901708  417097 command_runner.go:130] > #   The currently recognized values are:
	I1030 19:10:36.901714  417097 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1030 19:10:36.901723  417097 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1030 19:10:36.901729  417097 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1030 19:10:36.901738  417097 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1030 19:10:36.901745  417097 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1030 19:10:36.901753  417097 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1030 19:10:36.901759  417097 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1030 19:10:36.901768  417097 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1030 19:10:36.901775  417097 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1030 19:10:36.901783  417097 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1030 19:10:36.901787  417097 command_runner.go:130] > #   deprecated option "conmon".
	I1030 19:10:36.901794  417097 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1030 19:10:36.901801  417097 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1030 19:10:36.901808  417097 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1030 19:10:36.901815  417097 command_runner.go:130] > #   should be moved to the container's cgroup
	I1030 19:10:36.901821  417097 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1030 19:10:36.901828  417097 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1030 19:10:36.901834  417097 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1030 19:10:36.901841  417097 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1030 19:10:36.901844  417097 command_runner.go:130] > #
	I1030 19:10:36.901849  417097 command_runner.go:130] > # Using the seccomp notifier feature:
	I1030 19:10:36.901855  417097 command_runner.go:130] > #
	I1030 19:10:36.901860  417097 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1030 19:10:36.901868  417097 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1030 19:10:36.901872  417097 command_runner.go:130] > #
	I1030 19:10:36.901877  417097 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1030 19:10:36.901885  417097 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1030 19:10:36.901888  417097 command_runner.go:130] > #
	I1030 19:10:36.901896  417097 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1030 19:10:36.901900  417097 command_runner.go:130] > # feature.
	I1030 19:10:36.901903  417097 command_runner.go:130] > #
	I1030 19:10:36.901909  417097 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1030 19:10:36.901917  417097 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1030 19:10:36.901923  417097 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1030 19:10:36.901931  417097 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1030 19:10:36.901937  417097 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1030 19:10:36.901940  417097 command_runner.go:130] > #
	I1030 19:10:36.901945  417097 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1030 19:10:36.901952  417097 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1030 19:10:36.901956  417097 command_runner.go:130] > #
	I1030 19:10:36.901962  417097 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1030 19:10:36.901969  417097 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1030 19:10:36.901972  417097 command_runner.go:130] > #
	I1030 19:10:36.901978  417097 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1030 19:10:36.901985  417097 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1030 19:10:36.901989  417097 command_runner.go:130] > # limitation.
	I1030 19:10:36.901996  417097 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1030 19:10:36.902001  417097 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1030 19:10:36.902007  417097 command_runner.go:130] > runtime_type = "oci"
	I1030 19:10:36.902011  417097 command_runner.go:130] > runtime_root = "/run/runc"
	I1030 19:10:36.902015  417097 command_runner.go:130] > runtime_config_path = ""
	I1030 19:10:36.902022  417097 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1030 19:10:36.902026  417097 command_runner.go:130] > monitor_cgroup = "pod"
	I1030 19:10:36.902032  417097 command_runner.go:130] > monitor_exec_cgroup = ""
	I1030 19:10:36.902035  417097 command_runner.go:130] > monitor_env = [
	I1030 19:10:36.902042  417097 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1030 19:10:36.902048  417097 command_runner.go:130] > ]
	I1030 19:10:36.902052  417097 command_runner.go:130] > privileged_without_host_devices = false
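	The runtime handler table above keeps the defaults and does not list any allowed_annotations. A minimal sketch (not part of this run's configuration, assuming the crio.conf TOML syntax shown here) of what enabling the seccomp notifier described in the comments would look like for this runc handler:

	[crio.runtime.runtimes.runc]
	runtime_path = "/usr/bin/runc"
	runtime_type = "oci"
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]
	# On the Pod sandbox, set the annotation
	#   io.kubernetes.cri-o.seccompNotifierAction: "stop"
	# and restartPolicy: Never, so the kubelet does not immediately restart the
	# container that CRI-O terminates after the 5 second notifier timeout.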
	I1030 19:10:36.902058  417097 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1030 19:10:36.902064  417097 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1030 19:10:36.902093  417097 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1030 19:10:36.902106  417097 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1030 19:10:36.902117  417097 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1030 19:10:36.902125  417097 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1030 19:10:36.902134  417097 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1030 19:10:36.902144  417097 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1030 19:10:36.902149  417097 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1030 19:10:36.902158  417097 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1030 19:10:36.902162  417097 command_runner.go:130] > # Example:
	I1030 19:10:36.902167  417097 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1030 19:10:36.902172  417097 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1030 19:10:36.902176  417097 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1030 19:10:36.902181  417097 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1030 19:10:36.902185  417097 command_runner.go:130] > # cpuset = 0
	I1030 19:10:36.902188  417097 command_runner.go:130] > # cpushares = "0-1"
	I1030 19:10:36.902191  417097 command_runner.go:130] > # Where:
	I1030 19:10:36.902196  417097 command_runner.go:130] > # The workload name is workload-type.
	I1030 19:10:36.902204  417097 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1030 19:10:36.902209  417097 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1030 19:10:36.902215  417097 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1030 19:10:36.902223  417097 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1030 19:10:36.902228  417097 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
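	Putting the example above together, a pod opting into the "workload-type" workload would carry the activation annotation (key only, value ignored) and, optionally, a per-container override in the form quoted above. A rough sketch as a pod manifest, with hypothetical pod and container names:

	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo                                      # hypothetical
	  annotations:
	    io.crio/workload: ""                                   # activation annotation; value is ignored
	    io.crio.workload-type/app: '{"cpushares": "512"}'      # per-container override, following the example above
	spec:
	  containers:
	  - name: app
	    image: registry.k8s.io/pause:3.10                      # placeholder image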
	I1030 19:10:36.902233  417097 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1030 19:10:36.902240  417097 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1030 19:10:36.902244  417097 command_runner.go:130] > # Default value is set to true
	I1030 19:10:36.902248  417097 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1030 19:10:36.902253  417097 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1030 19:10:36.902258  417097 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1030 19:10:36.902262  417097 command_runner.go:130] > # Default value is set to 'false'
	I1030 19:10:36.902266  417097 command_runner.go:130] > # disable_hostport_mapping = false
	I1030 19:10:36.902272  417097 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1030 19:10:36.902275  417097 command_runner.go:130] > #
	I1030 19:10:36.902280  417097 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1030 19:10:36.902285  417097 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1030 19:10:36.902291  417097 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1030 19:10:36.902297  417097 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1030 19:10:36.902302  417097 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1030 19:10:36.902306  417097 command_runner.go:130] > [crio.image]
	I1030 19:10:36.902311  417097 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1030 19:10:36.902316  417097 command_runner.go:130] > # default_transport = "docker://"
	I1030 19:10:36.902321  417097 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1030 19:10:36.902328  417097 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1030 19:10:36.902332  417097 command_runner.go:130] > # global_auth_file = ""
	I1030 19:10:36.902337  417097 command_runner.go:130] > # The image used to instantiate infra containers.
	I1030 19:10:36.902341  417097 command_runner.go:130] > # This option supports live configuration reload.
	I1030 19:10:36.902348  417097 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1030 19:10:36.902354  417097 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1030 19:10:36.902362  417097 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1030 19:10:36.902367  417097 command_runner.go:130] > # This option supports live configuration reload.
	I1030 19:10:36.902370  417097 command_runner.go:130] > # pause_image_auth_file = ""
	I1030 19:10:36.902376  417097 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1030 19:10:36.902384  417097 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1030 19:10:36.902390  417097 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1030 19:10:36.902399  417097 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1030 19:10:36.902403  417097 command_runner.go:130] > # pause_command = "/pause"
	I1030 19:10:36.902409  417097 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1030 19:10:36.902415  417097 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1030 19:10:36.902421  417097 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1030 19:10:36.902427  417097 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1030 19:10:36.902433  417097 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1030 19:10:36.902440  417097 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1030 19:10:36.902444  417097 command_runner.go:130] > # pinned_images = [
	I1030 19:10:36.902447  417097 command_runner.go:130] > # ]
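	The three pattern styles described above could be combined as follows; the image names are purely illustrative and not taken from this run:

	# pinned_images = [
	# 	"registry.k8s.io/pause:3.10",    # exact: must match the entire name
	# 	"registry.k8s.io/kube-*",        # glob: wildcard only at the end
	# 	"*coredns*",                     # keyword: wildcards on both ends
	# ]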
	I1030 19:10:36.902453  417097 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1030 19:10:36.902462  417097 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1030 19:10:36.902470  417097 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1030 19:10:36.902476  417097 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1030 19:10:36.902494  417097 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1030 19:10:36.902502  417097 command_runner.go:130] > # signature_policy = ""
	I1030 19:10:36.902513  417097 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1030 19:10:36.902526  417097 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1030 19:10:36.902534  417097 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1030 19:10:36.902540  417097 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1030 19:10:36.902549  417097 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1030 19:10:36.902554  417097 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
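	For example, with the default signature_policy_dir shown above, an image pulled for the pod namespace "kube-system" would be evaluated against /etc/crio/policies/kube-system.json; if no namespace is supplied in the sandbox config, or that file does not exist, CRI-O falls back to signature_policy or the system-wide /etc/containers/policy.json.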
	I1030 19:10:36.902563  417097 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1030 19:10:36.902569  417097 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1030 19:10:36.902575  417097 command_runner.go:130] > # changing them here.
	I1030 19:10:36.902579  417097 command_runner.go:130] > # insecure_registries = [
	I1030 19:10:36.902583  417097 command_runner.go:130] > # ]
	I1030 19:10:36.902588  417097 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1030 19:10:36.902594  417097 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1030 19:10:36.902599  417097 command_runner.go:130] > # image_volumes = "mkdir"
	I1030 19:10:36.902604  417097 command_runner.go:130] > # Temporary directory to use for storing big files
	I1030 19:10:36.902610  417097 command_runner.go:130] > # big_files_temporary_dir = ""
	I1030 19:10:36.902616  417097 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1030 19:10:36.902623  417097 command_runner.go:130] > # CNI plugins.
	I1030 19:10:36.902627  417097 command_runner.go:130] > [crio.network]
	I1030 19:10:36.902637  417097 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1030 19:10:36.902644  417097 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1030 19:10:36.902649  417097 command_runner.go:130] > # cni_default_network = ""
	I1030 19:10:36.902655  417097 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1030 19:10:36.902660  417097 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1030 19:10:36.902665  417097 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1030 19:10:36.902671  417097 command_runner.go:130] > # plugin_dirs = [
	I1030 19:10:36.902675  417097 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1030 19:10:36.902678  417097 command_runner.go:130] > # ]
	I1030 19:10:36.902684  417097 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1030 19:10:36.902690  417097 command_runner.go:130] > [crio.metrics]
	I1030 19:10:36.902695  417097 command_runner.go:130] > # Globally enable or disable metrics support.
	I1030 19:10:36.902701  417097 command_runner.go:130] > enable_metrics = true
	I1030 19:10:36.902705  417097 command_runner.go:130] > # Specify enabled metrics collectors.
	I1030 19:10:36.902710  417097 command_runner.go:130] > # Per default all metrics are enabled.
	I1030 19:10:36.902718  417097 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1030 19:10:36.902724  417097 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1030 19:10:36.902731  417097 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1030 19:10:36.902735  417097 command_runner.go:130] > # metrics_collectors = [
	I1030 19:10:36.902741  417097 command_runner.go:130] > # 	"operations",
	I1030 19:10:36.902747  417097 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1030 19:10:36.902754  417097 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1030 19:10:36.902758  417097 command_runner.go:130] > # 	"operations_errors",
	I1030 19:10:36.902763  417097 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1030 19:10:36.902773  417097 command_runner.go:130] > # 	"image_pulls_by_name",
	I1030 19:10:36.902780  417097 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1030 19:10:36.902790  417097 command_runner.go:130] > # 	"image_pulls_failures",
	I1030 19:10:36.902795  417097 command_runner.go:130] > # 	"image_pulls_successes",
	I1030 19:10:36.902799  417097 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1030 19:10:36.902804  417097 command_runner.go:130] > # 	"image_layer_reuse",
	I1030 19:10:36.902810  417097 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1030 19:10:36.902815  417097 command_runner.go:130] > # 	"containers_oom_total",
	I1030 19:10:36.902821  417097 command_runner.go:130] > # 	"containers_oom",
	I1030 19:10:36.902826  417097 command_runner.go:130] > # 	"processes_defunct",
	I1030 19:10:36.902833  417097 command_runner.go:130] > # 	"operations_total",
	I1030 19:10:36.902841  417097 command_runner.go:130] > # 	"operations_latency_seconds",
	I1030 19:10:36.902852  417097 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1030 19:10:36.902862  417097 command_runner.go:130] > # 	"operations_errors_total",
	I1030 19:10:36.902872  417097 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1030 19:10:36.902883  417097 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1030 19:10:36.902892  417097 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1030 19:10:36.902897  417097 command_runner.go:130] > # 	"image_pulls_success_total",
	I1030 19:10:36.902903  417097 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1030 19:10:36.902907  417097 command_runner.go:130] > # 	"containers_oom_count_total",
	I1030 19:10:36.902916  417097 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1030 19:10:36.902923  417097 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1030 19:10:36.902931  417097 command_runner.go:130] > # ]
	I1030 19:10:36.902941  417097 command_runner.go:130] > # The port on which the metrics server will listen.
	I1030 19:10:36.902950  417097 command_runner.go:130] > # metrics_port = 9090
	I1030 19:10:36.902959  417097 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1030 19:10:36.902969  417097 command_runner.go:130] > # metrics_socket = ""
	I1030 19:10:36.902977  417097 command_runner.go:130] > # The certificate for the secure metrics server.
	I1030 19:10:36.902990  417097 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1030 19:10:36.902999  417097 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1030 19:10:36.903004  417097 command_runner.go:130] > # certificate on any modification event.
	I1030 19:10:36.903010  417097 command_runner.go:130] > # metrics_cert = ""
	I1030 19:10:36.903015  417097 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1030 19:10:36.903022  417097 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1030 19:10:36.903026  417097 command_runner.go:130] > # metrics_key = ""
	I1030 19:10:36.903031  417097 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1030 19:10:36.903040  417097 command_runner.go:130] > [crio.tracing]
	I1030 19:10:36.903049  417097 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1030 19:10:36.903060  417097 command_runner.go:130] > # enable_tracing = false
	I1030 19:10:36.903069  417097 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1030 19:10:36.903080  417097 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1030 19:10:36.903094  417097 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1030 19:10:36.903104  417097 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1030 19:10:36.903114  417097 command_runner.go:130] > # CRI-O NRI configuration.
	I1030 19:10:36.903121  417097 command_runner.go:130] > [crio.nri]
	I1030 19:10:36.903127  417097 command_runner.go:130] > # Globally enable or disable NRI.
	I1030 19:10:36.903132  417097 command_runner.go:130] > # enable_nri = false
	I1030 19:10:36.903137  417097 command_runner.go:130] > # NRI socket to listen on.
	I1030 19:10:36.903143  417097 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1030 19:10:36.903152  417097 command_runner.go:130] > # NRI plugin directory to use.
	I1030 19:10:36.903161  417097 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1030 19:10:36.903172  417097 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1030 19:10:36.903183  417097 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1030 19:10:36.903193  417097 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1030 19:10:36.903202  417097 command_runner.go:130] > # nri_disable_connections = false
	I1030 19:10:36.903213  417097 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1030 19:10:36.903221  417097 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1030 19:10:36.903231  417097 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1030 19:10:36.903241  417097 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1030 19:10:36.903255  417097 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1030 19:10:36.903264  417097 command_runner.go:130] > [crio.stats]
	I1030 19:10:36.903276  417097 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1030 19:10:36.903288  417097 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1030 19:10:36.903297  417097 command_runner.go:130] > # stats_collection_period = 0
	I1030 19:10:36.903349  417097 command_runner.go:130] ! time="2024-10-30 19:10:36.858760615Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1030 19:10:36.903379  417097 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1030 19:10:36.903463  417097 cni.go:84] Creating CNI manager for ""
	I1030 19:10:36.903481  417097 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1030 19:10:36.903498  417097 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1030 19:10:36.903530  417097 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.241 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-743795 NodeName:multinode-743795 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.241"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.241 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1030 19:10:36.903701  417097 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.241
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-743795"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.241"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.241"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1030 19:10:36.903780  417097 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1030 19:10:36.913914  417097 command_runner.go:130] > kubeadm
	I1030 19:10:36.913936  417097 command_runner.go:130] > kubectl
	I1030 19:10:36.913943  417097 command_runner.go:130] > kubelet
	I1030 19:10:36.914042  417097 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 19:10:36.914094  417097 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1030 19:10:36.923576  417097 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1030 19:10:36.940310  417097 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 19:10:36.956331  417097 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2296 bytes)
	I1030 19:10:36.972514  417097 ssh_runner.go:195] Run: grep 192.168.39.241	control-plane.minikube.internal$ /etc/hosts
	I1030 19:10:36.976343  417097 command_runner.go:130] > 192.168.39.241	control-plane.minikube.internal
	I1030 19:10:36.976438  417097 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:10:37.115593  417097 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:10:37.130317  417097 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/multinode-743795 for IP: 192.168.39.241
	I1030 19:10:37.130340  417097 certs.go:194] generating shared ca certs ...
	I1030 19:10:37.130358  417097 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:10:37.130557  417097 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 19:10:37.130619  417097 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 19:10:37.130635  417097 certs.go:256] generating profile certs ...
	I1030 19:10:37.130736  417097 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/multinode-743795/client.key
	I1030 19:10:37.130817  417097 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/multinode-743795/apiserver.key.dc4f52b7
	I1030 19:10:37.130873  417097 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/multinode-743795/proxy-client.key
	I1030 19:10:37.130892  417097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1030 19:10:37.130914  417097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1030 19:10:37.130933  417097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1030 19:10:37.130952  417097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1030 19:10:37.130970  417097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/multinode-743795/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1030 19:10:37.130989  417097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/multinode-743795/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1030 19:10:37.131010  417097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/multinode-743795/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1030 19:10:37.131028  417097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/multinode-743795/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1030 19:10:37.131094  417097 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem (1338 bytes)
	W1030 19:10:37.131136  417097 certs.go:480] ignoring /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144_empty.pem, impossibly tiny 0 bytes
	I1030 19:10:37.131150  417097 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 19:10:37.131196  417097 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 19:10:37.131231  417097 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 19:10:37.131267  417097 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 19:10:37.131328  417097 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:10:37.131371  417097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> /usr/share/ca-certificates/3891442.pem
	I1030 19:10:37.131392  417097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:10:37.131411  417097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem -> /usr/share/ca-certificates/389144.pem
	I1030 19:10:37.132054  417097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 19:10:37.157354  417097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 19:10:37.182104  417097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 19:10:37.206124  417097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 19:10:37.229331  417097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/multinode-743795/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1030 19:10:37.252523  417097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/multinode-743795/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1030 19:10:37.276962  417097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/multinode-743795/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 19:10:37.302386  417097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/multinode-743795/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1030 19:10:37.326001  417097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /usr/share/ca-certificates/3891442.pem (1708 bytes)
	I1030 19:10:37.349312  417097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 19:10:37.372631  417097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem --> /usr/share/ca-certificates/389144.pem (1338 bytes)
	I1030 19:10:37.395470  417097 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1030 19:10:37.411500  417097 ssh_runner.go:195] Run: openssl version
	I1030 19:10:37.417185  417097 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1030 19:10:37.417502  417097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3891442.pem && ln -fs /usr/share/ca-certificates/3891442.pem /etc/ssl/certs/3891442.pem"
	I1030 19:10:37.427822  417097 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3891442.pem
	I1030 19:10:37.432091  417097 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 19:10:37.432200  417097 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 19:10:37.432244  417097 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3891442.pem
	I1030 19:10:37.437624  417097 command_runner.go:130] > 3ec20f2e
	I1030 19:10:37.437793  417097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3891442.pem /etc/ssl/certs/3ec20f2e.0"
	I1030 19:10:37.446512  417097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 19:10:37.457156  417097 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:10:37.461446  417097 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:10:37.461631  417097 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:10:37.461674  417097 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:10:37.467148  417097 command_runner.go:130] > b5213941
	I1030 19:10:37.467208  417097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 19:10:37.476014  417097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/389144.pem && ln -fs /usr/share/ca-certificates/389144.pem /etc/ssl/certs/389144.pem"
	I1030 19:10:37.489181  417097 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/389144.pem
	I1030 19:10:37.493699  417097 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 19:10:37.493910  417097 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 19:10:37.493952  417097 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/389144.pem
	I1030 19:10:37.499353  417097 command_runner.go:130] > 51391683
	I1030 19:10:37.499433  417097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/389144.pem /etc/ssl/certs/51391683.0"
	I1030 19:10:37.508636  417097 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 19:10:37.513153  417097 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 19:10:37.513172  417097 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1030 19:10:37.513178  417097 command_runner.go:130] > Device: 253,1	Inode: 9432622     Links: 1
	I1030 19:10:37.513184  417097 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1030 19:10:37.513191  417097 command_runner.go:130] > Access: 2024-10-30 19:03:38.557833291 +0000
	I1030 19:10:37.513199  417097 command_runner.go:130] > Modify: 2024-10-30 19:03:38.557833291 +0000
	I1030 19:10:37.513204  417097 command_runner.go:130] > Change: 2024-10-30 19:03:38.557833291 +0000
	I1030 19:10:37.513211  417097 command_runner.go:130] >  Birth: 2024-10-30 19:03:38.557833291 +0000
	I1030 19:10:37.513254  417097 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1030 19:10:37.518621  417097 command_runner.go:130] > Certificate will not expire
	I1030 19:10:37.518795  417097 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1030 19:10:37.524058  417097 command_runner.go:130] > Certificate will not expire
	I1030 19:10:37.524222  417097 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1030 19:10:37.529621  417097 command_runner.go:130] > Certificate will not expire
	I1030 19:10:37.529676  417097 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1030 19:10:37.535002  417097 command_runner.go:130] > Certificate will not expire
	I1030 19:10:37.535047  417097 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1030 19:10:37.540275  417097 command_runner.go:130] > Certificate will not expire
	I1030 19:10:37.540337  417097 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1030 19:10:37.545435  417097 command_runner.go:130] > Certificate will not expire
	I1030 19:10:37.545637  417097 kubeadm.go:392] StartCluster: {Name:multinode-743795 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-743795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.115 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:10:37.545767  417097 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1030 19:10:37.545833  417097 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:10:37.582395  417097 command_runner.go:130] > d7ea51256edb36e89400017a915aa39e52bc2edb76f0d7e2ce71d2a2409dcdf4
	I1030 19:10:37.582429  417097 command_runner.go:130] > c015d554fd62507a43f20864108e4d45332f484a315a410839993edfd140f747
	I1030 19:10:37.582437  417097 command_runner.go:130] > 76044db673948f4d099ea5546b6aecc8d2bc9689f6622bc97ef8d8be31651687
	I1030 19:10:37.582447  417097 command_runner.go:130] > 21cab845b533c8720ff4411c06a91fe69a928684f4f0863a6063c6f41c268291
	I1030 19:10:37.582456  417097 command_runner.go:130] > 264f7e0c37ee81544595fc9dd70dce40503b741f0e5043ad55f1c1a23554f78d
	I1030 19:10:37.582465  417097 command_runner.go:130] > c0d05b91dab5b163ce79a278da80f7a0d70f3e267a1ca686b99bff1f77f7761d
	I1030 19:10:37.582478  417097 command_runner.go:130] > 7958cff51a7a63767038880bf9546bb5bbc44c8c92de409213d2841a70aa64da
	I1030 19:10:37.582506  417097 command_runner.go:130] > 0a43a1830349e1340a3ffc7d129797e44386b56973110596507497fb62727406
	I1030 19:10:37.582535  417097 cri.go:89] found id: "d7ea51256edb36e89400017a915aa39e52bc2edb76f0d7e2ce71d2a2409dcdf4"
	I1030 19:10:37.582546  417097 cri.go:89] found id: "c015d554fd62507a43f20864108e4d45332f484a315a410839993edfd140f747"
	I1030 19:10:37.582554  417097 cri.go:89] found id: "76044db673948f4d099ea5546b6aecc8d2bc9689f6622bc97ef8d8be31651687"
	I1030 19:10:37.582559  417097 cri.go:89] found id: "21cab845b533c8720ff4411c06a91fe69a928684f4f0863a6063c6f41c268291"
	I1030 19:10:37.582565  417097 cri.go:89] found id: "264f7e0c37ee81544595fc9dd70dce40503b741f0e5043ad55f1c1a23554f78d"
	I1030 19:10:37.582575  417097 cri.go:89] found id: "c0d05b91dab5b163ce79a278da80f7a0d70f3e267a1ca686b99bff1f77f7761d"
	I1030 19:10:37.582580  417097 cri.go:89] found id: "7958cff51a7a63767038880bf9546bb5bbc44c8c92de409213d2841a70aa64da"
	I1030 19:10:37.582583  417097 cri.go:89] found id: "0a43a1830349e1340a3ffc7d129797e44386b56973110596507497fb62727406"
	I1030 19:10:37.582585  417097 cri.go:89] found id: ""
	I1030 19:10:37.582627  417097 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-743795 -n multinode-743795
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-743795 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (145.22s)

                                                
                                    
x
+
TestPreload (240.15s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-719843 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E1030 19:20:18.709705  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-719843 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m28.717580845s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-719843 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-719843 image pull gcr.io/k8s-minikube/busybox: (5.609403126s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-719843
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-719843: (7.302722816s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-719843 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-719843 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m15.42457916s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-719843 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:629: *** TestPreload FAILED at 2024-10-30 19:23:10.151513241 +0000 UTC m=+3746.898696771
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-719843 -n test-preload-719843
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-719843 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-719843 logs -n 25: (1.06849876s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-743795 ssh -n                                                                 | multinode-743795     | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:06 UTC |
	|         | multinode-743795-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-743795 ssh -n multinode-743795 sudo cat                                       | multinode-743795     | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:06 UTC |
	|         | /home/docker/cp-test_multinode-743795-m03_multinode-743795.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-743795 cp multinode-743795-m03:/home/docker/cp-test.txt                       | multinode-743795     | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:06 UTC |
	|         | multinode-743795-m02:/home/docker/cp-test_multinode-743795-m03_multinode-743795-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-743795 ssh -n                                                                 | multinode-743795     | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:06 UTC |
	|         | multinode-743795-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-743795 ssh -n multinode-743795-m02 sudo cat                                   | multinode-743795     | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:06 UTC |
	|         | /home/docker/cp-test_multinode-743795-m03_multinode-743795-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-743795 node stop m03                                                          | multinode-743795     | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:06 UTC |
	| node    | multinode-743795 node start                                                             | multinode-743795     | jenkins | v1.34.0 | 30 Oct 24 19:06 UTC | 30 Oct 24 19:07 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-743795                                                                | multinode-743795     | jenkins | v1.34.0 | 30 Oct 24 19:07 UTC |                     |
	| stop    | -p multinode-743795                                                                     | multinode-743795     | jenkins | v1.34.0 | 30 Oct 24 19:07 UTC |                     |
	| start   | -p multinode-743795                                                                     | multinode-743795     | jenkins | v1.34.0 | 30 Oct 24 19:09 UTC | 30 Oct 24 19:12 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-743795                                                                | multinode-743795     | jenkins | v1.34.0 | 30 Oct 24 19:12 UTC |                     |
	| node    | multinode-743795 node delete                                                            | multinode-743795     | jenkins | v1.34.0 | 30 Oct 24 19:12 UTC | 30 Oct 24 19:12 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-743795 stop                                                                   | multinode-743795     | jenkins | v1.34.0 | 30 Oct 24 19:12 UTC |                     |
	| start   | -p multinode-743795                                                                     | multinode-743795     | jenkins | v1.34.0 | 30 Oct 24 19:14 UTC | 30 Oct 24 19:18 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-743795                                                                | multinode-743795     | jenkins | v1.34.0 | 30 Oct 24 19:18 UTC |                     |
	| start   | -p multinode-743795-m02                                                                 | multinode-743795-m02 | jenkins | v1.34.0 | 30 Oct 24 19:18 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-743795-m03                                                                 | multinode-743795-m03 | jenkins | v1.34.0 | 30 Oct 24 19:18 UTC | 30 Oct 24 19:19 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-743795                                                                 | multinode-743795     | jenkins | v1.34.0 | 30 Oct 24 19:19 UTC |                     |
	| delete  | -p multinode-743795-m03                                                                 | multinode-743795-m03 | jenkins | v1.34.0 | 30 Oct 24 19:19 UTC | 30 Oct 24 19:19 UTC |
	| delete  | -p multinode-743795                                                                     | multinode-743795     | jenkins | v1.34.0 | 30 Oct 24 19:19 UTC | 30 Oct 24 19:19 UTC |
	| start   | -p test-preload-719843                                                                  | test-preload-719843  | jenkins | v1.34.0 | 30 Oct 24 19:19 UTC | 30 Oct 24 19:21 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-719843 image pull                                                          | test-preload-719843  | jenkins | v1.34.0 | 30 Oct 24 19:21 UTC | 30 Oct 24 19:21 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-719843                                                                  | test-preload-719843  | jenkins | v1.34.0 | 30 Oct 24 19:21 UTC | 30 Oct 24 19:21 UTC |
	| start   | -p test-preload-719843                                                                  | test-preload-719843  | jenkins | v1.34.0 | 30 Oct 24 19:21 UTC | 30 Oct 24 19:23 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-719843 image list                                                          | test-preload-719843  | jenkins | v1.34.0 | 30 Oct 24 19:23 UTC | 30 Oct 24 19:23 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/30 19:21:54
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1030 19:21:54.547473  422178 out.go:345] Setting OutFile to fd 1 ...
	I1030 19:21:54.547597  422178 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 19:21:54.547606  422178 out.go:358] Setting ErrFile to fd 2...
	I1030 19:21:54.547611  422178 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 19:21:54.547818  422178 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19883-381834/.minikube/bin
	I1030 19:21:54.548344  422178 out.go:352] Setting JSON to false
	I1030 19:21:54.549285  422178 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":11058,"bootTime":1730305057,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1030 19:21:54.549393  422178 start.go:139] virtualization: kvm guest
	I1030 19:21:54.552092  422178 out.go:177] * [test-preload-719843] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1030 19:21:54.554077  422178 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 19:21:54.554149  422178 notify.go:220] Checking for updates...
	I1030 19:21:54.556736  422178 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 19:21:54.558087  422178 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 19:21:54.559268  422178 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 19:21:54.560476  422178 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1030 19:21:54.561872  422178 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 19:21:54.563834  422178 config.go:182] Loaded profile config "test-preload-719843": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1030 19:21:54.564202  422178 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 19:21:54.564251  422178 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:21:54.578888  422178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45979
	I1030 19:21:54.579357  422178 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:21:54.579918  422178 main.go:141] libmachine: Using API Version  1
	I1030 19:21:54.579946  422178 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:21:54.580309  422178 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:21:54.580467  422178 main.go:141] libmachine: (test-preload-719843) Calling .DriverName
	I1030 19:21:54.582325  422178 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1030 19:21:54.583620  422178 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 19:21:54.583936  422178 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 19:21:54.583975  422178 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:21:54.598368  422178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45713
	I1030 19:21:54.598831  422178 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:21:54.599309  422178 main.go:141] libmachine: Using API Version  1
	I1030 19:21:54.599325  422178 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:21:54.599638  422178 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:21:54.599798  422178 main.go:141] libmachine: (test-preload-719843) Calling .DriverName
	I1030 19:21:54.633200  422178 out.go:177] * Using the kvm2 driver based on existing profile
	I1030 19:21:54.634717  422178 start.go:297] selected driver: kvm2
	I1030 19:21:54.634731  422178 start.go:901] validating driver "kvm2" against &{Name:test-preload-719843 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-719843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.83 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:21:54.634882  422178 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 19:21:54.635594  422178 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 19:21:54.635677  422178 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19883-381834/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1030 19:21:54.650618  422178 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1030 19:21:54.651039  422178 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 19:21:54.651077  422178 cni.go:84] Creating CNI manager for ""
	I1030 19:21:54.651126  422178 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:21:54.651183  422178 start.go:340] cluster config:
	{Name:test-preload-719843 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-719843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.83 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:21:54.651312  422178 iso.go:125] acquiring lock: {Name:mk4a5115b605a7a5e9069193daa664d721189792 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 19:21:54.657298  422178 out.go:177] * Starting "test-preload-719843" primary control-plane node in "test-preload-719843" cluster
	I1030 19:21:54.662216  422178 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1030 19:21:54.821038  422178 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1030 19:21:54.821073  422178 cache.go:56] Caching tarball of preloaded images
	I1030 19:21:54.821219  422178 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1030 19:21:54.823119  422178 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I1030 19:21:54.824571  422178 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1030 19:21:54.985662  422178 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1030 19:22:12.067174  422178 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1030 19:22:12.068282  422178 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1030 19:22:12.928072  422178 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
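
	The preload fetch above passes the expected md5 as a query parameter and verifies it after download. A rough manual equivalent, using the URL and checksum from the log (minikube itself does this in Go, not via curl):

	  $ curl -fLo preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 \
	      "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4"
	  $ echo "b2ee0ab83ed99f9e7ff71cb0cf27e8f9  preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4" | md5sum -c -
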
	I1030 19:22:12.928217  422178 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/test-preload-719843/config.json ...
	I1030 19:22:12.928449  422178 start.go:360] acquireMachinesLock for test-preload-719843: {Name:mk35e25f53fa8cfadb39ca0ecdccfc2b3fbe845b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 19:22:12.928517  422178 start.go:364] duration metric: took 46.705µs to acquireMachinesLock for "test-preload-719843"
	I1030 19:22:12.928533  422178 start.go:96] Skipping create...Using existing machine configuration
	I1030 19:22:12.928539  422178 fix.go:54] fixHost starting: 
	I1030 19:22:12.928822  422178 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 19:22:12.928859  422178 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:22:12.944536  422178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35059
	I1030 19:22:12.945098  422178 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:22:12.945611  422178 main.go:141] libmachine: Using API Version  1
	I1030 19:22:12.945636  422178 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:22:12.945966  422178 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:22:12.946200  422178 main.go:141] libmachine: (test-preload-719843) Calling .DriverName
	I1030 19:22:12.946362  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetState
	I1030 19:22:12.947932  422178 fix.go:112] recreateIfNeeded on test-preload-719843: state=Stopped err=<nil>
	I1030 19:22:12.947976  422178 main.go:141] libmachine: (test-preload-719843) Calling .DriverName
	W1030 19:22:12.948139  422178 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 19:22:12.950421  422178 out.go:177] * Restarting existing kvm2 VM for "test-preload-719843" ...
	I1030 19:22:12.951701  422178 main.go:141] libmachine: (test-preload-719843) Calling .Start
	I1030 19:22:12.951871  422178 main.go:141] libmachine: (test-preload-719843) Ensuring networks are active...
	I1030 19:22:12.952521  422178 main.go:141] libmachine: (test-preload-719843) Ensuring network default is active
	I1030 19:22:12.952778  422178 main.go:141] libmachine: (test-preload-719843) Ensuring network mk-test-preload-719843 is active
	I1030 19:22:12.953109  422178 main.go:141] libmachine: (test-preload-719843) Getting domain xml...
	I1030 19:22:12.953769  422178 main.go:141] libmachine: (test-preload-719843) Creating domain...
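
	The restart path above drives libvirt directly: start the existing domain, then poll the network's DHCP leases for the domain's MAC address (the retry loop that follows). The same checks can be run by hand with virsh, using the domain, network, and MAC names from this log:

	  $ virsh -c qemu:///system start test-preload-719843
	  $ virsh -c qemu:///system net-dhcp-leases mk-test-preload-719843        # look for MAC 52:54:00:5b:e3:5f
	  $ virsh -c qemu:///system domifaddr test-preload-719843 --source lease
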
	I1030 19:22:14.146122  422178 main.go:141] libmachine: (test-preload-719843) Waiting to get IP...
	I1030 19:22:14.147000  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:14.147370  422178 main.go:141] libmachine: (test-preload-719843) DBG | unable to find current IP address of domain test-preload-719843 in network mk-test-preload-719843
	I1030 19:22:14.147426  422178 main.go:141] libmachine: (test-preload-719843) DBG | I1030 19:22:14.147348  422262 retry.go:31] will retry after 188.33246ms: waiting for machine to come up
	I1030 19:22:14.337778  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:14.338251  422178 main.go:141] libmachine: (test-preload-719843) DBG | unable to find current IP address of domain test-preload-719843 in network mk-test-preload-719843
	I1030 19:22:14.338282  422178 main.go:141] libmachine: (test-preload-719843) DBG | I1030 19:22:14.338195  422262 retry.go:31] will retry after 309.041071ms: waiting for machine to come up
	I1030 19:22:14.648708  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:14.649073  422178 main.go:141] libmachine: (test-preload-719843) DBG | unable to find current IP address of domain test-preload-719843 in network mk-test-preload-719843
	I1030 19:22:14.649097  422178 main.go:141] libmachine: (test-preload-719843) DBG | I1030 19:22:14.649025  422262 retry.go:31] will retry after 410.393476ms: waiting for machine to come up
	I1030 19:22:15.060750  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:15.061250  422178 main.go:141] libmachine: (test-preload-719843) DBG | unable to find current IP address of domain test-preload-719843 in network mk-test-preload-719843
	I1030 19:22:15.061289  422178 main.go:141] libmachine: (test-preload-719843) DBG | I1030 19:22:15.061189  422262 retry.go:31] will retry after 602.046025ms: waiting for machine to come up
	I1030 19:22:15.664847  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:15.665427  422178 main.go:141] libmachine: (test-preload-719843) DBG | unable to find current IP address of domain test-preload-719843 in network mk-test-preload-719843
	I1030 19:22:15.665456  422178 main.go:141] libmachine: (test-preload-719843) DBG | I1030 19:22:15.665363  422262 retry.go:31] will retry after 705.400397ms: waiting for machine to come up
	I1030 19:22:16.372365  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:16.372821  422178 main.go:141] libmachine: (test-preload-719843) DBG | unable to find current IP address of domain test-preload-719843 in network mk-test-preload-719843
	I1030 19:22:16.372844  422178 main.go:141] libmachine: (test-preload-719843) DBG | I1030 19:22:16.372775  422262 retry.go:31] will retry after 867.524034ms: waiting for machine to come up
	I1030 19:22:17.241807  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:17.242208  422178 main.go:141] libmachine: (test-preload-719843) DBG | unable to find current IP address of domain test-preload-719843 in network mk-test-preload-719843
	I1030 19:22:17.242234  422178 main.go:141] libmachine: (test-preload-719843) DBG | I1030 19:22:17.242160  422262 retry.go:31] will retry after 1.062688567s: waiting for machine to come up
	I1030 19:22:18.306779  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:18.307144  422178 main.go:141] libmachine: (test-preload-719843) DBG | unable to find current IP address of domain test-preload-719843 in network mk-test-preload-719843
	I1030 19:22:18.307175  422178 main.go:141] libmachine: (test-preload-719843) DBG | I1030 19:22:18.307086  422262 retry.go:31] will retry after 1.0133907s: waiting for machine to come up
	I1030 19:22:19.322377  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:19.322747  422178 main.go:141] libmachine: (test-preload-719843) DBG | unable to find current IP address of domain test-preload-719843 in network mk-test-preload-719843
	I1030 19:22:19.322774  422178 main.go:141] libmachine: (test-preload-719843) DBG | I1030 19:22:19.322707  422262 retry.go:31] will retry after 1.701947401s: waiting for machine to come up
	I1030 19:22:21.026581  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:21.026929  422178 main.go:141] libmachine: (test-preload-719843) DBG | unable to find current IP address of domain test-preload-719843 in network mk-test-preload-719843
	I1030 19:22:21.026951  422178 main.go:141] libmachine: (test-preload-719843) DBG | I1030 19:22:21.026874  422262 retry.go:31] will retry after 1.761980098s: waiting for machine to come up
	I1030 19:22:22.790346  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:22.790802  422178 main.go:141] libmachine: (test-preload-719843) DBG | unable to find current IP address of domain test-preload-719843 in network mk-test-preload-719843
	I1030 19:22:22.790829  422178 main.go:141] libmachine: (test-preload-719843) DBG | I1030 19:22:22.790754  422262 retry.go:31] will retry after 2.055050365s: waiting for machine to come up
	I1030 19:22:24.847074  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:24.847456  422178 main.go:141] libmachine: (test-preload-719843) DBG | unable to find current IP address of domain test-preload-719843 in network mk-test-preload-719843
	I1030 19:22:24.847480  422178 main.go:141] libmachine: (test-preload-719843) DBG | I1030 19:22:24.847421  422262 retry.go:31] will retry after 3.362135104s: waiting for machine to come up
	I1030 19:22:28.211408  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:28.211801  422178 main.go:141] libmachine: (test-preload-719843) DBG | unable to find current IP address of domain test-preload-719843 in network mk-test-preload-719843
	I1030 19:22:28.211832  422178 main.go:141] libmachine: (test-preload-719843) DBG | I1030 19:22:28.211760  422262 retry.go:31] will retry after 4.315900111s: waiting for machine to come up
	I1030 19:22:32.532211  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:32.532582  422178 main.go:141] libmachine: (test-preload-719843) Found IP for machine: 192.168.39.83
	I1030 19:22:32.532605  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has current primary IP address 192.168.39.83 and MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:32.532611  422178 main.go:141] libmachine: (test-preload-719843) Reserving static IP address...
	I1030 19:22:32.533009  422178 main.go:141] libmachine: (test-preload-719843) DBG | found host DHCP lease matching {name: "test-preload-719843", mac: "52:54:00:5b:e3:5f", ip: "192.168.39.83"} in network mk-test-preload-719843: {Iface:virbr1 ExpiryTime:2024-10-30 20:19:27 +0000 UTC Type:0 Mac:52:54:00:5b:e3:5f Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:test-preload-719843 Clientid:01:52:54:00:5b:e3:5f}
	I1030 19:22:32.533036  422178 main.go:141] libmachine: (test-preload-719843) DBG | skip adding static IP to network mk-test-preload-719843 - found existing host DHCP lease matching {name: "test-preload-719843", mac: "52:54:00:5b:e3:5f", ip: "192.168.39.83"}
	I1030 19:22:32.533045  422178 main.go:141] libmachine: (test-preload-719843) Reserved static IP address: 192.168.39.83
	I1030 19:22:32.533058  422178 main.go:141] libmachine: (test-preload-719843) Waiting for SSH to be available...
	I1030 19:22:32.533067  422178 main.go:141] libmachine: (test-preload-719843) DBG | Getting to WaitForSSH function...
	I1030 19:22:32.535028  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:32.535388  422178 main.go:141] libmachine: (test-preload-719843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:e3:5f", ip: ""} in network mk-test-preload-719843: {Iface:virbr1 ExpiryTime:2024-10-30 20:19:27 +0000 UTC Type:0 Mac:52:54:00:5b:e3:5f Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:test-preload-719843 Clientid:01:52:54:00:5b:e3:5f}
	I1030 19:22:32.535420  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined IP address 192.168.39.83 and MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:32.535538  422178 main.go:141] libmachine: (test-preload-719843) DBG | Using SSH client type: external
	I1030 19:22:32.535565  422178 main.go:141] libmachine: (test-preload-719843) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/test-preload-719843/id_rsa (-rw-------)
	I1030 19:22:32.535596  422178 main.go:141] libmachine: (test-preload-719843) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.83 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/test-preload-719843/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 19:22:32.535607  422178 main.go:141] libmachine: (test-preload-719843) DBG | About to run SSH command:
	I1030 19:22:32.535622  422178 main.go:141] libmachine: (test-preload-719843) DBG | exit 0
	I1030 19:22:32.658572  422178 main.go:141] libmachine: (test-preload-719843) DBG | SSH cmd err, output: <nil>: 
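
	WaitForSSH shells out to the system ssh client with the options shown above and succeeds once `exit 0` returns cleanly over the connection. A manual probe with the same key and options would look like:

	  $ ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 \
	      -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes \
	      -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/test-preload-719843/id_rsa \
	      -p 22 docker@192.168.39.83 'exit 0' && echo "ssh ready"
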
	I1030 19:22:32.658919  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetConfigRaw
	I1030 19:22:32.659595  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetIP
	I1030 19:22:32.661980  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:32.662318  422178 main.go:141] libmachine: (test-preload-719843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:e3:5f", ip: ""} in network mk-test-preload-719843: {Iface:virbr1 ExpiryTime:2024-10-30 20:19:27 +0000 UTC Type:0 Mac:52:54:00:5b:e3:5f Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:test-preload-719843 Clientid:01:52:54:00:5b:e3:5f}
	I1030 19:22:32.662343  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined IP address 192.168.39.83 and MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:32.662614  422178 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/test-preload-719843/config.json ...
	I1030 19:22:32.662809  422178 machine.go:93] provisionDockerMachine start ...
	I1030 19:22:32.662830  422178 main.go:141] libmachine: (test-preload-719843) Calling .DriverName
	I1030 19:22:32.663076  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHHostname
	I1030 19:22:32.665398  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:32.665769  422178 main.go:141] libmachine: (test-preload-719843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:e3:5f", ip: ""} in network mk-test-preload-719843: {Iface:virbr1 ExpiryTime:2024-10-30 20:19:27 +0000 UTC Type:0 Mac:52:54:00:5b:e3:5f Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:test-preload-719843 Clientid:01:52:54:00:5b:e3:5f}
	I1030 19:22:32.665803  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined IP address 192.168.39.83 and MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:32.665907  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHPort
	I1030 19:22:32.666147  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHKeyPath
	I1030 19:22:32.666288  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHKeyPath
	I1030 19:22:32.666413  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHUsername
	I1030 19:22:32.666593  422178 main.go:141] libmachine: Using SSH client type: native
	I1030 19:22:32.666800  422178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
	I1030 19:22:32.666811  422178 main.go:141] libmachine: About to run SSH command:
	hostname
	I1030 19:22:32.770810  422178 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1030 19:22:32.770843  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetMachineName
	I1030 19:22:32.771091  422178 buildroot.go:166] provisioning hostname "test-preload-719843"
	I1030 19:22:32.771121  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetMachineName
	I1030 19:22:32.771285  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHHostname
	I1030 19:22:32.773902  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:32.774207  422178 main.go:141] libmachine: (test-preload-719843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:e3:5f", ip: ""} in network mk-test-preload-719843: {Iface:virbr1 ExpiryTime:2024-10-30 20:19:27 +0000 UTC Type:0 Mac:52:54:00:5b:e3:5f Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:test-preload-719843 Clientid:01:52:54:00:5b:e3:5f}
	I1030 19:22:32.774241  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined IP address 192.168.39.83 and MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:32.774388  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHPort
	I1030 19:22:32.774561  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHKeyPath
	I1030 19:22:32.774716  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHKeyPath
	I1030 19:22:32.774819  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHUsername
	I1030 19:22:32.775024  422178 main.go:141] libmachine: Using SSH client type: native
	I1030 19:22:32.775244  422178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
	I1030 19:22:32.775261  422178 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-719843 && echo "test-preload-719843" | sudo tee /etc/hostname
	I1030 19:22:32.893187  422178 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-719843
	
	I1030 19:22:32.893212  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHHostname
	I1030 19:22:32.896196  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:32.896575  422178 main.go:141] libmachine: (test-preload-719843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:e3:5f", ip: ""} in network mk-test-preload-719843: {Iface:virbr1 ExpiryTime:2024-10-30 20:19:27 +0000 UTC Type:0 Mac:52:54:00:5b:e3:5f Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:test-preload-719843 Clientid:01:52:54:00:5b:e3:5f}
	I1030 19:22:32.896605  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined IP address 192.168.39.83 and MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:32.896871  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHPort
	I1030 19:22:32.897036  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHKeyPath
	I1030 19:22:32.897177  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHKeyPath
	I1030 19:22:32.897326  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHUsername
	I1030 19:22:32.897465  422178 main.go:141] libmachine: Using SSH client type: native
	I1030 19:22:32.897692  422178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
	I1030 19:22:32.897716  422178 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-719843' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-719843/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-719843' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 19:22:33.008221  422178 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 19:22:33.008256  422178 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 19:22:33.008320  422178 buildroot.go:174] setting up certificates
	I1030 19:22:33.008336  422178 provision.go:84] configureAuth start
	I1030 19:22:33.008354  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetMachineName
	I1030 19:22:33.008688  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetIP
	I1030 19:22:33.011268  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:33.011580  422178 main.go:141] libmachine: (test-preload-719843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:e3:5f", ip: ""} in network mk-test-preload-719843: {Iface:virbr1 ExpiryTime:2024-10-30 20:19:27 +0000 UTC Type:0 Mac:52:54:00:5b:e3:5f Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:test-preload-719843 Clientid:01:52:54:00:5b:e3:5f}
	I1030 19:22:33.011609  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined IP address 192.168.39.83 and MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:33.011791  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHHostname
	I1030 19:22:33.013876  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:33.014161  422178 main.go:141] libmachine: (test-preload-719843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:e3:5f", ip: ""} in network mk-test-preload-719843: {Iface:virbr1 ExpiryTime:2024-10-30 20:19:27 +0000 UTC Type:0 Mac:52:54:00:5b:e3:5f Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:test-preload-719843 Clientid:01:52:54:00:5b:e3:5f}
	I1030 19:22:33.014185  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined IP address 192.168.39.83 and MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:33.014306  422178 provision.go:143] copyHostCerts
	I1030 19:22:33.014371  422178 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem, removing ...
	I1030 19:22:33.014385  422178 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 19:22:33.014457  422178 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 19:22:33.014598  422178 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem, removing ...
	I1030 19:22:33.014611  422178 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 19:22:33.014645  422178 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 19:22:33.014727  422178 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem, removing ...
	I1030 19:22:33.014734  422178 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 19:22:33.014757  422178 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 19:22:33.014812  422178 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.test-preload-719843 san=[127.0.0.1 192.168.39.83 localhost minikube test-preload-719843]
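
	The server certificate is generated in Go by minikube's own cert helpers, signed by the local CA with the SANs listed above. Purely as an illustration (not minikube's code path), an openssl equivalent for the same SAN set could be sketched as:

	  $ openssl genrsa -out server-key.pem 2048
	  $ openssl req -new -key server-key.pem -subj "/O=jenkins.test-preload-719843/CN=minikube" -out server.csr
	  $ openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.39.83,DNS:localhost,DNS:minikube,DNS:test-preload-719843") \
	      -out server.pem
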
	I1030 19:22:33.091861  422178 provision.go:177] copyRemoteCerts
	I1030 19:22:33.091923  422178 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 19:22:33.091956  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHHostname
	I1030 19:22:33.094730  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:33.095065  422178 main.go:141] libmachine: (test-preload-719843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:e3:5f", ip: ""} in network mk-test-preload-719843: {Iface:virbr1 ExpiryTime:2024-10-30 20:19:27 +0000 UTC Type:0 Mac:52:54:00:5b:e3:5f Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:test-preload-719843 Clientid:01:52:54:00:5b:e3:5f}
	I1030 19:22:33.095093  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined IP address 192.168.39.83 and MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:33.095240  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHPort
	I1030 19:22:33.095396  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHKeyPath
	I1030 19:22:33.095501  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHUsername
	I1030 19:22:33.095652  422178 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/test-preload-719843/id_rsa Username:docker}
	I1030 19:22:33.176847  422178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 19:22:33.200395  422178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1030 19:22:33.222953  422178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1030 19:22:33.245136  422178 provision.go:87] duration metric: took 236.785923ms to configureAuth
	I1030 19:22:33.245163  422178 buildroot.go:189] setting minikube options for container-runtime
	I1030 19:22:33.245331  422178 config.go:182] Loaded profile config "test-preload-719843": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1030 19:22:33.245422  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHHostname
	I1030 19:22:33.248022  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:33.248335  422178 main.go:141] libmachine: (test-preload-719843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:e3:5f", ip: ""} in network mk-test-preload-719843: {Iface:virbr1 ExpiryTime:2024-10-30 20:19:27 +0000 UTC Type:0 Mac:52:54:00:5b:e3:5f Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:test-preload-719843 Clientid:01:52:54:00:5b:e3:5f}
	I1030 19:22:33.248366  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined IP address 192.168.39.83 and MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:33.248475  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHPort
	I1030 19:22:33.248672  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHKeyPath
	I1030 19:22:33.248821  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHKeyPath
	I1030 19:22:33.248964  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHUsername
	I1030 19:22:33.249142  422178 main.go:141] libmachine: Using SSH client type: native
	I1030 19:22:33.249308  422178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
	I1030 19:22:33.249323  422178 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 19:22:33.485827  422178 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 19:22:33.485856  422178 machine.go:96] duration metric: took 823.03188ms to provisionDockerMachine
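
	The /etc/sysconfig/crio.minikube file written just above carries extra CRI-O options (here an insecure-registry entry for the service CIDR). The assumption is that the ISO's crio systemd unit sources it as an environment file, which is why minikube writes it there; this can be checked from inside the VM:

	  $ sudo systemctl cat crio          # shows how the unit consumes CRIO_MINIKUBE_OPTIONS
	  $ cat /etc/sysconfig/crio.minikube
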
	I1030 19:22:33.485878  422178 start.go:293] postStartSetup for "test-preload-719843" (driver="kvm2")
	I1030 19:22:33.485893  422178 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 19:22:33.485919  422178 main.go:141] libmachine: (test-preload-719843) Calling .DriverName
	I1030 19:22:33.486268  422178 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 19:22:33.486313  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHHostname
	I1030 19:22:33.488958  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:33.489291  422178 main.go:141] libmachine: (test-preload-719843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:e3:5f", ip: ""} in network mk-test-preload-719843: {Iface:virbr1 ExpiryTime:2024-10-30 20:19:27 +0000 UTC Type:0 Mac:52:54:00:5b:e3:5f Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:test-preload-719843 Clientid:01:52:54:00:5b:e3:5f}
	I1030 19:22:33.489316  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined IP address 192.168.39.83 and MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:33.489421  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHPort
	I1030 19:22:33.489575  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHKeyPath
	I1030 19:22:33.489697  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHUsername
	I1030 19:22:33.489796  422178 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/test-preload-719843/id_rsa Username:docker}
	I1030 19:22:33.569569  422178 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 19:22:33.573625  422178 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 19:22:33.573655  422178 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 19:22:33.573734  422178 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 19:22:33.573815  422178 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 19:22:33.573901  422178 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 19:22:33.583431  422178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:22:33.605926  422178 start.go:296] duration metric: took 120.032329ms for postStartSetup
	I1030 19:22:33.605974  422178 fix.go:56] duration metric: took 20.677433749s for fixHost
	I1030 19:22:33.606003  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHHostname
	I1030 19:22:33.608280  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:33.608666  422178 main.go:141] libmachine: (test-preload-719843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:e3:5f", ip: ""} in network mk-test-preload-719843: {Iface:virbr1 ExpiryTime:2024-10-30 20:19:27 +0000 UTC Type:0 Mac:52:54:00:5b:e3:5f Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:test-preload-719843 Clientid:01:52:54:00:5b:e3:5f}
	I1030 19:22:33.608697  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined IP address 192.168.39.83 and MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:33.608844  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHPort
	I1030 19:22:33.609030  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHKeyPath
	I1030 19:22:33.609157  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHKeyPath
	I1030 19:22:33.609273  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHUsername
	I1030 19:22:33.609402  422178 main.go:141] libmachine: Using SSH client type: native
	I1030 19:22:33.609614  422178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
	I1030 19:22:33.609629  422178 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 19:22:33.711269  422178 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730316153.685149949
	
	I1030 19:22:33.711310  422178 fix.go:216] guest clock: 1730316153.685149949
	I1030 19:22:33.711321  422178 fix.go:229] Guest: 2024-10-30 19:22:33.685149949 +0000 UTC Remote: 2024-10-30 19:22:33.605981377 +0000 UTC m=+39.096912649 (delta=79.168572ms)
	I1030 19:22:33.711363  422178 fix.go:200] guest clock delta is within tolerance: 79.168572ms
	I1030 19:22:33.711368  422178 start.go:83] releasing machines lock for "test-preload-719843", held for 20.782841318s
	I1030 19:22:33.711393  422178 main.go:141] libmachine: (test-preload-719843) Calling .DriverName
	I1030 19:22:33.711643  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetIP
	I1030 19:22:33.714357  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:33.714719  422178 main.go:141] libmachine: (test-preload-719843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:e3:5f", ip: ""} in network mk-test-preload-719843: {Iface:virbr1 ExpiryTime:2024-10-30 20:19:27 +0000 UTC Type:0 Mac:52:54:00:5b:e3:5f Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:test-preload-719843 Clientid:01:52:54:00:5b:e3:5f}
	I1030 19:22:33.714754  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined IP address 192.168.39.83 and MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:33.714908  422178 main.go:141] libmachine: (test-preload-719843) Calling .DriverName
	I1030 19:22:33.715407  422178 main.go:141] libmachine: (test-preload-719843) Calling .DriverName
	I1030 19:22:33.715594  422178 main.go:141] libmachine: (test-preload-719843) Calling .DriverName
	I1030 19:22:33.715713  422178 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 19:22:33.715754  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHHostname
	I1030 19:22:33.715819  422178 ssh_runner.go:195] Run: cat /version.json
	I1030 19:22:33.715847  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHHostname
	I1030 19:22:33.718101  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:33.718368  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:33.718400  422178 main.go:141] libmachine: (test-preload-719843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:e3:5f", ip: ""} in network mk-test-preload-719843: {Iface:virbr1 ExpiryTime:2024-10-30 20:19:27 +0000 UTC Type:0 Mac:52:54:00:5b:e3:5f Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:test-preload-719843 Clientid:01:52:54:00:5b:e3:5f}
	I1030 19:22:33.718424  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined IP address 192.168.39.83 and MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:33.718546  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHPort
	I1030 19:22:33.718700  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHKeyPath
	I1030 19:22:33.718785  422178 main.go:141] libmachine: (test-preload-719843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:e3:5f", ip: ""} in network mk-test-preload-719843: {Iface:virbr1 ExpiryTime:2024-10-30 20:19:27 +0000 UTC Type:0 Mac:52:54:00:5b:e3:5f Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:test-preload-719843 Clientid:01:52:54:00:5b:e3:5f}
	I1030 19:22:33.718815  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined IP address 192.168.39.83 and MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:33.718850  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHUsername
	I1030 19:22:33.719004  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHPort
	I1030 19:22:33.719010  422178 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/test-preload-719843/id_rsa Username:docker}
	I1030 19:22:33.719157  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHKeyPath
	I1030 19:22:33.719305  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHUsername
	I1030 19:22:33.719450  422178 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/test-preload-719843/id_rsa Username:docker}
	I1030 19:22:33.823284  422178 ssh_runner.go:195] Run: systemctl --version
	I1030 19:22:33.829308  422178 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 19:22:33.970258  422178 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 19:22:33.976422  422178 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 19:22:33.976498  422178 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 19:22:33.991972  422178 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 19:22:33.991999  422178 start.go:495] detecting cgroup driver to use...
	I1030 19:22:33.992057  422178 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 19:22:34.008008  422178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 19:22:34.021973  422178 docker.go:217] disabling cri-docker service (if available) ...
	I1030 19:22:34.022020  422178 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 19:22:34.035720  422178 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 19:22:34.052068  422178 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 19:22:34.184297  422178 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 19:22:34.359208  422178 docker.go:233] disabling docker service ...
	I1030 19:22:34.359288  422178 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 19:22:34.374141  422178 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 19:22:34.386995  422178 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 19:22:34.506780  422178 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 19:22:34.626573  422178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 19:22:34.640480  422178 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 19:22:34.658901  422178 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I1030 19:22:34.658958  422178 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:22:34.669636  422178 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 19:22:34.669704  422178 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:22:34.680121  422178 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:22:34.690181  422178 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:22:34.700407  422178 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 19:22:34.710921  422178 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:22:34.720966  422178 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:22:34.737133  422178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
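The runtime prep above first writes /etc/crictl.yaml so crictl talks to CRI-O's socket, then patches /etc/crio/crio.conf.d/02-crio.conf in place: pause image registry.k8s.io/pause:3.7, cgroup_manager = "cgroupfs", conmon_cgroup = "pod", plus the net.ipv4.ip_unprivileged_port_start sysctl. A rough local sketch of those edits in Go (the real code shells the sed commands shown above over SSH; the sysctl entry is omitted here for brevity and root access is assumed):

```go
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	// 1. Point crictl at the CRI-O socket (the sudo tee /etc/crictl.yaml step).
	if err := os.WriteFile("/etc/crictl.yaml",
		[]byte("runtime-endpoint: unix:///var/run/crio/crio.sock\n"), 0o644); err != nil {
		log.Fatal(err)
	}

	// 2. Patch the CRI-O drop-in the same way the sed commands above do.
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		log.Fatal(err)
	}
	s := string(data)
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.7"`)
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)
	s = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n?`).ReplaceAllString(s, "")
	s = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(s, "$0\nconmon_cgroup = \"pod\"")

	if err := os.WriteFile(conf, []byte(s), 0o644); err != nil {
		log.Fatal(err)
	}
}
```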
	I1030 19:22:34.747266  422178 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 19:22:34.756661  422178 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 19:22:34.756712  422178 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 19:22:34.769381  422178 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 19:22:34.779033  422178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:22:34.898426  422178 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1030 19:22:34.986456  422178 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 19:22:34.986553  422178 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 19:22:34.991190  422178 start.go:563] Will wait 60s for crictl version
	I1030 19:22:34.991238  422178 ssh_runner.go:195] Run: which crictl
	I1030 19:22:34.994900  422178 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 19:22:35.033736  422178 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
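After restarting CRI-O, the log waits up to 60s for /var/run/crio/crio.sock to appear and then for crictl to answer a version query. A small Go poll loop in the same spirit (a sketch; only the 60s budgets are taken from the "Will wait 60s" messages above):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitFor polls fn until it returns nil or the timeout expires.
func waitFor(timeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s: %w", timeout, err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	// Wait for the CRI-O socket to exist, then for crictl to answer.
	if err := waitFor(60*time.Second, func() error {
		_, err := os.Stat("/var/run/crio/crio.sock")
		return err
	}); err != nil {
		fmt.Println("socket:", err)
		return
	}
	if err := waitFor(60*time.Second, func() error {
		return exec.Command("sudo", "/usr/bin/crictl", "version").Run()
	}); err != nil {
		fmt.Println("crictl:", err)
	}
}
```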
	I1030 19:22:35.033823  422178 ssh_runner.go:195] Run: crio --version
	I1030 19:22:35.061731  422178 ssh_runner.go:195] Run: crio --version
	I1030 19:22:35.091988  422178 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I1030 19:22:35.093223  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetIP
	I1030 19:22:35.096000  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:35.096348  422178 main.go:141] libmachine: (test-preload-719843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:e3:5f", ip: ""} in network mk-test-preload-719843: {Iface:virbr1 ExpiryTime:2024-10-30 20:19:27 +0000 UTC Type:0 Mac:52:54:00:5b:e3:5f Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:test-preload-719843 Clientid:01:52:54:00:5b:e3:5f}
	I1030 19:22:35.096381  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined IP address 192.168.39.83 and MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:35.096606  422178 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1030 19:22:35.100915  422178 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:22:35.113704  422178 kubeadm.go:883] updating cluster {Name:test-preload-719843 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.24.4 ClusterName:test-preload-719843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.83 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1030 19:22:35.113818  422178 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1030 19:22:35.113861  422178 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:22:35.149645  422178 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1030 19:22:35.149729  422178 ssh_runner.go:195] Run: which lz4
	I1030 19:22:35.153947  422178 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1030 19:22:35.158277  422178 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1030 19:22:35.158312  422178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I1030 19:22:36.648915  422178 crio.go:462] duration metric: took 1.495006143s to copy over tarball
	I1030 19:22:36.649012  422178 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1030 19:22:38.975802  422178 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.326756925s)
	I1030 19:22:38.975832  422178 crio.go:469] duration metric: took 2.326881117s to extract the tarball
	I1030 19:22:38.975840  422178 ssh_runner.go:146] rm: /preloaded.tar.lz4
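The preload flow above is: stat /preloaded.tar.lz4 on the node, scp the ~460 MB tarball over when it is missing, unpack it into /var with lz4 decompression while preserving xattrs, then delete it. A sketch of the extract-and-clean-up half, assuming the tarball is already on the machine and tar/lz4 are installed:

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"

	// Skip the copy if the tarball is already in place (the log's stat check).
	if _, err := os.Stat(tarball); err != nil {
		log.Fatalf("%s not present; it would be scp'd over first: %v", tarball, err)
	}

	// Same extraction the log runs: keep xattrs (including file capabilities)
	// and decompress with lz4 while unpacking into /var.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}

	// The tarball is removed once the images are unpacked.
	if err := exec.Command("sudo", "rm", "-f", tarball).Run(); err != nil {
		log.Printf("cleanup failed: %v", err)
	}
}
```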
	I1030 19:22:39.017011  422178 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:22:39.061384  422178 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1030 19:22:39.061410  422178 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1030 19:22:39.061461  422178 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:22:39.061493  422178 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1030 19:22:39.061501  422178 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1030 19:22:39.061507  422178 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1030 19:22:39.061547  422178 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1030 19:22:39.061548  422178 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1030 19:22:39.061534  422178 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1030 19:22:39.061583  422178 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1030 19:22:39.062939  422178 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:22:39.062941  422178 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1030 19:22:39.063014  422178 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1030 19:22:39.063020  422178 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1030 19:22:39.063020  422178 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1030 19:22:39.062946  422178 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1030 19:22:39.063043  422178 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1030 19:22:39.063043  422178 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1030 19:22:39.282725  422178 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I1030 19:22:39.288346  422178 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I1030 19:22:39.338121  422178 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I1030 19:22:39.338166  422178 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I1030 19:22:39.338212  422178 ssh_runner.go:195] Run: which crictl
	I1030 19:22:39.346890  422178 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I1030 19:22:39.346936  422178 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1030 19:22:39.346957  422178 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1030 19:22:39.346982  422178 ssh_runner.go:195] Run: which crictl
	I1030 19:22:39.350796  422178 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1030 19:22:39.376826  422178 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I1030 19:22:39.379521  422178 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1030 19:22:39.380292  422178 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1030 19:22:39.382409  422178 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1030 19:22:39.416478  422178 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1030 19:22:39.416505  422178 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1030 19:22:39.418865  422178 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I1030 19:22:39.479209  422178 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I1030 19:22:39.479307  422178 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I1030 19:22:39.479357  422178 ssh_runner.go:195] Run: which crictl
	I1030 19:22:39.529355  422178 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I1030 19:22:39.529411  422178 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1030 19:22:39.529486  422178 ssh_runner.go:195] Run: which crictl
	I1030 19:22:39.532584  422178 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I1030 19:22:39.532626  422178 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1030 19:22:39.532670  422178 ssh_runner.go:195] Run: which crictl
	I1030 19:22:39.555505  422178 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I1030 19:22:39.555557  422178 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I1030 19:22:39.555598  422178 ssh_runner.go:195] Run: which crictl
	I1030 19:22:39.568353  422178 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1030 19:22:39.568452  422178 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1030 19:22:39.586505  422178 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I1030 19:22:39.586538  422178 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I1030 19:22:39.586581  422178 ssh_runner.go:195] Run: which crictl
	I1030 19:22:39.586618  422178 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1030 19:22:39.586673  422178 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1030 19:22:39.586696  422178 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1030 19:22:39.586773  422178 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1030 19:22:39.665602  422178 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1030 19:22:39.665660  422178 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I1030 19:22:39.665716  422178 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1030 19:22:39.665758  422178 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I1030 19:22:39.711747  422178 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1030 19:22:39.711778  422178 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1030 19:22:39.711822  422178 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1030 19:22:39.711888  422178 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1030 19:22:39.711995  422178 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I1030 19:22:39.712030  422178 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I1030 19:22:39.712038  422178 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I1030 19:22:39.712062  422178 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I1030 19:22:39.712245  422178 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1030 19:22:39.812989  422178 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1030 19:22:39.851167  422178 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1030 19:22:39.851168  422178 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1030 19:22:41.255680  422178 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:22:42.783829  422178 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0: (3.071908874s)
	I1030 19:22:42.783882  422178 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4: (3.071795213s)
	I1030 19:22:42.783907  422178 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I1030 19:22:42.783934  422178 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1030 19:22:42.783935  422178 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1030 19:22:42.783975  422178 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1030 19:22:42.783987  422178 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6: (3.071722293s)
	I1030 19:22:42.784043  422178 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1030 19:22:42.784067  422178 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4: (2.971049343s)
	I1030 19:22:42.784110  422178 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1030 19:22:42.784143  422178 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4: (2.932940723s)
	I1030 19:22:42.784181  422178 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I1030 19:22:42.784241  422178 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7: (2.932992645s)
	I1030 19:22:42.784269  422178 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1030 19:22:42.784279  422178 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I1030 19:22:42.784289  422178 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.528568505s)
	I1030 19:22:42.784367  422178 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1030 19:22:43.484975  422178 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I1030 19:22:43.485044  422178 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I1030 19:22:43.485060  422178 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I1030 19:22:43.485141  422178 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1030 19:22:43.485151  422178 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1030 19:22:43.485186  422178 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I1030 19:22:43.485201  422178 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I1030 19:22:43.485205  422178 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1030 19:22:43.485141  422178 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I1030 19:22:43.485241  422178 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1030 19:22:43.485257  422178 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1030 19:22:43.489968  422178 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I1030 19:22:43.493895  422178 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I1030 19:22:43.944400  422178 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I1030 19:22:43.944448  422178 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I1030 19:22:43.944480  422178 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I1030 19:22:43.944532  422178 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I1030 19:22:44.089702  422178 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I1030 19:22:44.089750  422178 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1030 19:22:44.089807  422178 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I1030 19:22:46.133417  422178 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.043584085s)
	I1030 19:22:46.133454  422178 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1030 19:22:46.133477  422178 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1030 19:22:46.133519  422178 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1030 19:22:46.877438  422178 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I1030 19:22:46.877500  422178 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1030 19:22:46.877588  422178 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I1030 19:22:47.220081  422178 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1030 19:22:47.220132  422178 cache_images.go:123] Successfully loaded all cached images
	I1030 19:22:47.220138  422178 cache_images.go:92] duration metric: took 8.158717562s to LoadCachedImages
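The block above is the LoadCachedImages fallback: since `crictl images --output json` shows none of the required v1.24.4 images, each one is removed (in case a stale copy exists) and re-loaded from the host-side cache with `podman load`. A sketch of the "which required images are missing" check, assuming the usual crictl JSON shape with an images[].repoTags array:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages mirrors only the fields needed from `crictl images --output json`.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.24.4",
		"registry.k8s.io/kube-controller-manager:v1.24.4",
		"registry.k8s.io/kube-scheduler:v1.24.4",
		"registry.k8s.io/kube-proxy:v1.24.4",
		"registry.k8s.io/pause:3.7",
		"registry.k8s.io/etcd:3.5.3-0",
		"registry.k8s.io/coredns/coredns:v1.8.6",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}

	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl images:", err)
		return
	}
	var listed crictlImages
	if err := json.Unmarshal(out, &listed); err != nil {
		fmt.Println("decode:", err)
		return
	}

	present := map[string]bool{}
	for _, img := range listed.Images {
		for _, tag := range img.RepoTags {
			present[tag] = true
		}
	}
	for _, want := range required {
		if !present[want] {
			// These are the images that would be podman-loaded from the cache.
			fmt.Println("missing:", want)
		}
	}
}
```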
	I1030 19:22:47.220153  422178 kubeadm.go:934] updating node { 192.168.39.83 8443 v1.24.4 crio true true} ...
	I1030 19:22:47.220283  422178 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-719843 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.83
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-719843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
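The [Unit]/[Service]/[Install] text above is the kubelet systemd drop-in minikube generates; it is scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below (the 378-byte copy). A sketch of rendering it from per-node values; the struct and its field names are illustrative, not minikube's own types:

```go
package main

import (
	"fmt"
)

// nodeParams is an illustrative stand-in for the values minikube fills in.
type nodeParams struct {
	KubernetesVersion string
	Hostname          string
	NodeIP            string
	CRISocket         string
}

func kubeletDropIn(p nodeParams) string {
	return fmt.Sprintf(`[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/%[1]s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=%[4]s --hostname-override=%[2]s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%[3]s

[Install]
`, p.KubernetesVersion, p.Hostname, p.NodeIP, p.CRISocket)
}

func main() {
	fmt.Print(kubeletDropIn(nodeParams{
		KubernetesVersion: "v1.24.4",
		Hostname:          "test-preload-719843",
		NodeIP:            "192.168.39.83",
		CRISocket:         "unix:///var/run/crio/crio.sock",
	}))
}
```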
	I1030 19:22:47.220366  422178 ssh_runner.go:195] Run: crio config
	I1030 19:22:47.263198  422178 cni.go:84] Creating CNI manager for ""
	I1030 19:22:47.263222  422178 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:22:47.263233  422178 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1030 19:22:47.263252  422178 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.83 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-719843 NodeName:test-preload-719843 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.83"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.83 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1030 19:22:47.263388  422178 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.83
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-719843"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.83
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.83"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1030 19:22:47.263454  422178 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I1030 19:22:47.273497  422178 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 19:22:47.273557  422178 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1030 19:22:47.282988  422178 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1030 19:22:47.298777  422178 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 19:22:47.314329  422178 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
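The three scp lines above stage the generated kubelet drop-in, the kubelet unit, and the kubeadm config dumped earlier (the 2103-byte /var/tmp/minikube/kubeadm.yaml.new). A sketch of how the handful of per-cluster values could be injected into such a config with text/template; the template is trimmed to the ClusterConfiguration piece and is illustrative, not the exact file written here:

```go
package main

import (
	"os"
	"text/template"
)

// clusterValues holds the few values that vary per cluster in the config above;
// the field names are illustrative.
type clusterValues struct {
	AdvertiseAddress  string
	KubernetesVersion string
	PodSubnet         string
	ServiceSubnet     string
}

const clusterConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "{{.AdvertiseAddress}}"]
controlPlaneEndpoint: control-plane.minikube.internal:8443
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(clusterConfigTmpl))
	_ = t.Execute(os.Stdout, clusterValues{
		AdvertiseAddress:  "192.168.39.83",
		KubernetesVersion: "v1.24.4",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	})
}
```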
	I1030 19:22:47.330447  422178 ssh_runner.go:195] Run: grep 192.168.39.83	control-plane.minikube.internal$ /etc/hosts
	I1030 19:22:47.334152  422178 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.83	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
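The bash one-liner above is the idempotent /etc/hosts update: drop any existing line for the name, append the fresh "IP<TAB>name" pair, and copy the result back with sudo. The same string rewrite in plain Go (a sketch; the temp-file-plus-sudo-cp step in the log exists only to get past /etc/hosts permissions):

```go
package main

import (
	"fmt"
	"strings"
)

// upsertHost removes any existing entry for name and appends "ip\tname".
func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale mapping, same as the grep -v above
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	before := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"
	fmt.Print(upsertHost(before, "192.168.39.83", "control-plane.minikube.internal"))
}
```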
	I1030 19:22:47.345891  422178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:22:47.462037  422178 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:22:47.479778  422178 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/test-preload-719843 for IP: 192.168.39.83
	I1030 19:22:47.479807  422178 certs.go:194] generating shared ca certs ...
	I1030 19:22:47.479830  422178 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:22:47.480022  422178 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 19:22:47.480079  422178 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 19:22:47.480093  422178 certs.go:256] generating profile certs ...
	I1030 19:22:47.480230  422178 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/test-preload-719843/client.key
	I1030 19:22:47.480333  422178 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/test-preload-719843/apiserver.key.578e2c50
	I1030 19:22:47.480414  422178 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/test-preload-719843/proxy-client.key
	I1030 19:22:47.480602  422178 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem (1338 bytes)
	W1030 19:22:47.480650  422178 certs.go:480] ignoring /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144_empty.pem, impossibly tiny 0 bytes
	I1030 19:22:47.480663  422178 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 19:22:47.480703  422178 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 19:22:47.480728  422178 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 19:22:47.480749  422178 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 19:22:47.480794  422178 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:22:47.481640  422178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 19:22:47.517562  422178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 19:22:47.546290  422178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 19:22:47.575470  422178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 19:22:47.600860  422178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/test-preload-719843/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1030 19:22:47.631125  422178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/test-preload-719843/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1030 19:22:47.658083  422178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/test-preload-719843/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 19:22:47.687809  422178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/test-preload-719843/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1030 19:22:47.714366  422178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 19:22:47.736621  422178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem --> /usr/share/ca-certificates/389144.pem (1338 bytes)
	I1030 19:22:47.759026  422178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /usr/share/ca-certificates/3891442.pem (1708 bytes)
	I1030 19:22:47.781318  422178 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1030 19:22:47.797647  422178 ssh_runner.go:195] Run: openssl version
	I1030 19:22:47.803423  422178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 19:22:47.813580  422178 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:22:47.818209  422178 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:22:47.818255  422178 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:22:47.823930  422178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 19:22:47.834471  422178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/389144.pem && ln -fs /usr/share/ca-certificates/389144.pem /etc/ssl/certs/389144.pem"
	I1030 19:22:47.845052  422178 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/389144.pem
	I1030 19:22:47.849356  422178 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 19:22:47.849403  422178 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/389144.pem
	I1030 19:22:47.854811  422178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/389144.pem /etc/ssl/certs/51391683.0"
	I1030 19:22:47.865309  422178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3891442.pem && ln -fs /usr/share/ca-certificates/3891442.pem /etc/ssl/certs/3891442.pem"
	I1030 19:22:47.875824  422178 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3891442.pem
	I1030 19:22:47.880143  422178 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 19:22:47.880186  422178 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3891442.pem
	I1030 19:22:47.885661  422178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3891442.pem /etc/ssl/certs/3ec20f2e.0"
	I1030 19:22:47.896138  422178 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 19:22:47.900745  422178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1030 19:22:47.906633  422178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1030 19:22:47.912284  422178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1030 19:22:47.918147  422178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1030 19:22:47.923793  422178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1030 19:22:47.929421  422178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
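Each `openssl x509 -noout ... -checkend 86400` call above asks whether a certificate expires within the next 24 hours. The same check can be done natively with crypto/x509; a sketch for a single file, using one of the paths from the log:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Equivalent of: openssl x509 -noout -in <path> -checkend 86400
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
```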
	I1030 19:22:47.935256  422178 kubeadm.go:392] StartCluster: {Name:test-preload-719843 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
24.4 ClusterName:test-preload-719843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.83 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:22:47.935364  422178 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1030 19:22:47.935417  422178 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:22:47.970961  422178 cri.go:89] found id: ""
	I1030 19:22:47.971023  422178 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1030 19:22:47.980945  422178 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1030 19:22:47.980968  422178 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1030 19:22:47.981029  422178 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1030 19:22:47.990444  422178 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1030 19:22:47.990920  422178 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-719843" does not appear in /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 19:22:47.991055  422178 kubeconfig.go:62] /home/jenkins/minikube-integration/19883-381834/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-719843" cluster setting kubeconfig missing "test-preload-719843" context setting]
	I1030 19:22:47.991476  422178 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/kubeconfig: {Name:mkad9ac9beb9742fc1a420e6d4412db8b65962e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:22:47.992093  422178 kapi.go:59] client config for test-preload-719843: &rest.Config{Host:"https://192.168.39.83:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19883-381834/.minikube/profiles/test-preload-719843/client.crt", KeyFile:"/home/jenkins/minikube-integration/19883-381834/.minikube/profiles/test-preload-719843/client.key", CAFile:"/home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint
8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439fe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1030 19:22:47.992822  422178 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1030 19:22:48.001899  422178 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.83
	I1030 19:22:48.001932  422178 kubeadm.go:1160] stopping kube-system containers ...
	I1030 19:22:48.001945  422178 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1030 19:22:48.001986  422178 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:22:48.038281  422178 cri.go:89] found id: ""
	I1030 19:22:48.038349  422178 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1030 19:22:48.053745  422178 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:22:48.063063  422178 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:22:48.063082  422178 kubeadm.go:157] found existing configuration files:
	
	I1030 19:22:48.063135  422178 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 19:22:48.072364  422178 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:22:48.072410  422178 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:22:48.081574  422178 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 19:22:48.090510  422178 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:22:48.090566  422178 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:22:48.099765  422178 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 19:22:48.108415  422178 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:22:48.108473  422178 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:22:48.117844  422178 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 19:22:48.126789  422178 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:22:48.126838  422178 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
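The sequence above is the stale-kubeconfig cleanup that precedes the kubeadm phases: for each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf, the file is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is removed so `kubeadm init phase kubeconfig` regenerates it. A sketch of that loop with plain file access instead of the sudo/grep wrapping:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}

	for _, path := range confs {
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing elsewhere: remove it so kubeadm recreates it.
			fmt.Println("removing stale config:", path)
			_ = os.Remove(path)
			continue
		}
		fmt.Println("keeping:", path)
	}
}
```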
	I1030 19:22:48.136015  422178 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 19:22:48.145535  422178 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:22:48.250034  422178 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:22:49.572611  422178 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.32253214s)
	I1030 19:22:49.572656  422178 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:22:49.844093  422178 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:22:49.918594  422178 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:22:50.026972  422178 api_server.go:52] waiting for apiserver process to appear ...
	I1030 19:22:50.027068  422178 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:22:50.527755  422178 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:22:51.027242  422178 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:22:51.058950  422178 api_server.go:72] duration metric: took 1.031975903s to wait for apiserver process to appear ...
	I1030 19:22:51.058984  422178 api_server.go:88] waiting for apiserver healthz status ...
	I1030 19:22:51.059010  422178 api_server.go:253] Checking apiserver healthz at https://192.168.39.83:8443/healthz ...
	I1030 19:22:51.059644  422178 api_server.go:269] stopped: https://192.168.39.83:8443/healthz: Get "https://192.168.39.83:8443/healthz": dial tcp 192.168.39.83:8443: connect: connection refused
	I1030 19:22:51.559342  422178 api_server.go:253] Checking apiserver healthz at https://192.168.39.83:8443/healthz ...
	I1030 19:22:54.689101  422178 api_server.go:279] https://192.168.39.83:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1030 19:22:54.689140  422178 api_server.go:103] status: https://192.168.39.83:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1030 19:22:54.689159  422178 api_server.go:253] Checking apiserver healthz at https://192.168.39.83:8443/healthz ...
	I1030 19:22:54.702088  422178 api_server.go:279] https://192.168.39.83:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1030 19:22:54.702123  422178 api_server.go:103] status: https://192.168.39.83:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1030 19:22:55.059583  422178 api_server.go:253] Checking apiserver healthz at https://192.168.39.83:8443/healthz ...
	I1030 19:22:55.067495  422178 api_server.go:279] https://192.168.39.83:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:22:55.067535  422178 api_server.go:103] status: https://192.168.39.83:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:22:55.559142  422178 api_server.go:253] Checking apiserver healthz at https://192.168.39.83:8443/healthz ...
	I1030 19:22:55.564390  422178 api_server.go:279] https://192.168.39.83:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:22:55.564415  422178 api_server.go:103] status: https://192.168.39.83:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:22:56.059669  422178 api_server.go:253] Checking apiserver healthz at https://192.168.39.83:8443/healthz ...
	I1030 19:22:56.076145  422178 api_server.go:279] https://192.168.39.83:8443/healthz returned 200:
	ok
	I1030 19:22:56.084842  422178 api_server.go:141] control plane version: v1.24.4
	I1030 19:22:56.084870  422178 api_server.go:131] duration metric: took 5.025877425s to wait for apiserver health ...
	I1030 19:22:56.084882  422178 cni.go:84] Creating CNI manager for ""
	I1030 19:22:56.084890  422178 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:22:56.086690  422178 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1030 19:22:56.088278  422178 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1030 19:22:56.112896  422178 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1030 19:22:56.138421  422178 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 19:22:56.138529  422178 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1030 19:22:56.138548  422178 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1030 19:22:56.162122  422178 system_pods.go:59] 7 kube-system pods found
	I1030 19:22:56.162158  422178 system_pods.go:61] "coredns-6d4b75cb6d-c6prj" [b270ca58-a405-4023-82b8-9f76efa25660] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1030 19:22:56.162164  422178 system_pods.go:61] "etcd-test-preload-719843" [54a8e79d-2603-4a3c-b1ff-a7418e9b6a7e] Running
	I1030 19:22:56.162170  422178 system_pods.go:61] "kube-apiserver-test-preload-719843" [bf68b216-3b72-4057-b1ed-26c7e6e3ac81] Running
	I1030 19:22:56.162178  422178 system_pods.go:61] "kube-controller-manager-test-preload-719843" [cc921d26-7692-4029-b522-43643f229687] Running
	I1030 19:22:56.162183  422178 system_pods.go:61] "kube-proxy-5g85l" [4d11f6b7-7d06-4b8f-80cf-5091762be2eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1030 19:22:56.162187  422178 system_pods.go:61] "kube-scheduler-test-preload-719843" [76f2b508-25b4-492f-807a-12c7c259f0dd] Running
	I1030 19:22:56.162193  422178 system_pods.go:61] "storage-provisioner" [824de9e0-a41e-4de9-abc8-ea585cccec33] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1030 19:22:56.162200  422178 system_pods.go:74] duration metric: took 23.757695ms to wait for pod list to return data ...
	I1030 19:22:56.162210  422178 node_conditions.go:102] verifying NodePressure condition ...
	I1030 19:22:56.165406  422178 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 19:22:56.165432  422178 node_conditions.go:123] node cpu capacity is 2
	I1030 19:22:56.165443  422178 node_conditions.go:105] duration metric: took 3.228306ms to run NodePressure ...
	I1030 19:22:56.165464  422178 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:22:56.392451  422178 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1030 19:22:56.399661  422178 kubeadm.go:739] kubelet initialised
	I1030 19:22:56.399681  422178 kubeadm.go:740] duration metric: took 7.206171ms waiting for restarted kubelet to initialise ...
	I1030 19:22:56.399695  422178 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:22:56.406123  422178 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-c6prj" in "kube-system" namespace to be "Ready" ...
	I1030 19:22:56.411998  422178 pod_ready.go:98] node "test-preload-719843" hosting pod "coredns-6d4b75cb6d-c6prj" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-719843" has status "Ready":"False"
	I1030 19:22:56.412019  422178 pod_ready.go:82] duration metric: took 5.875111ms for pod "coredns-6d4b75cb6d-c6prj" in "kube-system" namespace to be "Ready" ...
	E1030 19:22:56.412027  422178 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-719843" hosting pod "coredns-6d4b75cb6d-c6prj" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-719843" has status "Ready":"False"
	I1030 19:22:56.412033  422178 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-719843" in "kube-system" namespace to be "Ready" ...
	I1030 19:22:56.415600  422178 pod_ready.go:98] node "test-preload-719843" hosting pod "etcd-test-preload-719843" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-719843" has status "Ready":"False"
	I1030 19:22:56.415619  422178 pod_ready.go:82] duration metric: took 3.577771ms for pod "etcd-test-preload-719843" in "kube-system" namespace to be "Ready" ...
	E1030 19:22:56.415626  422178 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-719843" hosting pod "etcd-test-preload-719843" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-719843" has status "Ready":"False"
	I1030 19:22:56.415631  422178 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-719843" in "kube-system" namespace to be "Ready" ...
	I1030 19:22:56.419133  422178 pod_ready.go:98] node "test-preload-719843" hosting pod "kube-apiserver-test-preload-719843" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-719843" has status "Ready":"False"
	I1030 19:22:56.419156  422178 pod_ready.go:82] duration metric: took 3.51254ms for pod "kube-apiserver-test-preload-719843" in "kube-system" namespace to be "Ready" ...
	E1030 19:22:56.419163  422178 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-719843" hosting pod "kube-apiserver-test-preload-719843" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-719843" has status "Ready":"False"
	I1030 19:22:56.419169  422178 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-719843" in "kube-system" namespace to be "Ready" ...
	I1030 19:22:56.541802  422178 pod_ready.go:98] node "test-preload-719843" hosting pod "kube-controller-manager-test-preload-719843" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-719843" has status "Ready":"False"
	I1030 19:22:56.541831  422178 pod_ready.go:82] duration metric: took 122.653891ms for pod "kube-controller-manager-test-preload-719843" in "kube-system" namespace to be "Ready" ...
	E1030 19:22:56.541842  422178 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-719843" hosting pod "kube-controller-manager-test-preload-719843" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-719843" has status "Ready":"False"
	I1030 19:22:56.541848  422178 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5g85l" in "kube-system" namespace to be "Ready" ...
	I1030 19:22:56.942684  422178 pod_ready.go:98] node "test-preload-719843" hosting pod "kube-proxy-5g85l" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-719843" has status "Ready":"False"
	I1030 19:22:56.942713  422178 pod_ready.go:82] duration metric: took 400.856048ms for pod "kube-proxy-5g85l" in "kube-system" namespace to be "Ready" ...
	E1030 19:22:56.942722  422178 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-719843" hosting pod "kube-proxy-5g85l" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-719843" has status "Ready":"False"
	I1030 19:22:56.942733  422178 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-719843" in "kube-system" namespace to be "Ready" ...
	I1030 19:22:57.341829  422178 pod_ready.go:98] node "test-preload-719843" hosting pod "kube-scheduler-test-preload-719843" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-719843" has status "Ready":"False"
	I1030 19:22:57.341855  422178 pod_ready.go:82] duration metric: took 399.115739ms for pod "kube-scheduler-test-preload-719843" in "kube-system" namespace to be "Ready" ...
	E1030 19:22:57.341864  422178 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-719843" hosting pod "kube-scheduler-test-preload-719843" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-719843" has status "Ready":"False"
	I1030 19:22:57.341872  422178 pod_ready.go:39] duration metric: took 942.168324ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:22:57.341890  422178 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1030 19:22:57.354050  422178 ops.go:34] apiserver oom_adj: -16
	I1030 19:22:57.354068  422178 kubeadm.go:597] duration metric: took 9.373094582s to restartPrimaryControlPlane
	I1030 19:22:57.354077  422178 kubeadm.go:394] duration metric: took 9.418834856s to StartCluster
	I1030 19:22:57.354094  422178 settings.go:142] acquiring lock: {Name:mk3778f95b8c1775512fd8244c173f81fde2f8a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:22:57.354166  422178 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 19:22:57.354894  422178 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/kubeconfig: {Name:mkad9ac9beb9742fc1a420e6d4412db8b65962e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:22:57.355148  422178 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.83 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 19:22:57.355232  422178 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1030 19:22:57.355370  422178 config.go:182] Loaded profile config "test-preload-719843": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1030 19:22:57.355368  422178 addons.go:69] Setting storage-provisioner=true in profile "test-preload-719843"
	I1030 19:22:57.355398  422178 addons.go:69] Setting default-storageclass=true in profile "test-preload-719843"
	I1030 19:22:57.355432  422178 addons.go:234] Setting addon storage-provisioner=true in "test-preload-719843"
	I1030 19:22:57.355444  422178 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-719843"
	W1030 19:22:57.355448  422178 addons.go:243] addon storage-provisioner should already be in state true
	I1030 19:22:57.355477  422178 host.go:66] Checking if "test-preload-719843" exists ...
	I1030 19:22:57.355747  422178 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 19:22:57.355779  422178 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:22:57.355866  422178 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 19:22:57.355903  422178 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:22:57.357063  422178 out.go:177] * Verifying Kubernetes components...
	I1030 19:22:57.358333  422178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:22:57.370977  422178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33837
	I1030 19:22:57.371427  422178 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:22:57.371980  422178 main.go:141] libmachine: Using API Version  1
	I1030 19:22:57.372005  422178 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:22:57.372330  422178 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:22:57.372514  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetState
	I1030 19:22:57.374862  422178 kapi.go:59] client config for test-preload-719843: &rest.Config{Host:"https://192.168.39.83:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19883-381834/.minikube/profiles/test-preload-719843/client.crt", KeyFile:"/home/jenkins/minikube-integration/19883-381834/.minikube/profiles/test-preload-719843/client.key", CAFile:"/home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439fe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1030 19:22:57.375180  422178 addons.go:234] Setting addon default-storageclass=true in "test-preload-719843"
	W1030 19:22:57.375198  422178 addons.go:243] addon default-storageclass should already be in state true
	I1030 19:22:57.375226  422178 host.go:66] Checking if "test-preload-719843" exists ...
	I1030 19:22:57.375260  422178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45367
	I1030 19:22:57.375757  422178 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 19:22:57.375809  422178 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:22:57.375883  422178 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:22:57.376442  422178 main.go:141] libmachine: Using API Version  1
	I1030 19:22:57.376469  422178 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:22:57.376810  422178 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:22:57.377261  422178 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 19:22:57.377301  422178 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:22:57.395361  422178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34983
	I1030 19:22:57.395392  422178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39307
	I1030 19:22:57.395893  422178 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:22:57.395942  422178 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:22:57.396435  422178 main.go:141] libmachine: Using API Version  1
	I1030 19:22:57.396450  422178 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:22:57.396578  422178 main.go:141] libmachine: Using API Version  1
	I1030 19:22:57.396594  422178 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:22:57.396797  422178 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:22:57.396980  422178 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:22:57.397103  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetState
	I1030 19:22:57.397414  422178 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 19:22:57.397449  422178 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:22:57.398732  422178 main.go:141] libmachine: (test-preload-719843) Calling .DriverName
	I1030 19:22:57.400919  422178 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:22:57.402317  422178 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 19:22:57.402334  422178 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1030 19:22:57.402348  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHHostname
	I1030 19:22:57.405395  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:57.405845  422178 main.go:141] libmachine: (test-preload-719843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:e3:5f", ip: ""} in network mk-test-preload-719843: {Iface:virbr1 ExpiryTime:2024-10-30 20:19:27 +0000 UTC Type:0 Mac:52:54:00:5b:e3:5f Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:test-preload-719843 Clientid:01:52:54:00:5b:e3:5f}
	I1030 19:22:57.405884  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined IP address 192.168.39.83 and MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:57.405994  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHPort
	I1030 19:22:57.406178  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHKeyPath
	I1030 19:22:57.406334  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHUsername
	I1030 19:22:57.406455  422178 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/test-preload-719843/id_rsa Username:docker}
	I1030 19:22:57.437841  422178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37369
	I1030 19:22:57.438407  422178 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:22:57.438957  422178 main.go:141] libmachine: Using API Version  1
	I1030 19:22:57.438981  422178 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:22:57.439415  422178 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:22:57.439610  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetState
	I1030 19:22:57.440954  422178 main.go:141] libmachine: (test-preload-719843) Calling .DriverName
	I1030 19:22:57.441185  422178 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1030 19:22:57.441201  422178 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1030 19:22:57.441217  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHHostname
	I1030 19:22:57.443625  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:57.443986  422178 main.go:141] libmachine: (test-preload-719843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:e3:5f", ip: ""} in network mk-test-preload-719843: {Iface:virbr1 ExpiryTime:2024-10-30 20:19:27 +0000 UTC Type:0 Mac:52:54:00:5b:e3:5f Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:test-preload-719843 Clientid:01:52:54:00:5b:e3:5f}
	I1030 19:22:57.444008  422178 main.go:141] libmachine: (test-preload-719843) DBG | domain test-preload-719843 has defined IP address 192.168.39.83 and MAC address 52:54:00:5b:e3:5f in network mk-test-preload-719843
	I1030 19:22:57.444135  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHPort
	I1030 19:22:57.444252  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHKeyPath
	I1030 19:22:57.444411  422178 main.go:141] libmachine: (test-preload-719843) Calling .GetSSHUsername
	I1030 19:22:57.444590  422178 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/test-preload-719843/id_rsa Username:docker}
	I1030 19:22:57.519947  422178 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:22:57.536679  422178 node_ready.go:35] waiting up to 6m0s for node "test-preload-719843" to be "Ready" ...
	I1030 19:22:57.622192  422178 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 19:22:57.640177  422178 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1030 19:22:58.592668  422178 main.go:141] libmachine: Making call to close driver server
	I1030 19:22:58.592700  422178 main.go:141] libmachine: (test-preload-719843) Calling .Close
	I1030 19:22:58.592700  422178 main.go:141] libmachine: Making call to close driver server
	I1030 19:22:58.592718  422178 main.go:141] libmachine: (test-preload-719843) Calling .Close
	I1030 19:22:58.592989  422178 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:22:58.593019  422178 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:22:58.593029  422178 main.go:141] libmachine: Making call to close driver server
	I1030 19:22:58.593037  422178 main.go:141] libmachine: (test-preload-719843) Calling .Close
	I1030 19:22:58.593164  422178 main.go:141] libmachine: (test-preload-719843) DBG | Closing plugin on server side
	I1030 19:22:58.593189  422178 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:22:58.593218  422178 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:22:58.593227  422178 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:22:58.593236  422178 main.go:141] libmachine: Making call to close driver server
	I1030 19:22:58.593249  422178 main.go:141] libmachine: (test-preload-719843) Calling .Close
	I1030 19:22:58.593239  422178 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:22:58.593661  422178 main.go:141] libmachine: (test-preload-719843) DBG | Closing plugin on server side
	I1030 19:22:58.593698  422178 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:22:58.593709  422178 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:22:58.600342  422178 main.go:141] libmachine: Making call to close driver server
	I1030 19:22:58.600364  422178 main.go:141] libmachine: (test-preload-719843) Calling .Close
	I1030 19:22:58.600619  422178 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:22:58.600638  422178 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:22:58.603518  422178 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1030 19:22:58.605063  422178 addons.go:510] duration metric: took 1.249844413s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1030 19:22:59.540789  422178 node_ready.go:53] node "test-preload-719843" has status "Ready":"False"
	I1030 19:23:01.541947  422178 node_ready.go:53] node "test-preload-719843" has status "Ready":"False"
	I1030 19:23:04.040354  422178 node_ready.go:53] node "test-preload-719843" has status "Ready":"False"
	I1030 19:23:05.039885  422178 node_ready.go:49] node "test-preload-719843" has status "Ready":"True"
	I1030 19:23:05.039912  422178 node_ready.go:38] duration metric: took 7.503198578s for node "test-preload-719843" to be "Ready" ...
	I1030 19:23:05.039922  422178 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:23:05.044898  422178 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-c6prj" in "kube-system" namespace to be "Ready" ...
	I1030 19:23:05.052750  422178 pod_ready.go:93] pod "coredns-6d4b75cb6d-c6prj" in "kube-system" namespace has status "Ready":"True"
	I1030 19:23:05.052782  422178 pod_ready.go:82] duration metric: took 7.847764ms for pod "coredns-6d4b75cb6d-c6prj" in "kube-system" namespace to be "Ready" ...
	I1030 19:23:05.052795  422178 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-719843" in "kube-system" namespace to be "Ready" ...
	I1030 19:23:06.060845  422178 pod_ready.go:93] pod "etcd-test-preload-719843" in "kube-system" namespace has status "Ready":"True"
	I1030 19:23:06.060873  422178 pod_ready.go:82] duration metric: took 1.0080711s for pod "etcd-test-preload-719843" in "kube-system" namespace to be "Ready" ...
	I1030 19:23:06.060891  422178 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-719843" in "kube-system" namespace to be "Ready" ...
	I1030 19:23:08.072024  422178 pod_ready.go:103] pod "kube-apiserver-test-preload-719843" in "kube-system" namespace has status "Ready":"False"
	I1030 19:23:09.069619  422178 pod_ready.go:93] pod "kube-apiserver-test-preload-719843" in "kube-system" namespace has status "Ready":"True"
	I1030 19:23:09.069644  422178 pod_ready.go:82] duration metric: took 3.008747393s for pod "kube-apiserver-test-preload-719843" in "kube-system" namespace to be "Ready" ...
	I1030 19:23:09.069658  422178 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-719843" in "kube-system" namespace to be "Ready" ...
	I1030 19:23:09.073800  422178 pod_ready.go:93] pod "kube-controller-manager-test-preload-719843" in "kube-system" namespace has status "Ready":"True"
	I1030 19:23:09.073818  422178 pod_ready.go:82] duration metric: took 4.153659ms for pod "kube-controller-manager-test-preload-719843" in "kube-system" namespace to be "Ready" ...
	I1030 19:23:09.073826  422178 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5g85l" in "kube-system" namespace to be "Ready" ...
	I1030 19:23:09.078934  422178 pod_ready.go:93] pod "kube-proxy-5g85l" in "kube-system" namespace has status "Ready":"True"
	I1030 19:23:09.078959  422178 pod_ready.go:82] duration metric: took 5.126927ms for pod "kube-proxy-5g85l" in "kube-system" namespace to be "Ready" ...
	I1030 19:23:09.078971  422178 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-719843" in "kube-system" namespace to be "Ready" ...
	I1030 19:23:09.083387  422178 pod_ready.go:93] pod "kube-scheduler-test-preload-719843" in "kube-system" namespace has status "Ready":"True"
	I1030 19:23:09.083405  422178 pod_ready.go:82] duration metric: took 4.426737ms for pod "kube-scheduler-test-preload-719843" in "kube-system" namespace to be "Ready" ...
	I1030 19:23:09.083414  422178 pod_ready.go:39] duration metric: took 4.043481714s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:23:09.083429  422178 api_server.go:52] waiting for apiserver process to appear ...
	I1030 19:23:09.083470  422178 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:23:09.098176  422178 api_server.go:72] duration metric: took 11.74299652s to wait for apiserver process to appear ...
	I1030 19:23:09.098201  422178 api_server.go:88] waiting for apiserver healthz status ...
	I1030 19:23:09.098220  422178 api_server.go:253] Checking apiserver healthz at https://192.168.39.83:8443/healthz ...
	I1030 19:23:09.103082  422178 api_server.go:279] https://192.168.39.83:8443/healthz returned 200:
	ok
	I1030 19:23:09.103998  422178 api_server.go:141] control plane version: v1.24.4
	I1030 19:23:09.104037  422178 api_server.go:131] duration metric: took 5.82985ms to wait for apiserver health ...
	I1030 19:23:09.104045  422178 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 19:23:09.243164  422178 system_pods.go:59] 7 kube-system pods found
	I1030 19:23:09.243197  422178 system_pods.go:61] "coredns-6d4b75cb6d-c6prj" [b270ca58-a405-4023-82b8-9f76efa25660] Running
	I1030 19:23:09.243201  422178 system_pods.go:61] "etcd-test-preload-719843" [54a8e79d-2603-4a3c-b1ff-a7418e9b6a7e] Running
	I1030 19:23:09.243205  422178 system_pods.go:61] "kube-apiserver-test-preload-719843" [bf68b216-3b72-4057-b1ed-26c7e6e3ac81] Running
	I1030 19:23:09.243208  422178 system_pods.go:61] "kube-controller-manager-test-preload-719843" [cc921d26-7692-4029-b522-43643f229687] Running
	I1030 19:23:09.243211  422178 system_pods.go:61] "kube-proxy-5g85l" [4d11f6b7-7d06-4b8f-80cf-5091762be2eb] Running
	I1030 19:23:09.243216  422178 system_pods.go:61] "kube-scheduler-test-preload-719843" [76f2b508-25b4-492f-807a-12c7c259f0dd] Running
	I1030 19:23:09.243219  422178 system_pods.go:61] "storage-provisioner" [824de9e0-a41e-4de9-abc8-ea585cccec33] Running
	I1030 19:23:09.243226  422178 system_pods.go:74] duration metric: took 139.175668ms to wait for pod list to return data ...
	I1030 19:23:09.243233  422178 default_sa.go:34] waiting for default service account to be created ...
	I1030 19:23:09.440649  422178 default_sa.go:45] found service account: "default"
	I1030 19:23:09.440680  422178 default_sa.go:55] duration metric: took 197.439526ms for default service account to be created ...
	I1030 19:23:09.440692  422178 system_pods.go:116] waiting for k8s-apps to be running ...
	I1030 19:23:09.642695  422178 system_pods.go:86] 7 kube-system pods found
	I1030 19:23:09.642726  422178 system_pods.go:89] "coredns-6d4b75cb6d-c6prj" [b270ca58-a405-4023-82b8-9f76efa25660] Running
	I1030 19:23:09.642732  422178 system_pods.go:89] "etcd-test-preload-719843" [54a8e79d-2603-4a3c-b1ff-a7418e9b6a7e] Running
	I1030 19:23:09.642742  422178 system_pods.go:89] "kube-apiserver-test-preload-719843" [bf68b216-3b72-4057-b1ed-26c7e6e3ac81] Running
	I1030 19:23:09.642747  422178 system_pods.go:89] "kube-controller-manager-test-preload-719843" [cc921d26-7692-4029-b522-43643f229687] Running
	I1030 19:23:09.642750  422178 system_pods.go:89] "kube-proxy-5g85l" [4d11f6b7-7d06-4b8f-80cf-5091762be2eb] Running
	I1030 19:23:09.642754  422178 system_pods.go:89] "kube-scheduler-test-preload-719843" [76f2b508-25b4-492f-807a-12c7c259f0dd] Running
	I1030 19:23:09.642757  422178 system_pods.go:89] "storage-provisioner" [824de9e0-a41e-4de9-abc8-ea585cccec33] Running
	I1030 19:23:09.642764  422178 system_pods.go:126] duration metric: took 202.066332ms to wait for k8s-apps to be running ...
	I1030 19:23:09.642772  422178 system_svc.go:44] waiting for kubelet service to be running ....
	I1030 19:23:09.642821  422178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 19:23:09.657980  422178 system_svc.go:56] duration metric: took 15.194002ms WaitForService to wait for kubelet
	I1030 19:23:09.658021  422178 kubeadm.go:582] duration metric: took 12.302842172s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 19:23:09.658061  422178 node_conditions.go:102] verifying NodePressure condition ...
	I1030 19:23:09.840750  422178 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 19:23:09.840779  422178 node_conditions.go:123] node cpu capacity is 2
	I1030 19:23:09.840795  422178 node_conditions.go:105] duration metric: took 182.723627ms to run NodePressure ...
	I1030 19:23:09.840807  422178 start.go:241] waiting for startup goroutines ...
	I1030 19:23:09.840813  422178 start.go:246] waiting for cluster config update ...
	I1030 19:23:09.840823  422178 start.go:255] writing updated cluster config ...
	I1030 19:23:09.841137  422178 ssh_runner.go:195] Run: rm -f paused
	I1030 19:23:09.890060  422178 start.go:600] kubectl: 1.31.2, cluster: 1.24.4 (minor skew: 7)
	I1030 19:23:09.891885  422178 out.go:201] 
	W1030 19:23:09.893346  422178 out.go:270] ! /usr/local/bin/kubectl is version 1.31.2, which may have incompatibilities with Kubernetes 1.24.4.
	I1030 19:23:09.894644  422178 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I1030 19:23:09.896123  422178 out.go:177] * Done! kubectl is now configured to use "test-preload-719843" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 30 19:23:10 test-preload-719843 crio[663]: time="2024-10-30 19:23:10.794107711Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730316190794089533,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dd7fdc05-af55-4a13-b5f6-4df3d91f6ec1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 19:23:10 test-preload-719843 crio[663]: time="2024-10-30 19:23:10.794718234Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c8cd74f6-2858-425d-a800-f1df34d63a48 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 19:23:10 test-preload-719843 crio[663]: time="2024-10-30 19:23:10.794812572Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c8cd74f6-2858-425d-a800-f1df34d63a48 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 19:23:10 test-preload-719843 crio[663]: time="2024-10-30 19:23:10.794972050Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dea4c2a825e1945785a016ecc3547e403f6d25dbf44ed11d75df2bec96aad037,PodSandboxId:4a17698caaf57668b6cd943490e5c1a92f4272e97e0dd2dab084997eac2fd89c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1730316183115536143,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-c6prj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b270ca58-a405-4023-82b8-9f76efa25660,},Annotations:map[string]string{io.kubernetes.container.hash: ab457300,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8546d03fdabdcd87655f2009ed046962a9859f1df912d0a675d5c69b962aebb,PodSandboxId:0b4e8364a6b77f2c82d09a533ff664bf37b940bf3ccbb60358fe385d8b35cb38,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730316175989853579,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 824de9e0-a41e-4de9-abc8-ea585cccec33,},Annotations:map[string]string{io.kubernetes.container.hash: 606d439e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3843b0bc838eccd0d5ebd396a48aa3443d0bfd59e23e3f8c15743181d93a05de,PodSandboxId:46a4565dcea36df53701f67c6c7368bcb3e89e8bb909a871f823399d6922ae2b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1730316175679252996,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5g85l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d
11f6b7-7d06-4b8f-80cf-5091762be2eb,},Annotations:map[string]string{io.kubernetes.container.hash: bcd18b7f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20290f849682c69ff8b6f234d0c903d2304a2bb3edf6007b0374390a4dc67bf2,PodSandboxId:3091cf8f422c222dc06ce0a230b7066f65357aad3fb098f54f17cd29e8a8df77,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1730316170731280528,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-719843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4027b6e27
2b5658e68d351277803d4fb,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87e0431eec7fd953a464fe7d3e0894437bdc2f9ed5e4ca619800775f0b2290e6,PodSandboxId:0d03d297d0025829b29fcb082a937cb4a8967e70cfde7352cc31f5dd73b967d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1730316170788372947,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-719843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d1721b213191de8296be175e5f29686,},Annotations:map
[string]string{io.kubernetes.container.hash: da87ed4f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea45582ff664fa473e894db53798de8f852010cdcbdebf61f490a7b6b7218bf,PodSandboxId:be5a4ac4f42a73d98481c195592e7f677cebff3593ec30c44eafb22239d51f19,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1730316170733142430,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-719843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26844bab53a0326cc536deee2161eecc,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 276a4be7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdcf9254f6040f13931a50ccd1e07a3a156647af6bf67395c12d68ba3571a89d,PodSandboxId:073fcdcae3ca8313b2c3600934e1a5ba56ad91f7177b7a3e0b2d118c24591f51,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1730316170713335083,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-719843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd7d4bfc7ed873b904773d7a92d511e4,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c8cd74f6-2858-425d-a800-f1df34d63a48 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 19:23:10 test-preload-719843 crio[663]: time="2024-10-30 19:23:10.831377703Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6486fcc8-9fa9-49e3-84b3-13ccfeefea31 name=/runtime.v1.RuntimeService/Version
	Oct 30 19:23:10 test-preload-719843 crio[663]: time="2024-10-30 19:23:10.831463304Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6486fcc8-9fa9-49e3-84b3-13ccfeefea31 name=/runtime.v1.RuntimeService/Version
	Oct 30 19:23:10 test-preload-719843 crio[663]: time="2024-10-30 19:23:10.832511293Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b30d3167-7418-4b61-ba2d-1219e42bb6be name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 19:23:10 test-preload-719843 crio[663]: time="2024-10-30 19:23:10.833141401Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730316190833119163,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b30d3167-7418-4b61-ba2d-1219e42bb6be name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 19:23:10 test-preload-719843 crio[663]: time="2024-10-30 19:23:10.833721362Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=75652341-24a6-42cd-9ae4-ef56e552842b name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 19:23:10 test-preload-719843 crio[663]: time="2024-10-30 19:23:10.833772098Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=75652341-24a6-42cd-9ae4-ef56e552842b name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 19:23:10 test-preload-719843 crio[663]: time="2024-10-30 19:23:10.833926893Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dea4c2a825e1945785a016ecc3547e403f6d25dbf44ed11d75df2bec96aad037,PodSandboxId:4a17698caaf57668b6cd943490e5c1a92f4272e97e0dd2dab084997eac2fd89c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1730316183115536143,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-c6prj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b270ca58-a405-4023-82b8-9f76efa25660,},Annotations:map[string]string{io.kubernetes.container.hash: ab457300,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8546d03fdabdcd87655f2009ed046962a9859f1df912d0a675d5c69b962aebb,PodSandboxId:0b4e8364a6b77f2c82d09a533ff664bf37b940bf3ccbb60358fe385d8b35cb38,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730316175989853579,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 824de9e0-a41e-4de9-abc8-ea585cccec33,},Annotations:map[string]string{io.kubernetes.container.hash: 606d439e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3843b0bc838eccd0d5ebd396a48aa3443d0bfd59e23e3f8c15743181d93a05de,PodSandboxId:46a4565dcea36df53701f67c6c7368bcb3e89e8bb909a871f823399d6922ae2b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1730316175679252996,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5g85l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d
11f6b7-7d06-4b8f-80cf-5091762be2eb,},Annotations:map[string]string{io.kubernetes.container.hash: bcd18b7f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20290f849682c69ff8b6f234d0c903d2304a2bb3edf6007b0374390a4dc67bf2,PodSandboxId:3091cf8f422c222dc06ce0a230b7066f65357aad3fb098f54f17cd29e8a8df77,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1730316170731280528,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-719843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4027b6e27
2b5658e68d351277803d4fb,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87e0431eec7fd953a464fe7d3e0894437bdc2f9ed5e4ca619800775f0b2290e6,PodSandboxId:0d03d297d0025829b29fcb082a937cb4a8967e70cfde7352cc31f5dd73b967d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1730316170788372947,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-719843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d1721b213191de8296be175e5f29686,},Annotations:map
[string]string{io.kubernetes.container.hash: da87ed4f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea45582ff664fa473e894db53798de8f852010cdcbdebf61f490a7b6b7218bf,PodSandboxId:be5a4ac4f42a73d98481c195592e7f677cebff3593ec30c44eafb22239d51f19,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1730316170733142430,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-719843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26844bab53a0326cc536deee2161eecc,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 276a4be7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdcf9254f6040f13931a50ccd1e07a3a156647af6bf67395c12d68ba3571a89d,PodSandboxId:073fcdcae3ca8313b2c3600934e1a5ba56ad91f7177b7a3e0b2d118c24591f51,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1730316170713335083,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-719843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd7d4bfc7ed873b904773d7a92d511e4,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=75652341-24a6-42cd-9ae4-ef56e552842b name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 19:23:10 test-preload-719843 crio[663]: time="2024-10-30 19:23:10.875413204Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b763f4b6-d872-407c-b410-10bae3378e53 name=/runtime.v1.RuntimeService/Version
	Oct 30 19:23:10 test-preload-719843 crio[663]: time="2024-10-30 19:23:10.875503997Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b763f4b6-d872-407c-b410-10bae3378e53 name=/runtime.v1.RuntimeService/Version
	Oct 30 19:23:10 test-preload-719843 crio[663]: time="2024-10-30 19:23:10.876687997Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=340784e6-2a22-48b6-8831-e1260e0b1839 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 19:23:10 test-preload-719843 crio[663]: time="2024-10-30 19:23:10.877142273Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730316190877118228,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=340784e6-2a22-48b6-8831-e1260e0b1839 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 19:23:10 test-preload-719843 crio[663]: time="2024-10-30 19:23:10.878145161Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f91644b8-d694-4db0-9ba4-4f75d3ebb822 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 19:23:10 test-preload-719843 crio[663]: time="2024-10-30 19:23:10.878213879Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f91644b8-d694-4db0-9ba4-4f75d3ebb822 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 19:23:10 test-preload-719843 crio[663]: time="2024-10-30 19:23:10.878407336Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dea4c2a825e1945785a016ecc3547e403f6d25dbf44ed11d75df2bec96aad037,PodSandboxId:4a17698caaf57668b6cd943490e5c1a92f4272e97e0dd2dab084997eac2fd89c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1730316183115536143,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-c6prj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b270ca58-a405-4023-82b8-9f76efa25660,},Annotations:map[string]string{io.kubernetes.container.hash: ab457300,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8546d03fdabdcd87655f2009ed046962a9859f1df912d0a675d5c69b962aebb,PodSandboxId:0b4e8364a6b77f2c82d09a533ff664bf37b940bf3ccbb60358fe385d8b35cb38,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730316175989853579,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 824de9e0-a41e-4de9-abc8-ea585cccec33,},Annotations:map[string]string{io.kubernetes.container.hash: 606d439e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3843b0bc838eccd0d5ebd396a48aa3443d0bfd59e23e3f8c15743181d93a05de,PodSandboxId:46a4565dcea36df53701f67c6c7368bcb3e89e8bb909a871f823399d6922ae2b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1730316175679252996,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5g85l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d
11f6b7-7d06-4b8f-80cf-5091762be2eb,},Annotations:map[string]string{io.kubernetes.container.hash: bcd18b7f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20290f849682c69ff8b6f234d0c903d2304a2bb3edf6007b0374390a4dc67bf2,PodSandboxId:3091cf8f422c222dc06ce0a230b7066f65357aad3fb098f54f17cd29e8a8df77,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1730316170731280528,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-719843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4027b6e27
2b5658e68d351277803d4fb,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87e0431eec7fd953a464fe7d3e0894437bdc2f9ed5e4ca619800775f0b2290e6,PodSandboxId:0d03d297d0025829b29fcb082a937cb4a8967e70cfde7352cc31f5dd73b967d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1730316170788372947,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-719843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d1721b213191de8296be175e5f29686,},Annotations:map
[string]string{io.kubernetes.container.hash: da87ed4f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea45582ff664fa473e894db53798de8f852010cdcbdebf61f490a7b6b7218bf,PodSandboxId:be5a4ac4f42a73d98481c195592e7f677cebff3593ec30c44eafb22239d51f19,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1730316170733142430,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-719843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26844bab53a0326cc536deee2161eecc,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 276a4be7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdcf9254f6040f13931a50ccd1e07a3a156647af6bf67395c12d68ba3571a89d,PodSandboxId:073fcdcae3ca8313b2c3600934e1a5ba56ad91f7177b7a3e0b2d118c24591f51,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1730316170713335083,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-719843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd7d4bfc7ed873b904773d7a92d511e4,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f91644b8-d694-4db0-9ba4-4f75d3ebb822 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 19:23:10 test-preload-719843 crio[663]: time="2024-10-30 19:23:10.912373710Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6ea3f75d-d099-42d3-abfa-aee6fb08b3d3 name=/runtime.v1.RuntimeService/Version
	Oct 30 19:23:10 test-preload-719843 crio[663]: time="2024-10-30 19:23:10.912443145Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6ea3f75d-d099-42d3-abfa-aee6fb08b3d3 name=/runtime.v1.RuntimeService/Version
	Oct 30 19:23:10 test-preload-719843 crio[663]: time="2024-10-30 19:23:10.913329257Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ec3de134-e31c-4863-811f-f4a5ccd59f23 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 19:23:10 test-preload-719843 crio[663]: time="2024-10-30 19:23:10.913907585Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730316190913885522,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ec3de134-e31c-4863-811f-f4a5ccd59f23 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 19:23:10 test-preload-719843 crio[663]: time="2024-10-30 19:23:10.914592366Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4ec348ea-398a-4463-a039-afd86e8279ee name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 19:23:10 test-preload-719843 crio[663]: time="2024-10-30 19:23:10.914662626Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4ec348ea-398a-4463-a039-afd86e8279ee name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 19:23:10 test-preload-719843 crio[663]: time="2024-10-30 19:23:10.914817846Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dea4c2a825e1945785a016ecc3547e403f6d25dbf44ed11d75df2bec96aad037,PodSandboxId:4a17698caaf57668b6cd943490e5c1a92f4272e97e0dd2dab084997eac2fd89c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1730316183115536143,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-c6prj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b270ca58-a405-4023-82b8-9f76efa25660,},Annotations:map[string]string{io.kubernetes.container.hash: ab457300,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8546d03fdabdcd87655f2009ed046962a9859f1df912d0a675d5c69b962aebb,PodSandboxId:0b4e8364a6b77f2c82d09a533ff664bf37b940bf3ccbb60358fe385d8b35cb38,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730316175989853579,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 824de9e0-a41e-4de9-abc8-ea585cccec33,},Annotations:map[string]string{io.kubernetes.container.hash: 606d439e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3843b0bc838eccd0d5ebd396a48aa3443d0bfd59e23e3f8c15743181d93a05de,PodSandboxId:46a4565dcea36df53701f67c6c7368bcb3e89e8bb909a871f823399d6922ae2b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1730316175679252996,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5g85l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d
11f6b7-7d06-4b8f-80cf-5091762be2eb,},Annotations:map[string]string{io.kubernetes.container.hash: bcd18b7f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20290f849682c69ff8b6f234d0c903d2304a2bb3edf6007b0374390a4dc67bf2,PodSandboxId:3091cf8f422c222dc06ce0a230b7066f65357aad3fb098f54f17cd29e8a8df77,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1730316170731280528,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-719843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4027b6e27
2b5658e68d351277803d4fb,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87e0431eec7fd953a464fe7d3e0894437bdc2f9ed5e4ca619800775f0b2290e6,PodSandboxId:0d03d297d0025829b29fcb082a937cb4a8967e70cfde7352cc31f5dd73b967d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1730316170788372947,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-719843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d1721b213191de8296be175e5f29686,},Annotations:map
[string]string{io.kubernetes.container.hash: da87ed4f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea45582ff664fa473e894db53798de8f852010cdcbdebf61f490a7b6b7218bf,PodSandboxId:be5a4ac4f42a73d98481c195592e7f677cebff3593ec30c44eafb22239d51f19,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1730316170733142430,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-719843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26844bab53a0326cc536deee2161eecc,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 276a4be7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdcf9254f6040f13931a50ccd1e07a3a156647af6bf67395c12d68ba3571a89d,PodSandboxId:073fcdcae3ca8313b2c3600934e1a5ba56ad91f7177b7a3e0b2d118c24591f51,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1730316170713335083,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-719843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd7d4bfc7ed873b904773d7a92d511e4,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4ec348ea-398a-4463-a039-afd86e8279ee name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	dea4c2a825e19       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   7 seconds ago       Running             coredns                   1                   4a17698caaf57       coredns-6d4b75cb6d-c6prj
	a8546d03fdabd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       1                   0b4e8364a6b77       storage-provisioner
	3843b0bc838ec       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   15 seconds ago      Running             kube-proxy                1                   46a4565dcea36       kube-proxy-5g85l
	87e0431eec7fd       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   20 seconds ago      Running             etcd                      1                   0d03d297d0025       etcd-test-preload-719843
	cea45582ff664       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   20 seconds ago      Running             kube-apiserver            1                   be5a4ac4f42a7       kube-apiserver-test-preload-719843
	20290f849682c       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   20 seconds ago      Running             kube-scheduler            1                   3091cf8f422c2       kube-scheduler-test-preload-719843
	fdcf9254f6040       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   20 seconds ago      Running             kube-controller-manager   1                   073fcdcae3ca8       kube-controller-manager-test-preload-719843
	
	
	==> coredns [dea4c2a825e1945785a016ecc3547e403f6d25dbf44ed11d75df2bec96aad037] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:56473 - 41206 "HINFO IN 1024272706137852946.2823509643440747057. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012495684s
	
	
	==> describe nodes <==
	Name:               test-preload-719843
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-719843
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0
	                    minikube.k8s.io/name=test-preload-719843
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_30T19_20_46_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Oct 2024 19:20:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-719843
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Oct 2024 19:23:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Oct 2024 19:23:04 +0000   Wed, 30 Oct 2024 19:20:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Oct 2024 19:23:04 +0000   Wed, 30 Oct 2024 19:20:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Oct 2024 19:23:04 +0000   Wed, 30 Oct 2024 19:20:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Oct 2024 19:23:04 +0000   Wed, 30 Oct 2024 19:23:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.83
	  Hostname:    test-preload-719843
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9551beedf30245429f5ee890a154999e
	  System UUID:                9551beed-f302-4542-9f5e-e890a154999e
	  Boot ID:                    9efa64ee-887b-4f1d-9c94-4a58b6ae629c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-c6prj                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m12s
	  kube-system                 etcd-test-preload-719843                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m26s
	  kube-system                 kube-apiserver-test-preload-719843             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-controller-manager-test-preload-719843    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-proxy-5g85l                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-scheduler-test-preload-719843             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m10s                  kube-proxy       
	  Normal  Starting                 15s                    kube-proxy       
	  Normal  NodeAllocatableEnforced  2m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m34s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m33s (x5 over 2m34s)  kubelet          Node test-preload-719843 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m33s (x4 over 2m34s)  kubelet          Node test-preload-719843 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m33s (x5 over 2m34s)  kubelet          Node test-preload-719843 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m26s                  kubelet          Node test-preload-719843 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m26s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m26s                  kubelet          Node test-preload-719843 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m26s                  kubelet          Node test-preload-719843 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                2m15s                  kubelet          Node test-preload-719843 status is now: NodeReady
	  Normal  RegisteredNode           2m12s                  node-controller  Node test-preload-719843 event: Registered Node test-preload-719843 in Controller
	  Normal  Starting                 22s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21s (x8 over 21s)      kubelet          Node test-preload-719843 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 21s)      kubelet          Node test-preload-719843 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 21s)      kubelet          Node test-preload-719843 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4s                     node-controller  Node test-preload-719843 event: Registered Node test-preload-719843 in Controller
	
	
	==> dmesg <==
	[Oct30 19:22] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050148] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039024] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.858411] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.574374] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +2.461525] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.941144] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.060344] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073324] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.192394] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.120886] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.270227] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[ +12.570913] systemd-fstab-generator[991]: Ignoring "noauto" option for root device
	[  +0.054699] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.309297] systemd-fstab-generator[1120]: Ignoring "noauto" option for root device
	[  +5.513677] kauditd_printk_skb: 105 callbacks suppressed
	[  +2.135008] systemd-fstab-generator[1751]: Ignoring "noauto" option for root device
	[Oct30 19:23] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [87e0431eec7fd953a464fe7d3e0894437bdc2f9ed5e4ca619800775f0b2290e6] <==
	{"level":"info","ts":"2024-10-30T19:22:51.172Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"4841372cab76acee","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-10-30T19:22:51.172Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-10-30T19:22:51.172Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4841372cab76acee switched to configuration voters=(5206503309211774190)"}
	{"level":"info","ts":"2024-10-30T19:22:51.176Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"4d98984c2209a4ae","local-member-id":"4841372cab76acee","added-peer-id":"4841372cab76acee","added-peer-peer-urls":["https://192.168.39.83:2380"]}
	{"level":"info","ts":"2024-10-30T19:22:51.176Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4d98984c2209a4ae","local-member-id":"4841372cab76acee","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-30T19:22:51.179Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-30T19:22:51.179Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.83:2380"}
	{"level":"info","ts":"2024-10-30T19:22:51.179Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.83:2380"}
	{"level":"info","ts":"2024-10-30T19:22:51.179Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-30T19:22:51.183Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"4841372cab76acee","initial-advertise-peer-urls":["https://192.168.39.83:2380"],"listen-peer-urls":["https://192.168.39.83:2380"],"advertise-client-urls":["https://192.168.39.83:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.83:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-30T19:22:51.183Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-30T19:22:52.130Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4841372cab76acee is starting a new election at term 2"}
	{"level":"info","ts":"2024-10-30T19:22:52.130Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4841372cab76acee became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-30T19:22:52.130Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4841372cab76acee received MsgPreVoteResp from 4841372cab76acee at term 2"}
	{"level":"info","ts":"2024-10-30T19:22:52.130Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4841372cab76acee became candidate at term 3"}
	{"level":"info","ts":"2024-10-30T19:22:52.130Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4841372cab76acee received MsgVoteResp from 4841372cab76acee at term 3"}
	{"level":"info","ts":"2024-10-30T19:22:52.130Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4841372cab76acee became leader at term 3"}
	{"level":"info","ts":"2024-10-30T19:22:52.130Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4841372cab76acee elected leader 4841372cab76acee at term 3"}
	{"level":"info","ts":"2024-10-30T19:22:52.131Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"4841372cab76acee","local-member-attributes":"{Name:test-preload-719843 ClientURLs:[https://192.168.39.83:2379]}","request-path":"/0/members/4841372cab76acee/attributes","cluster-id":"4d98984c2209a4ae","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-30T19:22:52.131Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-30T19:22:52.136Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-30T19:22:52.137Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-30T19:22:52.139Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.83:2379"}
	{"level":"info","ts":"2024-10-30T19:22:52.142Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-30T19:22:52.142Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:23:11 up 0 min,  0 users,  load average: 0.81, 0.25, 0.09
	Linux test-preload-719843 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [cea45582ff664fa473e894db53798de8f852010cdcbdebf61f490a7b6b7218bf] <==
	I1030 19:22:54.632892       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1030 19:22:54.632994       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1030 19:22:54.633264       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1030 19:22:54.633322       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1030 19:22:54.678517       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1030 19:22:54.678661       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I1030 19:22:54.768174       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1030 19:22:54.778672       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1030 19:22:54.782039       1 shared_informer.go:262] Caches are synced for crd-autoregister
	E1030 19:22:54.792900       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I1030 19:22:54.811815       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1030 19:22:54.825074       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1030 19:22:54.825182       1 cache.go:39] Caches are synced for autoregister controller
	I1030 19:22:54.825403       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1030 19:22:54.827975       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1030 19:22:55.317935       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1030 19:22:55.634920       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1030 19:22:56.153526       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I1030 19:22:56.268270       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1030 19:22:56.283796       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1030 19:22:56.323073       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1030 19:22:56.343037       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1030 19:22:56.352018       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1030 19:23:07.106644       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1030 19:23:07.220912       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [fdcf9254f6040f13931a50ccd1e07a3a156647af6bf67395c12d68ba3571a89d] <==
	I1030 19:23:07.113185       1 shared_informer.go:262] Caches are synced for GC
	I1030 19:23:07.114331       1 shared_informer.go:262] Caches are synced for TTL
	I1030 19:23:07.116656       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I1030 19:23:07.121389       1 shared_informer.go:262] Caches are synced for PVC protection
	I1030 19:23:07.124484       1 shared_informer.go:262] Caches are synced for deployment
	I1030 19:23:07.127614       1 shared_informer.go:262] Caches are synced for taint
	I1030 19:23:07.127728       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W1030 19:23:07.127895       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-719843. Assuming now as a timestamp.
	I1030 19:23:07.127951       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I1030 19:23:07.128973       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I1030 19:23:07.129181       1 event.go:294] "Event occurred" object="test-preload-719843" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-719843 event: Registered Node test-preload-719843 in Controller"
	I1030 19:23:07.133334       1 shared_informer.go:262] Caches are synced for persistent volume
	I1030 19:23:07.136737       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I1030 19:23:07.136990       1 shared_informer.go:262] Caches are synced for namespace
	I1030 19:23:07.143668       1 shared_informer.go:262] Caches are synced for attach detach
	I1030 19:23:07.209459       1 shared_informer.go:262] Caches are synced for endpoint
	I1030 19:23:07.212846       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I1030 19:23:07.256929       1 shared_informer.go:262] Caches are synced for HPA
	I1030 19:23:07.270302       1 shared_informer.go:262] Caches are synced for disruption
	I1030 19:23:07.270452       1 disruption.go:371] Sending events to api server.
	I1030 19:23:07.270356       1 shared_informer.go:262] Caches are synced for resource quota
	I1030 19:23:07.300392       1 shared_informer.go:262] Caches are synced for resource quota
	I1030 19:23:07.742771       1 shared_informer.go:262] Caches are synced for garbage collector
	I1030 19:23:07.742870       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1030 19:23:07.744027       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [3843b0bc838eccd0d5ebd396a48aa3443d0bfd59e23e3f8c15743181d93a05de] <==
	I1030 19:22:56.054459       1 node.go:163] Successfully retrieved node IP: 192.168.39.83
	I1030 19:22:56.054629       1 server_others.go:138] "Detected node IP" address="192.168.39.83"
	I1030 19:22:56.054696       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1030 19:22:56.120123       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1030 19:22:56.120161       1 server_others.go:206] "Using iptables Proxier"
	I1030 19:22:56.120199       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1030 19:22:56.120462       1 server.go:661] "Version info" version="v1.24.4"
	I1030 19:22:56.120491       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1030 19:22:56.126084       1 config.go:317] "Starting service config controller"
	I1030 19:22:56.137040       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1030 19:22:56.127364       1 config.go:226] "Starting endpoint slice config controller"
	I1030 19:22:56.137324       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1030 19:22:56.128936       1 config.go:444] "Starting node config controller"
	I1030 19:22:56.137412       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1030 19:22:56.237601       1 shared_informer.go:262] Caches are synced for node config
	I1030 19:22:56.237628       1 shared_informer.go:262] Caches are synced for service config
	I1030 19:22:56.237649       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [20290f849682c69ff8b6f234d0c903d2304a2bb3edf6007b0374390a4dc67bf2] <==
	I1030 19:22:51.447607       1 serving.go:348] Generated self-signed cert in-memory
	I1030 19:22:54.819983       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I1030 19:22:54.820173       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1030 19:22:54.835727       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1030 19:22:54.835932       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1030 19:22:54.836244       1 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1030 19:22:54.836349       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1030 19:22:54.841143       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1030 19:22:54.841241       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1030 19:22:54.841325       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1030 19:22:54.841380       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1030 19:22:54.936758       1 shared_informer.go:262] Caches are synced for RequestHeaderAuthRequestController
	I1030 19:22:54.942133       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1030 19:22:54.942143       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 30 19:22:54 test-preload-719843 kubelet[1127]: I1030 19:22:54.966651    1127 apiserver.go:52] "Watching apiserver"
	Oct 30 19:22:54 test-preload-719843 kubelet[1127]: I1030 19:22:54.970099    1127 topology_manager.go:200] "Topology Admit Handler"
	Oct 30 19:22:54 test-preload-719843 kubelet[1127]: I1030 19:22:54.970196    1127 topology_manager.go:200] "Topology Admit Handler"
	Oct 30 19:22:54 test-preload-719843 kubelet[1127]: I1030 19:22:54.970235    1127 topology_manager.go:200] "Topology Admit Handler"
	Oct 30 19:22:54 test-preload-719843 kubelet[1127]: E1030 19:22:54.971914    1127 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-c6prj" podUID=b270ca58-a405-4023-82b8-9f76efa25660
	Oct 30 19:22:55 test-preload-719843 kubelet[1127]: I1030 19:22:55.034253    1127 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b270ca58-a405-4023-82b8-9f76efa25660-config-volume\") pod \"coredns-6d4b75cb6d-c6prj\" (UID: \"b270ca58-a405-4023-82b8-9f76efa25660\") " pod="kube-system/coredns-6d4b75cb6d-c6prj"
	Oct 30 19:22:55 test-preload-719843 kubelet[1127]: I1030 19:22:55.034311    1127 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d11f6b7-7d06-4b8f-80cf-5091762be2eb-xtables-lock\") pod \"kube-proxy-5g85l\" (UID: \"4d11f6b7-7d06-4b8f-80cf-5091762be2eb\") " pod="kube-system/kube-proxy-5g85l"
	Oct 30 19:22:55 test-preload-719843 kubelet[1127]: I1030 19:22:55.034335    1127 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckvdg\" (UniqueName: \"kubernetes.io/projected/b270ca58-a405-4023-82b8-9f76efa25660-kube-api-access-ckvdg\") pod \"coredns-6d4b75cb6d-c6prj\" (UID: \"b270ca58-a405-4023-82b8-9f76efa25660\") " pod="kube-system/coredns-6d4b75cb6d-c6prj"
	Oct 30 19:22:55 test-preload-719843 kubelet[1127]: I1030 19:22:55.034355    1127 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d11f6b7-7d06-4b8f-80cf-5091762be2eb-lib-modules\") pod \"kube-proxy-5g85l\" (UID: \"4d11f6b7-7d06-4b8f-80cf-5091762be2eb\") " pod="kube-system/kube-proxy-5g85l"
	Oct 30 19:22:55 test-preload-719843 kubelet[1127]: I1030 19:22:55.034372    1127 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g2b9\" (UniqueName: \"kubernetes.io/projected/4d11f6b7-7d06-4b8f-80cf-5091762be2eb-kube-api-access-7g2b9\") pod \"kube-proxy-5g85l\" (UID: \"4d11f6b7-7d06-4b8f-80cf-5091762be2eb\") " pod="kube-system/kube-proxy-5g85l"
	Oct 30 19:22:55 test-preload-719843 kubelet[1127]: I1030 19:22:55.034390    1127 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4d11f6b7-7d06-4b8f-80cf-5091762be2eb-kube-proxy\") pod \"kube-proxy-5g85l\" (UID: \"4d11f6b7-7d06-4b8f-80cf-5091762be2eb\") " pod="kube-system/kube-proxy-5g85l"
	Oct 30 19:22:55 test-preload-719843 kubelet[1127]: I1030 19:22:55.034406    1127 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/824de9e0-a41e-4de9-abc8-ea585cccec33-tmp\") pod \"storage-provisioner\" (UID: \"824de9e0-a41e-4de9-abc8-ea585cccec33\") " pod="kube-system/storage-provisioner"
	Oct 30 19:22:55 test-preload-719843 kubelet[1127]: I1030 19:22:55.034437    1127 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4ds9\" (UniqueName: \"kubernetes.io/projected/824de9e0-a41e-4de9-abc8-ea585cccec33-kube-api-access-l4ds9\") pod \"storage-provisioner\" (UID: \"824de9e0-a41e-4de9-abc8-ea585cccec33\") " pod="kube-system/storage-provisioner"
	Oct 30 19:22:55 test-preload-719843 kubelet[1127]: I1030 19:22:55.034455    1127 reconciler.go:159] "Reconciler: start to sync state"
	Oct 30 19:22:55 test-preload-719843 kubelet[1127]: E1030 19:22:55.049012    1127 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Oct 30 19:22:55 test-preload-719843 kubelet[1127]: E1030 19:22:55.138329    1127 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 30 19:22:55 test-preload-719843 kubelet[1127]: E1030 19:22:55.138658    1127 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/b270ca58-a405-4023-82b8-9f76efa25660-config-volume podName:b270ca58-a405-4023-82b8-9f76efa25660 nodeName:}" failed. No retries permitted until 2024-10-30 19:22:55.6386245 +0000 UTC m=+5.802795544 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b270ca58-a405-4023-82b8-9f76efa25660-config-volume") pod "coredns-6d4b75cb6d-c6prj" (UID: "b270ca58-a405-4023-82b8-9f76efa25660") : object "kube-system"/"coredns" not registered
	Oct 30 19:22:55 test-preload-719843 kubelet[1127]: E1030 19:22:55.641003    1127 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 30 19:22:55 test-preload-719843 kubelet[1127]: E1030 19:22:55.641085    1127 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/b270ca58-a405-4023-82b8-9f76efa25660-config-volume podName:b270ca58-a405-4023-82b8-9f76efa25660 nodeName:}" failed. No retries permitted until 2024-10-30 19:22:56.641069206 +0000 UTC m=+6.805240243 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b270ca58-a405-4023-82b8-9f76efa25660-config-volume") pod "coredns-6d4b75cb6d-c6prj" (UID: "b270ca58-a405-4023-82b8-9f76efa25660") : object "kube-system"/"coredns" not registered
	Oct 30 19:22:56 test-preload-719843 kubelet[1127]: E1030 19:22:56.650400    1127 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 30 19:22:56 test-preload-719843 kubelet[1127]: E1030 19:22:56.650508    1127 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/b270ca58-a405-4023-82b8-9f76efa25660-config-volume podName:b270ca58-a405-4023-82b8-9f76efa25660 nodeName:}" failed. No retries permitted until 2024-10-30 19:22:58.650491687 +0000 UTC m=+8.814662725 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b270ca58-a405-4023-82b8-9f76efa25660-config-volume") pod "coredns-6d4b75cb6d-c6prj" (UID: "b270ca58-a405-4023-82b8-9f76efa25660") : object "kube-system"/"coredns" not registered
	Oct 30 19:22:57 test-preload-719843 kubelet[1127]: E1030 19:22:57.091343    1127 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-c6prj" podUID=b270ca58-a405-4023-82b8-9f76efa25660
	Oct 30 19:22:58 test-preload-719843 kubelet[1127]: E1030 19:22:58.667510    1127 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 30 19:22:58 test-preload-719843 kubelet[1127]: E1030 19:22:58.667688    1127 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/b270ca58-a405-4023-82b8-9f76efa25660-config-volume podName:b270ca58-a405-4023-82b8-9f76efa25660 nodeName:}" failed. No retries permitted until 2024-10-30 19:23:02.667659617 +0000 UTC m=+12.831830655 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b270ca58-a405-4023-82b8-9f76efa25660-config-volume") pod "coredns-6d4b75cb6d-c6prj" (UID: "b270ca58-a405-4023-82b8-9f76efa25660") : object "kube-system"/"coredns" not registered
	Oct 30 19:22:59 test-preload-719843 kubelet[1127]: E1030 19:22:59.091624    1127 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-c6prj" podUID=b270ca58-a405-4023-82b8-9f76efa25660
	
	
	==> storage-provisioner [a8546d03fdabdcd87655f2009ed046962a9859f1df912d0a675d5c69b962aebb] <==
	I1030 19:22:56.100066       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-719843 -n test-preload-719843
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-719843 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-719843" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-719843
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-719843: (1.173305414s)
--- FAIL: TestPreload (240.15s)

x
+
TestKubernetesUpgrade (432.16s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade


=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-831845 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1030 19:28:17.243820  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/functional-683899/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-831845 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m54.938223418s)

-- stdout --
	* [kubernetes-upgrade-831845] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19883
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19883-381834/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19883-381834/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-831845" primary control-plane node in "kubernetes-upgrade-831845" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I1030 19:28:03.491621  428523 out.go:345] Setting OutFile to fd 1 ...
	I1030 19:28:03.491978  428523 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 19:28:03.492012  428523 out.go:358] Setting ErrFile to fd 2...
	I1030 19:28:03.492029  428523 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 19:28:03.492345  428523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19883-381834/.minikube/bin
	I1030 19:28:03.493166  428523 out.go:352] Setting JSON to false
	I1030 19:28:03.494688  428523 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":11426,"bootTime":1730305057,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1030 19:28:03.494866  428523 start.go:139] virtualization: kvm guest
	I1030 19:28:03.497266  428523 out.go:177] * [kubernetes-upgrade-831845] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1030 19:28:03.499098  428523 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 19:28:03.499111  428523 notify.go:220] Checking for updates...
	I1030 19:28:03.500427  428523 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 19:28:03.501825  428523 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 19:28:03.503121  428523 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 19:28:03.506627  428523 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1030 19:28:03.508165  428523 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 19:28:03.510162  428523 config.go:182] Loaded profile config "NoKubernetes-820435": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1030 19:28:03.510318  428523 config.go:182] Loaded profile config "cert-expiration-910187": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:28:03.510470  428523 config.go:182] Loaded profile config "stopped-upgrade-531202": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1030 19:28:03.510638  428523 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 19:28:03.555916  428523 out.go:177] * Using the kvm2 driver based on user configuration
	I1030 19:28:03.557185  428523 start.go:297] selected driver: kvm2
	I1030 19:28:03.557206  428523 start.go:901] validating driver "kvm2" against <nil>
	I1030 19:28:03.557222  428523 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 19:28:03.558277  428523 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 19:28:03.558371  428523 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19883-381834/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1030 19:28:03.577191  428523 install.go:137] /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1030 19:28:03.577260  428523 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1030 19:28:03.577616  428523 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1030 19:28:03.577660  428523 cni.go:84] Creating CNI manager for ""
	I1030 19:28:03.577728  428523 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:28:03.577739  428523 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1030 19:28:03.577813  428523 start.go:340] cluster config:
	{Name:kubernetes-upgrade-831845 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-831845 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:28:03.577949  428523 iso.go:125] acquiring lock: {Name:mk4a5115b605a7a5e9069193daa664d721189792 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 19:28:03.579828  428523 out.go:177] * Starting "kubernetes-upgrade-831845" primary control-plane node in "kubernetes-upgrade-831845" cluster
	I1030 19:28:03.581229  428523 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1030 19:28:03.581299  428523 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1030 19:28:03.581313  428523 cache.go:56] Caching tarball of preloaded images
	I1030 19:28:03.581402  428523 preload.go:172] Found /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1030 19:28:03.581417  428523 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1030 19:28:03.581541  428523 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kubernetes-upgrade-831845/config.json ...
	I1030 19:28:03.581564  428523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kubernetes-upgrade-831845/config.json: {Name:mk22c5d41e2e3f22697317401e3d0bc6583a1adf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:28:03.581742  428523 start.go:360] acquireMachinesLock for kubernetes-upgrade-831845: {Name:mk35e25f53fa8cfadb39ca0ecdccfc2b3fbe845b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 19:28:24.472603  428523 start.go:364] duration metric: took 20.89081254s to acquireMachinesLock for "kubernetes-upgrade-831845"
	I1030 19:28:24.472674  428523 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-831845 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-831845 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 19:28:24.472795  428523 start.go:125] createHost starting for "" (driver="kvm2")
	I1030 19:28:24.475812  428523 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1030 19:28:24.476000  428523 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:28:24.476056  428523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:28:24.496444  428523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43401
	I1030 19:28:24.496963  428523 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:28:24.497560  428523 main.go:141] libmachine: Using API Version  1
	I1030 19:28:24.497586  428523 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:28:24.497930  428523 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:28:24.498146  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetMachineName
	I1030 19:28:24.498301  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .DriverName
	I1030 19:28:24.498473  428523 start.go:159] libmachine.API.Create for "kubernetes-upgrade-831845" (driver="kvm2")
	I1030 19:28:24.498520  428523 client.go:168] LocalClient.Create starting
	I1030 19:28:24.498556  428523 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem
	I1030 19:28:24.498591  428523 main.go:141] libmachine: Decoding PEM data...
	I1030 19:28:24.498616  428523 main.go:141] libmachine: Parsing certificate...
	I1030 19:28:24.498684  428523 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem
	I1030 19:28:24.498709  428523 main.go:141] libmachine: Decoding PEM data...
	I1030 19:28:24.498725  428523 main.go:141] libmachine: Parsing certificate...
	I1030 19:28:24.498747  428523 main.go:141] libmachine: Running pre-create checks...
	I1030 19:28:24.498759  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .PreCreateCheck
	I1030 19:28:24.499093  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetConfigRaw
	I1030 19:28:24.499554  428523 main.go:141] libmachine: Creating machine...
	I1030 19:28:24.499573  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .Create
	I1030 19:28:24.499745  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Creating KVM machine...
	I1030 19:28:24.500970  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | found existing default KVM network
	I1030 19:28:24.502567  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | I1030 19:28:24.502352  428757 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:85:dc:c4} reservation:<nil>}
	I1030 19:28:24.504069  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | I1030 19:28:24.503986  428757 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002ba0c0}
	I1030 19:28:24.504101  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | created network xml: 
	I1030 19:28:24.504111  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | <network>
	I1030 19:28:24.504120  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG |   <name>mk-kubernetes-upgrade-831845</name>
	I1030 19:28:24.504132  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG |   <dns enable='no'/>
	I1030 19:28:24.504144  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG |   
	I1030 19:28:24.504154  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I1030 19:28:24.504167  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG |     <dhcp>
	I1030 19:28:24.504177  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I1030 19:28:24.504189  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG |     </dhcp>
	I1030 19:28:24.504199  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG |   </ip>
	I1030 19:28:24.504207  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG |   
	I1030 19:28:24.504215  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | </network>
	I1030 19:28:24.504224  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | 
	I1030 19:28:24.510140  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | trying to create private KVM network mk-kubernetes-upgrade-831845 192.168.50.0/24...
	I1030 19:28:24.582883  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | private KVM network mk-kubernetes-upgrade-831845 192.168.50.0/24 created
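Editor's note: the step above picks a free private /24 (192.168.50.0/24, since 192.168.39.0/24 is already taken) and creates the libvirt network mk-kubernetes-upgrade-831845 from the XML just printed. A minimal sketch of that creation step, assuming only that virsh is on PATH and can reach qemu:///system; this is an illustration, not minikube's own implementation (which talks to libvirt directly):

// Sketch: define and start the private libvirt network from the XML above via virsh.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

const networkXML = `<network>
  <name>mk-kubernetes-upgrade-831845</name>
  <dns enable='no'/>
  <ip address='192.168.50.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.50.2' end='192.168.50.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	// Write the network definition to a temporary file so virsh can read it.
	f, err := os.CreateTemp("", "mk-net-*.xml")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(networkXML); err != nil {
		panic(err)
	}
	f.Close()

	// Define, then start, the network against the system libvirt daemon.
	for _, args := range [][]string{
		{"--connect", "qemu:///system", "net-define", f.Name()},
		{"--connect", "qemu:///system", "net-start", "mk-kubernetes-upgrade-831845"},
	} {
		out, err := exec.Command("virsh", args...).CombinedOutput()
		fmt.Printf("virsh %v: %s\n", args, out)
		if err != nil {
			panic(err)
		}
	}
}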
	I1030 19:28:24.583076  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Setting up store path in /home/jenkins/minikube-integration/19883-381834/.minikube/machines/kubernetes-upgrade-831845 ...
	I1030 19:28:24.583105  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Building disk image from file:///home/jenkins/minikube-integration/19883-381834/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1030 19:28:24.583120  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | I1030 19:28:24.583029  428757 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 19:28:24.583251  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Downloading /home/jenkins/minikube-integration/19883-381834/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19883-381834/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1030 19:28:24.911286  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | I1030 19:28:24.911112  428757 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/kubernetes-upgrade-831845/id_rsa...
	I1030 19:28:25.060854  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | I1030 19:28:25.060686  428757 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/kubernetes-upgrade-831845/kubernetes-upgrade-831845.rawdisk...
	I1030 19:28:25.060890  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | Writing magic tar header
	I1030 19:28:25.060912  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | Writing SSH key tar header
	I1030 19:28:25.060926  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | I1030 19:28:25.060820  428757 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19883-381834/.minikube/machines/kubernetes-upgrade-831845 ...
	I1030 19:28:25.060942  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/kubernetes-upgrade-831845
	I1030 19:28:25.060983  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube/machines/kubernetes-upgrade-831845 (perms=drwx------)
	I1030 19:28:25.061008  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube/machines (perms=drwxr-xr-x)
	I1030 19:28:25.061020  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube/machines
	I1030 19:28:25.061034  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 19:28:25.061041  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834
	I1030 19:28:25.061049  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1030 19:28:25.061054  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | Checking permissions on dir: /home/jenkins
	I1030 19:28:25.061064  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | Checking permissions on dir: /home
	I1030 19:28:25.061075  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | Skipping /home - not owner
	I1030 19:28:25.061090  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube (perms=drwxr-xr-x)
	I1030 19:28:25.061105  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834 (perms=drwxrwxr-x)
	I1030 19:28:25.061113  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1030 19:28:25.061120  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1030 19:28:25.061126  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Creating domain...
	I1030 19:28:25.062316  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) define libvirt domain using xml: 
	I1030 19:28:25.062344  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) <domain type='kvm'>
	I1030 19:28:25.062356  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)   <name>kubernetes-upgrade-831845</name>
	I1030 19:28:25.062375  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)   <memory unit='MiB'>2200</memory>
	I1030 19:28:25.062387  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)   <vcpu>2</vcpu>
	I1030 19:28:25.062403  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)   <features>
	I1030 19:28:25.062411  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)     <acpi/>
	I1030 19:28:25.062416  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)     <apic/>
	I1030 19:28:25.062427  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)     <pae/>
	I1030 19:28:25.062434  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)     
	I1030 19:28:25.062439  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)   </features>
	I1030 19:28:25.062446  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)   <cpu mode='host-passthrough'>
	I1030 19:28:25.062452  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)   
	I1030 19:28:25.062461  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)   </cpu>
	I1030 19:28:25.062507  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)   <os>
	I1030 19:28:25.062537  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)     <type>hvm</type>
	I1030 19:28:25.062554  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)     <boot dev='cdrom'/>
	I1030 19:28:25.062566  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)     <boot dev='hd'/>
	I1030 19:28:25.062598  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)     <bootmenu enable='no'/>
	I1030 19:28:25.062623  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)   </os>
	I1030 19:28:25.062636  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)   <devices>
	I1030 19:28:25.062644  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)     <disk type='file' device='cdrom'>
	I1030 19:28:25.062661  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)       <source file='/home/jenkins/minikube-integration/19883-381834/.minikube/machines/kubernetes-upgrade-831845/boot2docker.iso'/>
	I1030 19:28:25.062672  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)       <target dev='hdc' bus='scsi'/>
	I1030 19:28:25.062684  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)       <readonly/>
	I1030 19:28:25.062694  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)     </disk>
	I1030 19:28:25.062709  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)     <disk type='file' device='disk'>
	I1030 19:28:25.062728  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1030 19:28:25.062752  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)       <source file='/home/jenkins/minikube-integration/19883-381834/.minikube/machines/kubernetes-upgrade-831845/kubernetes-upgrade-831845.rawdisk'/>
	I1030 19:28:25.062764  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)       <target dev='hda' bus='virtio'/>
	I1030 19:28:25.062775  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)     </disk>
	I1030 19:28:25.062795  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)     <interface type='network'>
	I1030 19:28:25.062812  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)       <source network='mk-kubernetes-upgrade-831845'/>
	I1030 19:28:25.062825  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)       <model type='virtio'/>
	I1030 19:28:25.062836  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)     </interface>
	I1030 19:28:25.062849  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)     <interface type='network'>
	I1030 19:28:25.062859  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)       <source network='default'/>
	I1030 19:28:25.062868  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)       <model type='virtio'/>
	I1030 19:28:25.062877  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)     </interface>
	I1030 19:28:25.062887  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)     <serial type='pty'>
	I1030 19:28:25.062896  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)       <target port='0'/>
	I1030 19:28:25.062904  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)     </serial>
	I1030 19:28:25.062914  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)     <console type='pty'>
	I1030 19:28:25.062924  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)       <target type='serial' port='0'/>
	I1030 19:28:25.062937  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)     </console>
	I1030 19:28:25.062960  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)     <rng model='virtio'>
	I1030 19:28:25.062977  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)       <backend model='random'>/dev/random</backend>
	I1030 19:28:25.063007  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)     </rng>
	I1030 19:28:25.063024  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)     
	I1030 19:28:25.063038  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)     
	I1030 19:28:25.063052  428523 main.go:141] libmachine: (kubernetes-upgrade-831845)   </devices>
	I1030 19:28:25.063061  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) </domain>
	I1030 19:28:25.063070  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) 
	I1030 19:28:25.067412  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined MAC address 52:54:00:51:6c:d5 in network default
	I1030 19:28:25.068223  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Ensuring networks are active...
	I1030 19:28:25.068245  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:25.069087  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Ensuring network default is active
	I1030 19:28:25.069375  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Ensuring network mk-kubernetes-upgrade-831845 is active
	I1030 19:28:25.069990  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Getting domain xml...
	I1030 19:28:25.070799  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Creating domain...
	I1030 19:28:26.372785  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Waiting to get IP...
	I1030 19:28:26.373760  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:26.374371  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | unable to find current IP address of domain kubernetes-upgrade-831845 in network mk-kubernetes-upgrade-831845
	I1030 19:28:26.374438  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | I1030 19:28:26.374334  428757 retry.go:31] will retry after 293.57743ms: waiting for machine to come up
	I1030 19:28:26.670010  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:26.670580  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | unable to find current IP address of domain kubernetes-upgrade-831845 in network mk-kubernetes-upgrade-831845
	I1030 19:28:26.670609  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | I1030 19:28:26.670544  428757 retry.go:31] will retry after 333.469129ms: waiting for machine to come up
	I1030 19:28:27.006003  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:27.006519  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | unable to find current IP address of domain kubernetes-upgrade-831845 in network mk-kubernetes-upgrade-831845
	I1030 19:28:27.006551  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | I1030 19:28:27.006450  428757 retry.go:31] will retry after 473.812816ms: waiting for machine to come up
	I1030 19:28:27.482037  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:27.482523  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | unable to find current IP address of domain kubernetes-upgrade-831845 in network mk-kubernetes-upgrade-831845
	I1030 19:28:27.482555  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | I1030 19:28:27.482467  428757 retry.go:31] will retry after 529.649941ms: waiting for machine to come up
	I1030 19:28:28.014417  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:28.014944  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | unable to find current IP address of domain kubernetes-upgrade-831845 in network mk-kubernetes-upgrade-831845
	I1030 19:28:28.014970  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | I1030 19:28:28.014895  428757 retry.go:31] will retry after 749.045935ms: waiting for machine to come up
	I1030 19:28:28.765225  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:28.765655  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | unable to find current IP address of domain kubernetes-upgrade-831845 in network mk-kubernetes-upgrade-831845
	I1030 19:28:28.765675  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | I1030 19:28:28.765626  428757 retry.go:31] will retry after 947.27934ms: waiting for machine to come up
	I1030 19:28:29.714564  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:29.715002  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | unable to find current IP address of domain kubernetes-upgrade-831845 in network mk-kubernetes-upgrade-831845
	I1030 19:28:29.715027  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | I1030 19:28:29.714966  428757 retry.go:31] will retry after 870.691388ms: waiting for machine to come up
	I1030 19:28:30.587186  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:30.587721  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | unable to find current IP address of domain kubernetes-upgrade-831845 in network mk-kubernetes-upgrade-831845
	I1030 19:28:30.587747  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | I1030 19:28:30.587592  428757 retry.go:31] will retry after 1.177334244s: waiting for machine to come up
	I1030 19:28:31.766572  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:31.766987  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | unable to find current IP address of domain kubernetes-upgrade-831845 in network mk-kubernetes-upgrade-831845
	I1030 19:28:31.767018  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | I1030 19:28:31.766919  428757 retry.go:31] will retry after 1.284413598s: waiting for machine to come up
	I1030 19:28:33.053294  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:33.053657  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | unable to find current IP address of domain kubernetes-upgrade-831845 in network mk-kubernetes-upgrade-831845
	I1030 19:28:33.053682  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | I1030 19:28:33.053600  428757 retry.go:31] will retry after 1.733480168s: waiting for machine to come up
	I1030 19:28:34.789256  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:34.789839  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | unable to find current IP address of domain kubernetes-upgrade-831845 in network mk-kubernetes-upgrade-831845
	I1030 19:28:34.789871  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | I1030 19:28:34.789772  428757 retry.go:31] will retry after 2.789574152s: waiting for machine to come up
	I1030 19:28:37.582797  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:37.583227  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | unable to find current IP address of domain kubernetes-upgrade-831845 in network mk-kubernetes-upgrade-831845
	I1030 19:28:37.583252  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | I1030 19:28:37.583195  428757 retry.go:31] will retry after 2.903828378s: waiting for machine to come up
	I1030 19:28:40.489018  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:40.489545  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | unable to find current IP address of domain kubernetes-upgrade-831845 in network mk-kubernetes-upgrade-831845
	I1030 19:28:40.489572  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | I1030 19:28:40.489497  428757 retry.go:31] will retry after 2.731204121s: waiting for machine to come up
	I1030 19:28:43.224289  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:43.224761  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | unable to find current IP address of domain kubernetes-upgrade-831845 in network mk-kubernetes-upgrade-831845
	I1030 19:28:43.224789  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | I1030 19:28:43.224715  428757 retry.go:31] will retry after 4.484225454s: waiting for machine to come up
	I1030 19:28:47.712930  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:47.713306  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Found IP for machine: 192.168.50.90
	I1030 19:28:47.713356  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has current primary IP address 192.168.50.90 and MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:47.713366  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Reserving static IP address...
	I1030 19:28:47.713639  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-831845", mac: "52:54:00:52:0a:2e", ip: "192.168.50.90"} in network mk-kubernetes-upgrade-831845
	I1030 19:28:47.787140  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Reserved static IP address: 192.168.50.90
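Editor's note: the retries above poll for the new domain's DHCP lease with growing delays until the IP 192.168.50.90 appears for MAC 52:54:00:52:0a:2e, and the address is then reserved. A minimal sketch of such a poll, assuming virsh access to the same network; illustrative only, not minikube's retry code:

// Sketch: poll libvirt's DHCP leases for a MAC address, backing off between attempts.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// lookupIP scans `virsh net-dhcp-leases <network>` output for a lease matching mac.
func lookupIP(network, mac string) (string, bool) {
	out, err := exec.Command("virsh", "--connect", "qemu:///system",
		"net-dhcp-leases", network).Output()
	if err != nil {
		return "", false
	}
	for _, line := range strings.Split(string(out), "\n") {
		if !strings.Contains(line, mac) {
			continue
		}
		// Lease lines carry the address as "<ip>/<prefix>"; pick that field.
		for _, field := range strings.Fields(line) {
			if strings.Contains(field, "/") && strings.Count(field, ".") == 3 {
				return strings.SplitN(field, "/", 2)[0], true
			}
		}
	}
	return "", false
}

func main() {
	delay := 300 * time.Millisecond
	for attempt := 1; attempt <= 15; attempt++ {
		if ip, ok := lookupIP("mk-kubernetes-upgrade-831845", "52:54:00:52:0a:2e"); ok {
			fmt.Println("found IP:", ip)
			return
		}
		fmt.Printf("attempt %d: no lease yet, retrying after %s\n", attempt, delay)
		time.Sleep(delay)
		delay += delay / 2 // grow the wait, roughly like the increasing retries in the log
	}
	fmt.Println("gave up waiting for an IP")
}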
	I1030 19:28:47.787174  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Waiting for SSH to be available...
	I1030 19:28:47.787183  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | Getting to WaitForSSH function...
	I1030 19:28:47.789936  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:47.790349  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:52:0a:2e", ip: ""} in network mk-kubernetes-upgrade-831845
	I1030 19:28:47.790383  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | unable to find defined IP address of network mk-kubernetes-upgrade-831845 interface with MAC address 52:54:00:52:0a:2e
	I1030 19:28:47.790513  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | Using SSH client type: external
	I1030 19:28:47.790552  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/kubernetes-upgrade-831845/id_rsa (-rw-------)
	I1030 19:28:47.790590  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/kubernetes-upgrade-831845/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 19:28:47.790613  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | About to run SSH command:
	I1030 19:28:47.790628  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | exit 0
	I1030 19:28:47.794335  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | SSH cmd err, output: exit status 255: 
	I1030 19:28:47.794360  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1030 19:28:47.794370  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | command : exit 0
	I1030 19:28:47.794378  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | err     : exit status 255
	I1030 19:28:47.794388  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | output  : 
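Editor's note: the first WaitForSSH attempt above fails with exit status 255 because sshd inside the guest is not up yet, so the check is simply repeated a few seconds later. A minimal sketch of one such probe, reusing the key path and a subset of the ssh options shown in the log; a hypothetical standalone program, not the libmachine code:

// Sketch: check SSH reachability by running `exit 0` on the guest over external ssh.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	key := "/home/jenkins/minikube-integration/19883-381834/.minikube/machines/kubernetes-upgrade-831845/id_rsa"
	cmd := exec.Command("ssh",
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", key,
		"-p", "22",
		"docker@192.168.50.90",
		"exit 0")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// A non-zero exit (e.g. status 255 as in the attempt above) usually means
		// sshd is not accepting connections yet; a caller would retry.
		fmt.Printf("ssh not ready: %v (%s)\n", err, out)
		return
	}
	fmt.Println("ssh is available")
}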
	I1030 19:28:50.794668  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | Getting to WaitForSSH function...
	I1030 19:28:50.796893  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:50.797203  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0a:2e", ip: ""} in network mk-kubernetes-upgrade-831845: {Iface:virbr2 ExpiryTime:2024-10-30 20:28:39 +0000 UTC Type:0 Mac:52:54:00:52:0a:2e Iaid: IPaddr:192.168.50.90 Prefix:24 Hostname:kubernetes-upgrade-831845 Clientid:01:52:54:00:52:0a:2e}
	I1030 19:28:50.797249  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined IP address 192.168.50.90 and MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:50.797400  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | Using SSH client type: external
	I1030 19:28:50.797418  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/kubernetes-upgrade-831845/id_rsa (-rw-------)
	I1030 19:28:50.797493  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.90 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/kubernetes-upgrade-831845/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 19:28:50.797532  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | About to run SSH command:
	I1030 19:28:50.797558  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | exit 0
	I1030 19:28:50.922640  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | SSH cmd err, output: <nil>: 
	I1030 19:28:50.922968  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) KVM machine creation complete!
	I1030 19:28:50.923302  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetConfigRaw
	I1030 19:28:50.923847  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .DriverName
	I1030 19:28:50.924051  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .DriverName
	I1030 19:28:50.924198  428523 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1030 19:28:50.924213  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetState
	I1030 19:28:50.925409  428523 main.go:141] libmachine: Detecting operating system of created instance...
	I1030 19:28:50.925422  428523 main.go:141] libmachine: Waiting for SSH to be available...
	I1030 19:28:50.925428  428523 main.go:141] libmachine: Getting to WaitForSSH function...
	I1030 19:28:50.925434  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHHostname
	I1030 19:28:50.927949  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:50.928268  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0a:2e", ip: ""} in network mk-kubernetes-upgrade-831845: {Iface:virbr2 ExpiryTime:2024-10-30 20:28:39 +0000 UTC Type:0 Mac:52:54:00:52:0a:2e Iaid: IPaddr:192.168.50.90 Prefix:24 Hostname:kubernetes-upgrade-831845 Clientid:01:52:54:00:52:0a:2e}
	I1030 19:28:50.928296  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined IP address 192.168.50.90 and MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:50.928391  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHPort
	I1030 19:28:50.928580  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHKeyPath
	I1030 19:28:50.928722  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHKeyPath
	I1030 19:28:50.928865  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHUsername
	I1030 19:28:50.929046  428523 main.go:141] libmachine: Using SSH client type: native
	I1030 19:28:50.929251  428523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.90 22 <nil> <nil>}
	I1030 19:28:50.929262  428523 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1030 19:28:51.033577  428523 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 19:28:51.033603  428523 main.go:141] libmachine: Detecting the provisioner...
	I1030 19:28:51.033611  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHHostname
	I1030 19:28:51.036188  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:51.036621  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0a:2e", ip: ""} in network mk-kubernetes-upgrade-831845: {Iface:virbr2 ExpiryTime:2024-10-30 20:28:39 +0000 UTC Type:0 Mac:52:54:00:52:0a:2e Iaid: IPaddr:192.168.50.90 Prefix:24 Hostname:kubernetes-upgrade-831845 Clientid:01:52:54:00:52:0a:2e}
	I1030 19:28:51.036643  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined IP address 192.168.50.90 and MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:51.036838  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHPort
	I1030 19:28:51.037039  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHKeyPath
	I1030 19:28:51.037223  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHKeyPath
	I1030 19:28:51.037384  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHUsername
	I1030 19:28:51.037517  428523 main.go:141] libmachine: Using SSH client type: native
	I1030 19:28:51.037716  428523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.90 22 <nil> <nil>}
	I1030 19:28:51.037731  428523 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1030 19:28:51.143157  428523 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1030 19:28:51.143258  428523 main.go:141] libmachine: found compatible host: buildroot
	I1030 19:28:51.143271  428523 main.go:141] libmachine: Provisioning with buildroot...
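Editor's note: provisioner detection above is driven by the `cat /etc/os-release` output; the ID=buildroot line is what makes libmachine choose the buildroot provisioner. A minimal sketch of that check, with the file content inlined as a string for brevity; illustrative, not the actual detector:

// Sketch: pick the provisioner from the ID= field of /etc/os-release.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

const osRelease = `NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"`

func main() {
	sc := bufio.NewScanner(strings.NewReader(osRelease))
	for sc.Scan() {
		if id, ok := strings.CutPrefix(sc.Text(), "ID="); ok {
			fmt.Println("detected provisioner:", id) // prints "buildroot"
		}
	}
}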
	I1030 19:28:51.143279  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetMachineName
	I1030 19:28:51.143545  428523 buildroot.go:166] provisioning hostname "kubernetes-upgrade-831845"
	I1030 19:28:51.143576  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetMachineName
	I1030 19:28:51.143771  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHHostname
	I1030 19:28:51.146306  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:51.146738  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0a:2e", ip: ""} in network mk-kubernetes-upgrade-831845: {Iface:virbr2 ExpiryTime:2024-10-30 20:28:39 +0000 UTC Type:0 Mac:52:54:00:52:0a:2e Iaid: IPaddr:192.168.50.90 Prefix:24 Hostname:kubernetes-upgrade-831845 Clientid:01:52:54:00:52:0a:2e}
	I1030 19:28:51.146774  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined IP address 192.168.50.90 and MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:51.146925  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHPort
	I1030 19:28:51.147102  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHKeyPath
	I1030 19:28:51.147253  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHKeyPath
	I1030 19:28:51.147382  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHUsername
	I1030 19:28:51.147545  428523 main.go:141] libmachine: Using SSH client type: native
	I1030 19:28:51.147729  428523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.90 22 <nil> <nil>}
	I1030 19:28:51.147741  428523 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-831845 && echo "kubernetes-upgrade-831845" | sudo tee /etc/hostname
	I1030 19:28:51.264925  428523 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-831845
	
	I1030 19:28:51.264955  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHHostname
	I1030 19:28:51.267811  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:51.268142  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0a:2e", ip: ""} in network mk-kubernetes-upgrade-831845: {Iface:virbr2 ExpiryTime:2024-10-30 20:28:39 +0000 UTC Type:0 Mac:52:54:00:52:0a:2e Iaid: IPaddr:192.168.50.90 Prefix:24 Hostname:kubernetes-upgrade-831845 Clientid:01:52:54:00:52:0a:2e}
	I1030 19:28:51.268166  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined IP address 192.168.50.90 and MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:51.268322  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHPort
	I1030 19:28:51.268556  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHKeyPath
	I1030 19:28:51.268715  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHKeyPath
	I1030 19:28:51.268858  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHUsername
	I1030 19:28:51.268995  428523 main.go:141] libmachine: Using SSH client type: native
	I1030 19:28:51.269208  428523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.90 22 <nil> <nil>}
	I1030 19:28:51.269233  428523 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-831845' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-831845/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-831845' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 19:28:51.385206  428523 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 19:28:51.385236  428523 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 19:28:51.385256  428523 buildroot.go:174] setting up certificates
	I1030 19:28:51.385266  428523 provision.go:84] configureAuth start
	I1030 19:28:51.385275  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetMachineName
	I1030 19:28:51.385546  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetIP
	I1030 19:28:51.388032  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:51.388390  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0a:2e", ip: ""} in network mk-kubernetes-upgrade-831845: {Iface:virbr2 ExpiryTime:2024-10-30 20:28:39 +0000 UTC Type:0 Mac:52:54:00:52:0a:2e Iaid: IPaddr:192.168.50.90 Prefix:24 Hostname:kubernetes-upgrade-831845 Clientid:01:52:54:00:52:0a:2e}
	I1030 19:28:51.388418  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined IP address 192.168.50.90 and MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:51.388580  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHHostname
	I1030 19:28:51.390870  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:51.391199  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0a:2e", ip: ""} in network mk-kubernetes-upgrade-831845: {Iface:virbr2 ExpiryTime:2024-10-30 20:28:39 +0000 UTC Type:0 Mac:52:54:00:52:0a:2e Iaid: IPaddr:192.168.50.90 Prefix:24 Hostname:kubernetes-upgrade-831845 Clientid:01:52:54:00:52:0a:2e}
	I1030 19:28:51.391230  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined IP address 192.168.50.90 and MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:51.391412  428523 provision.go:143] copyHostCerts
	I1030 19:28:51.391493  428523 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem, removing ...
	I1030 19:28:51.391508  428523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 19:28:51.391565  428523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 19:28:51.391652  428523 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem, removing ...
	I1030 19:28:51.391662  428523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 19:28:51.391688  428523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 19:28:51.391753  428523 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem, removing ...
	I1030 19:28:51.391767  428523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 19:28:51.391796  428523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 19:28:51.391875  428523 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-831845 san=[127.0.0.1 192.168.50.90 kubernetes-upgrade-831845 localhost minikube]
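Editor's note: configureAuth issues a server certificate signed by the minikube CA with the SANs listed above (127.0.0.1, 192.168.50.90, kubernetes-upgrade-831845, localhost, minikube). A minimal sketch of issuing such a certificate with Go's crypto/x509, assuming a freshly generated in-memory CA in place of the ca.pem/ca-key.pem files the log actually reads; errors are elided for brevity:

// Sketch: issue a server cert with the SANs from the log, signed by a stand-in CA.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA key/cert; the real flow loads certs/ca.pem and ca-key.pem instead.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs listed in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-831845"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"kubernetes-upgrade-831845", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.90")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}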
	I1030 19:28:51.500924  428523 provision.go:177] copyRemoteCerts
	I1030 19:28:51.501004  428523 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 19:28:51.501038  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHHostname
	I1030 19:28:51.503581  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:51.503918  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0a:2e", ip: ""} in network mk-kubernetes-upgrade-831845: {Iface:virbr2 ExpiryTime:2024-10-30 20:28:39 +0000 UTC Type:0 Mac:52:54:00:52:0a:2e Iaid: IPaddr:192.168.50.90 Prefix:24 Hostname:kubernetes-upgrade-831845 Clientid:01:52:54:00:52:0a:2e}
	I1030 19:28:51.503951  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined IP address 192.168.50.90 and MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:51.504099  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHPort
	I1030 19:28:51.504281  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHKeyPath
	I1030 19:28:51.504457  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHUsername
	I1030 19:28:51.504642  428523 sshutil.go:53] new ssh client: &{IP:192.168.50.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/kubernetes-upgrade-831845/id_rsa Username:docker}
	I1030 19:28:51.584577  428523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 19:28:51.608347  428523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1030 19:28:51.631008  428523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1030 19:28:51.653708  428523 provision.go:87] duration metric: took 268.427563ms to configureAuth
	I1030 19:28:51.653735  428523 buildroot.go:189] setting minikube options for container-runtime
	I1030 19:28:51.653895  428523 config.go:182] Loaded profile config "kubernetes-upgrade-831845": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1030 19:28:51.653967  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHHostname
	I1030 19:28:51.656637  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:51.656965  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0a:2e", ip: ""} in network mk-kubernetes-upgrade-831845: {Iface:virbr2 ExpiryTime:2024-10-30 20:28:39 +0000 UTC Type:0 Mac:52:54:00:52:0a:2e Iaid: IPaddr:192.168.50.90 Prefix:24 Hostname:kubernetes-upgrade-831845 Clientid:01:52:54:00:52:0a:2e}
	I1030 19:28:51.657022  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined IP address 192.168.50.90 and MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:51.657135  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHPort
	I1030 19:28:51.657340  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHKeyPath
	I1030 19:28:51.657503  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHKeyPath
	I1030 19:28:51.657663  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHUsername
	I1030 19:28:51.657824  428523 main.go:141] libmachine: Using SSH client type: native
	I1030 19:28:51.658074  428523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.90 22 <nil> <nil>}
	I1030 19:28:51.658096  428523 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 19:28:51.880828  428523 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 19:28:51.880855  428523 main.go:141] libmachine: Checking connection to Docker...
	I1030 19:28:51.880867  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetURL
	I1030 19:28:51.882104  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | Using libvirt version 6000000
	I1030 19:28:51.884425  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:51.884767  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0a:2e", ip: ""} in network mk-kubernetes-upgrade-831845: {Iface:virbr2 ExpiryTime:2024-10-30 20:28:39 +0000 UTC Type:0 Mac:52:54:00:52:0a:2e Iaid: IPaddr:192.168.50.90 Prefix:24 Hostname:kubernetes-upgrade-831845 Clientid:01:52:54:00:52:0a:2e}
	I1030 19:28:51.884800  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined IP address 192.168.50.90 and MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:51.884899  428523 main.go:141] libmachine: Docker is up and running!
	I1030 19:28:51.884914  428523 main.go:141] libmachine: Reticulating splines...
	I1030 19:28:51.884923  428523 client.go:171] duration metric: took 27.386390135s to LocalClient.Create
	I1030 19:28:51.884950  428523 start.go:167] duration metric: took 27.386480024s to libmachine.API.Create "kubernetes-upgrade-831845"
	I1030 19:28:51.884960  428523 start.go:293] postStartSetup for "kubernetes-upgrade-831845" (driver="kvm2")
	I1030 19:28:51.884972  428523 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 19:28:51.884998  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .DriverName
	I1030 19:28:51.885240  428523 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 19:28:51.885270  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHHostname
	I1030 19:28:51.887569  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:51.887880  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0a:2e", ip: ""} in network mk-kubernetes-upgrade-831845: {Iface:virbr2 ExpiryTime:2024-10-30 20:28:39 +0000 UTC Type:0 Mac:52:54:00:52:0a:2e Iaid: IPaddr:192.168.50.90 Prefix:24 Hostname:kubernetes-upgrade-831845 Clientid:01:52:54:00:52:0a:2e}
	I1030 19:28:51.887904  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined IP address 192.168.50.90 and MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:51.888093  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHPort
	I1030 19:28:51.888274  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHKeyPath
	I1030 19:28:51.888452  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHUsername
	I1030 19:28:51.888584  428523 sshutil.go:53] new ssh client: &{IP:192.168.50.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/kubernetes-upgrade-831845/id_rsa Username:docker}
	I1030 19:28:51.969009  428523 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 19:28:51.973247  428523 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 19:28:51.973274  428523 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 19:28:51.973354  428523 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 19:28:51.973454  428523 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 19:28:51.973588  428523 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 19:28:51.982714  428523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:28:52.006211  428523 start.go:296] duration metric: took 121.234319ms for postStartSetup
	I1030 19:28:52.006313  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetConfigRaw
	I1030 19:28:52.006942  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetIP
	I1030 19:28:52.009792  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:52.010159  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0a:2e", ip: ""} in network mk-kubernetes-upgrade-831845: {Iface:virbr2 ExpiryTime:2024-10-30 20:28:39 +0000 UTC Type:0 Mac:52:54:00:52:0a:2e Iaid: IPaddr:192.168.50.90 Prefix:24 Hostname:kubernetes-upgrade-831845 Clientid:01:52:54:00:52:0a:2e}
	I1030 19:28:52.010191  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined IP address 192.168.50.90 and MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:52.010416  428523 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kubernetes-upgrade-831845/config.json ...
	I1030 19:28:52.010628  428523 start.go:128] duration metric: took 27.537817956s to createHost
	I1030 19:28:52.010652  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHHostname
	I1030 19:28:52.012710  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:52.013002  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0a:2e", ip: ""} in network mk-kubernetes-upgrade-831845: {Iface:virbr2 ExpiryTime:2024-10-30 20:28:39 +0000 UTC Type:0 Mac:52:54:00:52:0a:2e Iaid: IPaddr:192.168.50.90 Prefix:24 Hostname:kubernetes-upgrade-831845 Clientid:01:52:54:00:52:0a:2e}
	I1030 19:28:52.013031  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined IP address 192.168.50.90 and MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:52.013194  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHPort
	I1030 19:28:52.013402  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHKeyPath
	I1030 19:28:52.013579  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHKeyPath
	I1030 19:28:52.013725  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHUsername
	I1030 19:28:52.013920  428523 main.go:141] libmachine: Using SSH client type: native
	I1030 19:28:52.014111  428523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.90 22 <nil> <nil>}
	I1030 19:28:52.014122  428523 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 19:28:52.119362  428523 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730316532.099779484
	
	I1030 19:28:52.119385  428523 fix.go:216] guest clock: 1730316532.099779484
	I1030 19:28:52.119394  428523 fix.go:229] Guest: 2024-10-30 19:28:52.099779484 +0000 UTC Remote: 2024-10-30 19:28:52.010641466 +0000 UTC m=+48.575897702 (delta=89.138018ms)
	I1030 19:28:52.119423  428523 fix.go:200] guest clock delta is within tolerance: 89.138018ms
	I1030 19:28:52.119430  428523 start.go:83] releasing machines lock for "kubernetes-upgrade-831845", held for 27.646790555s
	I1030 19:28:52.119460  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .DriverName
	I1030 19:28:52.119792  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetIP
	I1030 19:28:52.122509  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:52.122845  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0a:2e", ip: ""} in network mk-kubernetes-upgrade-831845: {Iface:virbr2 ExpiryTime:2024-10-30 20:28:39 +0000 UTC Type:0 Mac:52:54:00:52:0a:2e Iaid: IPaddr:192.168.50.90 Prefix:24 Hostname:kubernetes-upgrade-831845 Clientid:01:52:54:00:52:0a:2e}
	I1030 19:28:52.122880  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined IP address 192.168.50.90 and MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:52.123021  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .DriverName
	I1030 19:28:52.123600  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .DriverName
	I1030 19:28:52.123788  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .DriverName
	I1030 19:28:52.123907  428523 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 19:28:52.123948  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHHostname
	I1030 19:28:52.124025  428523 ssh_runner.go:195] Run: cat /version.json
	I1030 19:28:52.124053  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHHostname
	I1030 19:28:52.126867  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:52.127031  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:52.127310  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0a:2e", ip: ""} in network mk-kubernetes-upgrade-831845: {Iface:virbr2 ExpiryTime:2024-10-30 20:28:39 +0000 UTC Type:0 Mac:52:54:00:52:0a:2e Iaid: IPaddr:192.168.50.90 Prefix:24 Hostname:kubernetes-upgrade-831845 Clientid:01:52:54:00:52:0a:2e}
	I1030 19:28:52.127337  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined IP address 192.168.50.90 and MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:52.127417  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0a:2e", ip: ""} in network mk-kubernetes-upgrade-831845: {Iface:virbr2 ExpiryTime:2024-10-30 20:28:39 +0000 UTC Type:0 Mac:52:54:00:52:0a:2e Iaid: IPaddr:192.168.50.90 Prefix:24 Hostname:kubernetes-upgrade-831845 Clientid:01:52:54:00:52:0a:2e}
	I1030 19:28:52.127439  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined IP address 192.168.50.90 and MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:52.127604  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHPort
	I1030 19:28:52.127814  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHKeyPath
	I1030 19:28:52.127816  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHPort
	I1030 19:28:52.128001  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHKeyPath
	I1030 19:28:52.128006  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHUsername
	I1030 19:28:52.128224  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHUsername
	I1030 19:28:52.128225  428523 sshutil.go:53] new ssh client: &{IP:192.168.50.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/kubernetes-upgrade-831845/id_rsa Username:docker}
	I1030 19:28:52.128366  428523 sshutil.go:53] new ssh client: &{IP:192.168.50.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/kubernetes-upgrade-831845/id_rsa Username:docker}
	I1030 19:28:52.235331  428523 ssh_runner.go:195] Run: systemctl --version
	I1030 19:28:52.241629  428523 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 19:28:52.403915  428523 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 19:28:52.410363  428523 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 19:28:52.410456  428523 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 19:28:52.426758  428523 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 19:28:52.426787  428523 start.go:495] detecting cgroup driver to use...
	I1030 19:28:52.426859  428523 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 19:28:52.444537  428523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 19:28:52.458372  428523 docker.go:217] disabling cri-docker service (if available) ...
	I1030 19:28:52.458468  428523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 19:28:52.472432  428523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 19:28:52.486153  428523 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 19:28:52.601928  428523 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 19:28:52.756725  428523 docker.go:233] disabling docker service ...
	I1030 19:28:52.756815  428523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 19:28:52.780373  428523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 19:28:52.798814  428523 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 19:28:52.943071  428523 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 19:28:53.085601  428523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 19:28:53.103084  428523 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 19:28:53.123762  428523 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1030 19:28:53.123826  428523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:28:53.134725  428523 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 19:28:53.134820  428523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:28:53.145470  428523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:28:53.158011  428523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:28:53.169184  428523 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 19:28:53.179940  428523 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 19:28:53.189727  428523 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 19:28:53.189779  428523 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 19:28:53.206218  428523 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 19:28:53.217276  428523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:28:53.343223  428523 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1030 19:28:53.438659  428523 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 19:28:53.438760  428523 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 19:28:53.443887  428523 start.go:563] Will wait 60s for crictl version
	I1030 19:28:53.443947  428523 ssh_runner.go:195] Run: which crictl
	I1030 19:28:53.448153  428523 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 19:28:53.489544  428523 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1030 19:28:53.489700  428523 ssh_runner.go:195] Run: crio --version
	I1030 19:28:53.518065  428523 ssh_runner.go:195] Run: crio --version
	I1030 19:28:53.551167  428523 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1030 19:28:53.552676  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetIP
	I1030 19:28:53.555833  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:53.556214  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0a:2e", ip: ""} in network mk-kubernetes-upgrade-831845: {Iface:virbr2 ExpiryTime:2024-10-30 20:28:39 +0000 UTC Type:0 Mac:52:54:00:52:0a:2e Iaid: IPaddr:192.168.50.90 Prefix:24 Hostname:kubernetes-upgrade-831845 Clientid:01:52:54:00:52:0a:2e}
	I1030 19:28:53.556248  428523 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined IP address 192.168.50.90 and MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:28:53.556444  428523 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1030 19:28:53.560793  428523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:28:53.573794  428523 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-831845 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-831845 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.90 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1030 19:28:53.573894  428523 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1030 19:28:53.573949  428523 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:28:53.609186  428523 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1030 19:28:53.609257  428523 ssh_runner.go:195] Run: which lz4
	I1030 19:28:53.613403  428523 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1030 19:28:53.617647  428523 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1030 19:28:53.617678  428523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1030 19:28:55.266657  428523 crio.go:462] duration metric: took 1.653292066s to copy over tarball
	I1030 19:28:55.266771  428523 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1030 19:28:57.875070  428523 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.608261639s)
	I1030 19:28:57.875101  428523 crio.go:469] duration metric: took 2.608396045s to extract the tarball
	I1030 19:28:57.875111  428523 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1030 19:28:57.920467  428523 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:28:57.968650  428523 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1030 19:28:57.968682  428523 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1030 19:28:57.968757  428523 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:28:57.968778  428523 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:28:57.968808  428523 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1030 19:28:57.968838  428523 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1030 19:28:57.968836  428523 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:28:57.968873  428523 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1030 19:28:57.968880  428523 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:28:57.968870  428523 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:28:57.970265  428523 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1030 19:28:57.970292  428523 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:28:57.970273  428523 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:28:57.970446  428523 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:28:57.970542  428523 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1030 19:28:57.970588  428523 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:28:57.970588  428523 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:28:57.970674  428523 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1030 19:28:58.165261  428523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1030 19:28:58.169644  428523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1030 19:28:58.175936  428523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:28:58.191500  428523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:28:58.237283  428523 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1030 19:28:58.237326  428523 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1030 19:28:58.237375  428523 ssh_runner.go:195] Run: which crictl
	I1030 19:28:58.240802  428523 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1030 19:28:58.240846  428523 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1030 19:28:58.240889  428523 ssh_runner.go:195] Run: which crictl
	I1030 19:28:58.263165  428523 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1030 19:28:58.263219  428523 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:28:58.263269  428523 ssh_runner.go:195] Run: which crictl
	I1030 19:28:58.276896  428523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1030 19:28:58.276936  428523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1030 19:28:58.276936  428523 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1030 19:28:58.276965  428523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:28:58.276975  428523 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:28:58.277011  428523 ssh_runner.go:195] Run: which crictl
	I1030 19:28:58.301880  428523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:28:58.313586  428523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:28:58.319265  428523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1030 19:28:58.372437  428523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1030 19:28:58.372467  428523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:28:58.372544  428523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:28:58.372569  428523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1030 19:28:58.416344  428523 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1030 19:28:58.416396  428523 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:28:58.416455  428523 ssh_runner.go:195] Run: which crictl
	I1030 19:28:58.487516  428523 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1030 19:28:58.487570  428523 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:28:58.487625  428523 ssh_runner.go:195] Run: which crictl
	I1030 19:28:58.504886  428523 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1030 19:28:58.504937  428523 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1030 19:28:58.504978  428523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1030 19:28:58.504990  428523 ssh_runner.go:195] Run: which crictl
	I1030 19:28:58.510920  428523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:28:58.510971  428523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1030 19:28:58.511053  428523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:28:58.511081  428523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:28:58.511162  428523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:28:58.517231  428523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1030 19:28:58.643727  428523 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1030 19:28:58.643813  428523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:28:58.643907  428523 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1030 19:28:58.643929  428523 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1030 19:28:58.657274  428523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:28:58.657295  428523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:28:58.657389  428523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1030 19:28:58.725886  428523 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1030 19:28:58.740665  428523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:28:58.740699  428523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:28:58.752106  428523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1030 19:28:58.808513  428523 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1030 19:28:58.808939  428523 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1030 19:28:58.821976  428523 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1030 19:29:00.217524  428523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:29:00.366778  428523 cache_images.go:92] duration metric: took 2.398056267s to LoadCachedImages
	W1030 19:29:00.366916  428523 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I1030 19:29:00.366940  428523 kubeadm.go:934] updating node { 192.168.50.90 8443 v1.20.0 crio true true} ...
	I1030 19:29:00.367051  428523 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-831845 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.90
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-831845 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1030 19:29:00.367134  428523 ssh_runner.go:195] Run: crio config
	I1030 19:29:00.424509  428523 cni.go:84] Creating CNI manager for ""
	I1030 19:29:00.424532  428523 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:29:00.424546  428523 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1030 19:29:00.424567  428523 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.90 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-831845 NodeName:kubernetes-upgrade-831845 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.90"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.90 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1030 19:29:00.424747  428523 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.90
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-831845"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.90
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.90"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1030 19:29:00.424819  428523 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1030 19:29:00.435513  428523 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 19:29:00.435582  428523 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1030 19:29:00.445333  428523 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I1030 19:29:00.464967  428523 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 19:29:00.481888  428523 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1030 19:29:00.499168  428523 ssh_runner.go:195] Run: grep 192.168.50.90	control-plane.minikube.internal$ /etc/hosts
	I1030 19:29:00.503211  428523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.90	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:29:00.515655  428523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:29:00.648911  428523 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:29:00.667601  428523 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kubernetes-upgrade-831845 for IP: 192.168.50.90
	I1030 19:29:00.667629  428523 certs.go:194] generating shared ca certs ...
	I1030 19:29:00.667652  428523 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:29:00.667834  428523 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 19:29:00.667884  428523 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 19:29:00.667898  428523 certs.go:256] generating profile certs ...
	I1030 19:29:00.667967  428523 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kubernetes-upgrade-831845/client.key
	I1030 19:29:00.667985  428523 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kubernetes-upgrade-831845/client.crt with IP's: []
	I1030 19:29:00.889213  428523 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kubernetes-upgrade-831845/client.crt ...
	I1030 19:29:00.889247  428523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kubernetes-upgrade-831845/client.crt: {Name:mkc713f673a9b5057a566c98c1a1f853f77c484c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:29:00.889420  428523 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kubernetes-upgrade-831845/client.key ...
	I1030 19:29:00.889434  428523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kubernetes-upgrade-831845/client.key: {Name:mk95d2044a9201a18fb6f3e6df2524d376b9689b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:29:00.889528  428523 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kubernetes-upgrade-831845/apiserver.key.b0bb2018
	I1030 19:29:00.889554  428523 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kubernetes-upgrade-831845/apiserver.crt.b0bb2018 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.90]
	I1030 19:29:01.042886  428523 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kubernetes-upgrade-831845/apiserver.crt.b0bb2018 ...
	I1030 19:29:01.042920  428523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kubernetes-upgrade-831845/apiserver.crt.b0bb2018: {Name:mke2c126ca325aa2c029a6beac31dbd62879909d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:29:01.043116  428523 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kubernetes-upgrade-831845/apiserver.key.b0bb2018 ...
	I1030 19:29:01.043134  428523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kubernetes-upgrade-831845/apiserver.key.b0bb2018: {Name:mk186c5da6b2d06f2dd167f513b46ca597fca28b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:29:01.043230  428523 certs.go:381] copying /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kubernetes-upgrade-831845/apiserver.crt.b0bb2018 -> /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kubernetes-upgrade-831845/apiserver.crt
	I1030 19:29:01.043308  428523 certs.go:385] copying /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kubernetes-upgrade-831845/apiserver.key.b0bb2018 -> /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kubernetes-upgrade-831845/apiserver.key
	I1030 19:29:01.043367  428523 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kubernetes-upgrade-831845/proxy-client.key
	I1030 19:29:01.043385  428523 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kubernetes-upgrade-831845/proxy-client.crt with IP's: []
	I1030 19:29:01.201403  428523 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kubernetes-upgrade-831845/proxy-client.crt ...
	I1030 19:29:01.201442  428523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kubernetes-upgrade-831845/proxy-client.crt: {Name:mk606e613eda0f95f3d0db9b917989fd3b493f2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:29:01.201616  428523 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kubernetes-upgrade-831845/proxy-client.key ...
	I1030 19:29:01.201628  428523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kubernetes-upgrade-831845/proxy-client.key: {Name:mk929ae137928233702295d1bca1e3b20dab9b33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:29:01.201799  428523 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem (1338 bytes)
	W1030 19:29:01.201836  428523 certs.go:480] ignoring /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144_empty.pem, impossibly tiny 0 bytes
	I1030 19:29:01.201847  428523 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 19:29:01.201871  428523 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 19:29:01.201894  428523 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 19:29:01.201914  428523 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 19:29:01.201951  428523 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:29:01.204824  428523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 19:29:01.232225  428523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 19:29:01.257235  428523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 19:29:01.280767  428523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 19:29:01.304197  428523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kubernetes-upgrade-831845/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1030 19:29:01.327736  428523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kubernetes-upgrade-831845/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1030 19:29:01.358719  428523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kubernetes-upgrade-831845/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 19:29:01.383574  428523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kubernetes-upgrade-831845/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1030 19:29:01.408052  428523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /usr/share/ca-certificates/3891442.pem (1708 bytes)
	I1030 19:29:01.435345  428523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 19:29:01.459417  428523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem --> /usr/share/ca-certificates/389144.pem (1338 bytes)
	I1030 19:29:01.486413  428523 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1030 19:29:01.513865  428523 ssh_runner.go:195] Run: openssl version
	I1030 19:29:01.523547  428523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/389144.pem && ln -fs /usr/share/ca-certificates/389144.pem /etc/ssl/certs/389144.pem"
	I1030 19:29:01.538918  428523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/389144.pem
	I1030 19:29:01.543989  428523 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 19:29:01.544061  428523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/389144.pem
	I1030 19:29:01.554300  428523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/389144.pem /etc/ssl/certs/51391683.0"
	I1030 19:29:01.567672  428523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3891442.pem && ln -fs /usr/share/ca-certificates/3891442.pem /etc/ssl/certs/3891442.pem"
	I1030 19:29:01.578965  428523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3891442.pem
	I1030 19:29:01.583863  428523 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 19:29:01.583922  428523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3891442.pem
	I1030 19:29:01.591747  428523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3891442.pem /etc/ssl/certs/3ec20f2e.0"
	I1030 19:29:01.606540  428523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 19:29:01.621294  428523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:29:01.625791  428523 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:29:01.625861  428523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:29:01.631648  428523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 19:29:01.642833  428523 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 19:29:01.647705  428523 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1030 19:29:01.647773  428523 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-831845 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-831845 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.90 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:29:01.647884  428523 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1030 19:29:01.647947  428523 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:29:01.690897  428523 cri.go:89] found id: ""
	I1030 19:29:01.690989  428523 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1030 19:29:01.703020  428523 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 19:29:01.713550  428523 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:29:01.724354  428523 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:29:01.724375  428523 kubeadm.go:157] found existing configuration files:
	
	I1030 19:29:01.724429  428523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 19:29:01.733847  428523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:29:01.733910  428523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:29:01.744115  428523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 19:29:01.753913  428523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:29:01.753983  428523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:29:01.764054  428523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 19:29:01.773415  428523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:29:01.773480  428523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:29:01.783419  428523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 19:29:01.793567  428523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:29:01.793652  428523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1030 19:29:01.803743  428523 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1030 19:29:01.940412  428523 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1030 19:29:01.940658  428523 kubeadm.go:310] [preflight] Running pre-flight checks
	I1030 19:29:02.092120  428523 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1030 19:29:02.092264  428523 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1030 19:29:02.092395  428523 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1030 19:29:02.287018  428523 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1030 19:29:02.289027  428523 out.go:235]   - Generating certificates and keys ...
	I1030 19:29:02.289160  428523 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1030 19:29:02.289250  428523 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1030 19:29:02.484045  428523 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1030 19:29:02.702156  428523 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1030 19:29:02.815012  428523 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1030 19:29:02.960180  428523 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1030 19:29:03.129082  428523 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1030 19:29:03.129307  428523 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-831845 localhost] and IPs [192.168.50.90 127.0.0.1 ::1]
	I1030 19:29:03.191792  428523 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1030 19:29:03.192011  428523 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-831845 localhost] and IPs [192.168.50.90 127.0.0.1 ::1]
	I1030 19:29:03.761158  428523 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1030 19:29:03.942430  428523 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1030 19:29:04.224534  428523 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1030 19:29:04.225403  428523 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1030 19:29:04.390665  428523 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1030 19:29:04.601072  428523 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1030 19:29:04.753288  428523 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1030 19:29:05.011385  428523 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1030 19:29:05.027191  428523 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1030 19:29:05.028102  428523 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1030 19:29:05.028151  428523 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1030 19:29:05.146766  428523 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1030 19:29:05.148816  428523 out.go:235]   - Booting up control plane ...
	I1030 19:29:05.148936  428523 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1030 19:29:05.156327  428523 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1030 19:29:05.157813  428523 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1030 19:29:05.159159  428523 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1030 19:29:05.165901  428523 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1030 19:29:45.162809  428523 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1030 19:29:45.163329  428523 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:29:45.163531  428523 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:29:50.163510  428523 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:29:50.163725  428523 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:30:00.163074  428523 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:30:00.163347  428523 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:30:20.162996  428523 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:30:20.163237  428523 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:31:00.164989  428523 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:31:00.165272  428523 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:31:00.165446  428523 kubeadm.go:310] 
	I1030 19:31:00.165516  428523 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1030 19:31:00.165614  428523 kubeadm.go:310] 		timed out waiting for the condition
	I1030 19:31:00.165631  428523 kubeadm.go:310] 
	I1030 19:31:00.165674  428523 kubeadm.go:310] 	This error is likely caused by:
	I1030 19:31:00.165732  428523 kubeadm.go:310] 		- The kubelet is not running
	I1030 19:31:00.165877  428523 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1030 19:31:00.165886  428523 kubeadm.go:310] 
	I1030 19:31:00.166020  428523 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1030 19:31:00.166063  428523 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1030 19:31:00.166106  428523 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1030 19:31:00.166113  428523 kubeadm.go:310] 
	I1030 19:31:00.166263  428523 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1030 19:31:00.166380  428523 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1030 19:31:00.166391  428523 kubeadm.go:310] 
	I1030 19:31:00.166538  428523 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1030 19:31:00.166651  428523 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1030 19:31:00.166743  428523 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1030 19:31:00.166836  428523 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1030 19:31:00.166843  428523 kubeadm.go:310] 
	I1030 19:31:00.167674  428523 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1030 19:31:00.167807  428523 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1030 19:31:00.167911  428523 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1030 19:31:00.168102  428523 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-831845 localhost] and IPs [192.168.50.90 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-831845 localhost] and IPs [192.168.50.90 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1030 19:31:00.168166  428523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
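At this point the first kubeadm init attempt has timed out waiting for the kubelet's /healthz endpoint, so minikube wipes the partial state with kubeadm reset and retries. On a node in this state the usual next step is the check kubeadm itself recommends; the commands below are taken from the advice printed in the log above:

	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	# list any control-plane containers CRI-O managed to start
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
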
	I1030 19:31:00.670191  428523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 19:31:00.690135  428523 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:31:00.703159  428523 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:31:00.703188  428523 kubeadm.go:157] found existing configuration files:
	
	I1030 19:31:00.703245  428523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 19:31:00.715371  428523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:31:00.715440  428523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:31:00.727683  428523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 19:31:00.739475  428523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:31:00.739539  428523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:31:00.751605  428523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 19:31:00.763577  428523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:31:00.763641  428523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:31:00.775870  428523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 19:31:00.787591  428523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:31:00.787663  428523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1030 19:31:00.800318  428523 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1030 19:31:00.904541  428523 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1030 19:31:00.904626  428523 kubeadm.go:310] [preflight] Running pre-flight checks
	I1030 19:31:01.110664  428523 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1030 19:31:01.110812  428523 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1030 19:31:01.110939  428523 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1030 19:31:01.345821  428523 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1030 19:31:01.350932  428523 out.go:235]   - Generating certificates and keys ...
	I1030 19:31:01.351097  428523 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1030 19:31:01.351227  428523 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1030 19:31:01.351395  428523 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1030 19:31:01.351520  428523 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1030 19:31:01.351672  428523 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1030 19:31:01.351775  428523 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1030 19:31:01.351854  428523 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1030 19:31:01.351929  428523 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1030 19:31:01.352017  428523 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1030 19:31:01.352113  428523 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1030 19:31:01.352146  428523 kubeadm.go:310] [certs] Using the existing "sa" key
	I1030 19:31:01.352192  428523 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1030 19:31:01.497032  428523 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1030 19:31:01.679499  428523 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1030 19:31:01.823849  428523 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1030 19:31:02.083577  428523 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1030 19:31:02.108312  428523 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1030 19:31:02.109639  428523 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1030 19:31:02.109722  428523 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1030 19:31:02.271953  428523 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1030 19:31:02.274119  428523 out.go:235]   - Booting up control plane ...
	I1030 19:31:02.274327  428523 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1030 19:31:02.280004  428523 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1030 19:31:02.280936  428523 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1030 19:31:02.281713  428523 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1030 19:31:02.285324  428523 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1030 19:31:42.288731  428523 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1030 19:31:42.289197  428523 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:31:42.289484  428523 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:31:47.290566  428523 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:31:47.290844  428523 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:31:57.290964  428523 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:31:57.291308  428523 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:32:17.290381  428523 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:32:17.290596  428523 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:32:57.291597  428523 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:32:57.291879  428523 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:32:57.291899  428523 kubeadm.go:310] 
	I1030 19:32:57.291955  428523 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1030 19:32:57.292044  428523 kubeadm.go:310] 		timed out waiting for the condition
	I1030 19:32:57.292084  428523 kubeadm.go:310] 
	I1030 19:32:57.292130  428523 kubeadm.go:310] 	This error is likely caused by:
	I1030 19:32:57.292177  428523 kubeadm.go:310] 		- The kubelet is not running
	I1030 19:32:57.292309  428523 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1030 19:32:57.292326  428523 kubeadm.go:310] 
	I1030 19:32:57.292481  428523 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1030 19:32:57.292530  428523 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1030 19:32:57.292577  428523 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1030 19:32:57.292587  428523 kubeadm.go:310] 
	I1030 19:32:57.292733  428523 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1030 19:32:57.292894  428523 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1030 19:32:57.292919  428523 kubeadm.go:310] 
	I1030 19:32:57.293097  428523 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1030 19:32:57.293233  428523 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1030 19:32:57.293353  428523 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1030 19:32:57.293446  428523 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1030 19:32:57.293459  428523 kubeadm.go:310] 
	I1030 19:32:57.293903  428523 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1030 19:32:57.294029  428523 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1030 19:32:57.294113  428523 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1030 19:32:57.294202  428523 kubeadm.go:394] duration metric: took 3m55.646435652s to StartCluster

	I1030 19:32:57.294264  428523 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:32:57.294354  428523 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:32:57.360684  428523 cri.go:89] found id: ""
	I1030 19:32:57.360717  428523 logs.go:282] 0 containers: []
	W1030 19:32:57.360728  428523 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:32:57.360736  428523 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:32:57.360801  428523 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:32:57.402771  428523 cri.go:89] found id: ""
	I1030 19:32:57.402823  428523 logs.go:282] 0 containers: []
	W1030 19:32:57.402835  428523 logs.go:284] No container was found matching "etcd"
	I1030 19:32:57.402844  428523 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:32:57.402918  428523 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:32:57.441914  428523 cri.go:89] found id: ""
	I1030 19:32:57.441947  428523 logs.go:282] 0 containers: []
	W1030 19:32:57.441962  428523 logs.go:284] No container was found matching "coredns"
	I1030 19:32:57.441975  428523 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:32:57.442044  428523 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:32:57.496150  428523 cri.go:89] found id: ""
	I1030 19:32:57.496178  428523 logs.go:282] 0 containers: []
	W1030 19:32:57.496188  428523 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:32:57.496196  428523 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:32:57.496250  428523 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:32:57.539174  428523 cri.go:89] found id: ""
	I1030 19:32:57.539216  428523 logs.go:282] 0 containers: []
	W1030 19:32:57.539229  428523 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:32:57.539248  428523 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:32:57.539327  428523 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:32:57.578600  428523 cri.go:89] found id: ""
	I1030 19:32:57.578626  428523 logs.go:282] 0 containers: []
	W1030 19:32:57.578641  428523 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:32:57.578650  428523 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:32:57.578714  428523 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:32:57.614906  428523 cri.go:89] found id: ""
	I1030 19:32:57.614939  428523 logs.go:282] 0 containers: []
	W1030 19:32:57.614952  428523 logs.go:284] No container was found matching "kindnet"
	I1030 19:32:57.614968  428523 logs.go:123] Gathering logs for kubelet ...
	I1030 19:32:57.614983  428523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:32:57.673383  428523 logs.go:123] Gathering logs for dmesg ...
	I1030 19:32:57.673424  428523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:32:57.689935  428523 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:32:57.689969  428523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:32:57.888019  428523 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:32:57.888040  428523 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:32:57.888060  428523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:32:58.002859  428523 logs.go:123] Gathering logs for container status ...
	I1030 19:32:58.002899  428523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
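The diagnostics gathered here are consistent with the kubelet never starting the static pods: crictl ps -a finds no kube-apiserver, etcd, kube-scheduler, kube-controller-manager, kube-proxy, coredns or kindnet containers, and kubectl describe nodes fails because nothing is listening on localhost:8443. A quick way to reproduce that check on the node, using the same commands the log shows:

	sudo crictl ps -a --quiet --name=kube-apiserver   # empty output means no apiserver container was ever created
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
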
	W1030 19:32:58.052203  428523 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1030 19:32:58.052281  428523 out.go:270] * 
	W1030 19:32:58.052351  428523 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1030 19:32:58.052366  428523 out.go:270] * 
	W1030 19:32:58.053610  428523 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 19:32:58.163391  428523 out.go:201] 
	W1030 19:32:58.225976  428523 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1030 19:32:58.226039  428523 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1030 19:32:58.226068  428523 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1030 19:32:58.307677  428523 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-831845 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
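For local reproduction of this failure, the error output above already names the relevant checks; the following is a hedged sketch that simply combines the failing invocation with the flag minikube itself suggests (profile name, driver, and runtime are taken from this run; this is not a verified fix):

	# retry the v1.20.0 start with the cgroup-driver hint from the suggestion above
	out/minikube-linux-amd64 start -p kubernetes-upgrade-831845 --memory=2200 \
	  --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd
	# if the kubelet still does not come up, inspect it and the control-plane containers
	# (these checks run inside the guest, e.g. via 'out/minikube-linux-amd64 ssh -p kubernetes-upgrade-831845')
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause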
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-831845
E1030 19:33:00.311733  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/functional-683899/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-831845: (2.549384267s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-831845 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-831845 status --format={{.Host}}: exit status 7 (75.717937ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-831845 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-831845 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (41.68577035s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-831845 version --output=json
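The version check at this step only confirms that the upgraded control plane now reports the newer release. As a point of reference, 'kubectl version --output=json' emits client and server version objects; a hedged sketch of the shape (field values illustrative, not captured from this run):

	{
	  "clientVersion": { "gitVersion": "v1.31.2", ... },
	  "serverVersion": { "gitVersion": "v1.31.2", ... }
	}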
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-831845 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-831845 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (94.873606ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-831845] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19883
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19883-381834/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19883-381834/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-831845
	    minikube start -p kubernetes-upgrade-831845 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8318452 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.2, by running:
	    
	    minikube start -p kubernetes-upgrade-831845 --kubernetes-version=v1.31.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-831845 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-831845 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m28.764947488s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-10-30 19:35:11.670971092 +0000 UTC m=+4468.418154629
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-831845 -n kubernetes-upgrade-831845
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-831845 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-831845 logs -n 25: (1.905127609s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-534248 sudo                        | custom-flannel-534248 | jenkins | v1.34.0 | 30 Oct 24 19:35 UTC | 30 Oct 24 19:35 UTC |
	|         | ip r s                                               |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-534248 sudo                        | custom-flannel-534248 | jenkins | v1.34.0 | 30 Oct 24 19:35 UTC | 30 Oct 24 19:35 UTC |
	|         | iptables-save                                        |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-534248 sudo                        | custom-flannel-534248 | jenkins | v1.34.0 | 30 Oct 24 19:35 UTC | 30 Oct 24 19:35 UTC |
	|         | iptables -t nat -L -n -v                             |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-534248 sudo                        | custom-flannel-534248 | jenkins | v1.34.0 | 30 Oct 24 19:35 UTC | 30 Oct 24 19:35 UTC |
	|         | cat /run/flannel/subnet.env                          |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-534248                             | custom-flannel-534248 | jenkins | v1.34.0 | 30 Oct 24 19:35 UTC |                     |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /etc/kube-flannel/cni-conf.json                      |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-534248 sudo                        | custom-flannel-534248 | jenkins | v1.34.0 | 30 Oct 24 19:35 UTC | 30 Oct 24 19:35 UTC |
	|         | systemctl status kubelet --all                       |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-534248                             | custom-flannel-534248 | jenkins | v1.34.0 | 30 Oct 24 19:35 UTC | 30 Oct 24 19:35 UTC |
	|         | sudo systemctl cat kubelet                           |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-534248 sudo                        | custom-flannel-534248 | jenkins | v1.34.0 | 30 Oct 24 19:35 UTC | 30 Oct 24 19:35 UTC |
	|         | journalctl -xeu kubelet --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-534248                             | custom-flannel-534248 | jenkins | v1.34.0 | 30 Oct 24 19:35 UTC | 30 Oct 24 19:35 UTC |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-534248                             | custom-flannel-534248 | jenkins | v1.34.0 | 30 Oct 24 19:35 UTC | 30 Oct 24 19:35 UTC |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-534248 sudo                        | custom-flannel-534248 | jenkins | v1.34.0 | 30 Oct 24 19:35 UTC |                     |
	|         | systemctl status docker --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-534248                             | custom-flannel-534248 | jenkins | v1.34.0 | 30 Oct 24 19:35 UTC | 30 Oct 24 19:35 UTC |
	|         | sudo systemctl cat docker                            |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-534248 sudo                        | custom-flannel-534248 | jenkins | v1.34.0 | 30 Oct 24 19:35 UTC | 30 Oct 24 19:35 UTC |
	|         | cat /etc/docker/daemon.json                          |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-534248 sudo                        | custom-flannel-534248 | jenkins | v1.34.0 | 30 Oct 24 19:35 UTC |                     |
	|         | docker system info                                   |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-534248 sudo                        | custom-flannel-534248 | jenkins | v1.34.0 | 30 Oct 24 19:35 UTC |                     |
	|         | systemctl status cri-docker                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-534248                             | custom-flannel-534248 | jenkins | v1.34.0 | 30 Oct 24 19:35 UTC | 30 Oct 24 19:35 UTC |
	|         | sudo systemctl cat cri-docker                        |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-534248 sudo cat                    | custom-flannel-534248 | jenkins | v1.34.0 | 30 Oct 24 19:35 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-534248 sudo cat                    | custom-flannel-534248 | jenkins | v1.34.0 | 30 Oct 24 19:35 UTC | 30 Oct 24 19:35 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-534248 sudo                        | custom-flannel-534248 | jenkins | v1.34.0 | 30 Oct 24 19:35 UTC | 30 Oct 24 19:35 UTC |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-534248 sudo                        | custom-flannel-534248 | jenkins | v1.34.0 | 30 Oct 24 19:35 UTC |                     |
	|         | systemctl status containerd                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-534248                             | custom-flannel-534248 | jenkins | v1.34.0 | 30 Oct 24 19:35 UTC | 30 Oct 24 19:35 UTC |
	|         | sudo systemctl cat containerd                        |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-534248 sudo cat                    | custom-flannel-534248 | jenkins | v1.34.0 | 30 Oct 24 19:35 UTC | 30 Oct 24 19:35 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-534248                             | custom-flannel-534248 | jenkins | v1.34.0 | 30 Oct 24 19:35 UTC | 30 Oct 24 19:35 UTC |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-534248 sudo                        | custom-flannel-534248 | jenkins | v1.34.0 | 30 Oct 24 19:35 UTC | 30 Oct 24 19:35 UTC |
	|         | containerd config dump                               |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-534248 sudo                        | custom-flannel-534248 | jenkins | v1.34.0 | 30 Oct 24 19:35 UTC | 30 Oct 24 19:35 UTC |
	|         | systemctl status crio --all                          |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/30 19:34:29
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1030 19:34:29.458144  437692 out.go:345] Setting OutFile to fd 1 ...
	I1030 19:34:29.458318  437692 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 19:34:29.458330  437692 out.go:358] Setting ErrFile to fd 2...
	I1030 19:34:29.458337  437692 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 19:34:29.458615  437692 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19883-381834/.minikube/bin
	I1030 19:34:29.459452  437692 out.go:352] Setting JSON to false
	I1030 19:34:29.461092  437692 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":11812,"bootTime":1730305057,"procs":289,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1030 19:34:29.461248  437692 start.go:139] virtualization: kvm guest
	I1030 19:34:29.464153  437692 out.go:177] * [flannel-534248] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1030 19:34:29.465796  437692 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 19:34:29.465791  437692 notify.go:220] Checking for updates...
	I1030 19:34:29.467296  437692 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 19:34:29.468741  437692 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 19:34:29.470040  437692 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 19:34:29.471365  437692 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1030 19:34:29.472728  437692 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 19:34:29.474778  437692 config.go:182] Loaded profile config "custom-flannel-534248": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:34:29.474935  437692 config.go:182] Loaded profile config "enable-default-cni-534248": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:34:29.475123  437692 config.go:182] Loaded profile config "kubernetes-upgrade-831845": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:34:29.475249  437692 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 19:34:29.525097  437692 out.go:177] * Using the kvm2 driver based on user configuration
	I1030 19:34:29.526463  437692 start.go:297] selected driver: kvm2
	I1030 19:34:29.526479  437692 start.go:901] validating driver "kvm2" against <nil>
	I1030 19:34:29.526523  437692 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 19:34:29.527558  437692 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 19:34:29.527641  437692 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19883-381834/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1030 19:34:29.544052  437692 install.go:137] /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1030 19:34:29.544120  437692 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1030 19:34:29.544447  437692 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 19:34:29.544489  437692 cni.go:84] Creating CNI manager for "flannel"
	I1030 19:34:29.544496  437692 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I1030 19:34:29.544560  437692 start.go:340] cluster config:
	{Name:flannel-534248 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-534248 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:34:29.544703  437692 iso.go:125] acquiring lock: {Name:mk4a5115b605a7a5e9069193daa664d721189792 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 19:34:29.547148  437692 out.go:177] * Starting "flannel-534248" primary control-plane node in "flannel-534248" cluster
	I1030 19:34:29.548414  437692 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 19:34:29.548482  437692 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1030 19:34:29.548495  437692 cache.go:56] Caching tarball of preloaded images
	I1030 19:34:29.548588  437692 preload.go:172] Found /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1030 19:34:29.548601  437692 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1030 19:34:29.548718  437692 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/flannel-534248/config.json ...
	I1030 19:34:29.548740  437692 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/flannel-534248/config.json: {Name:mkd1efb45d96fc22b87f3b9bfd54a4f1b2f49f46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:34:29.548895  437692 start.go:360] acquireMachinesLock for flannel-534248: {Name:mk35e25f53fa8cfadb39ca0ecdccfc2b3fbe845b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 19:34:29.548930  437692 start.go:364] duration metric: took 19.78µs to acquireMachinesLock for "flannel-534248"
	I1030 19:34:29.548950  437692 start.go:93] Provisioning new machine with config: &{Name:flannel-534248 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-534248 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 19:34:29.549030  437692 start.go:125] createHost starting for "" (driver="kvm2")
	I1030 19:34:28.200241  434579 node_ready.go:53] node "custom-flannel-534248" has status "Ready":"False"
	I1030 19:34:29.554667  434579 node_ready.go:49] node "custom-flannel-534248" has status "Ready":"True"
	I1030 19:34:29.554689  434579 node_ready.go:38] duration metric: took 14.005045164s for node "custom-flannel-534248" to be "Ready" ...
	I1030 19:34:29.554703  434579 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:34:29.564775  434579 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-wn26s" in "kube-system" namespace to be "Ready" ...
	I1030 19:34:31.572287  434579 pod_ready.go:103] pod "coredns-7c65d6cfc9-wn26s" in "kube-system" namespace has status "Ready":"False"
	I1030 19:34:29.024687  435835 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 19:34:29.024719  435835 machine.go:96] duration metric: took 8.954000962s to provisionDockerMachine
	I1030 19:34:29.024734  435835 start.go:293] postStartSetup for "kubernetes-upgrade-831845" (driver="kvm2")
	I1030 19:34:29.024748  435835 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 19:34:29.024768  435835 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .DriverName
	I1030 19:34:29.025081  435835 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 19:34:29.025105  435835 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHHostname
	I1030 19:34:29.059657  435835 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:34:29.060138  435835 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0a:2e", ip: ""} in network mk-kubernetes-upgrade-831845: {Iface:virbr2 ExpiryTime:2024-10-30 20:33:13 +0000 UTC Type:0 Mac:52:54:00:52:0a:2e Iaid: IPaddr:192.168.50.90 Prefix:24 Hostname:kubernetes-upgrade-831845 Clientid:01:52:54:00:52:0a:2e}
	I1030 19:34:29.060185  435835 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined IP address 192.168.50.90 and MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:34:29.060403  435835 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHPort
	I1030 19:34:29.060601  435835 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHKeyPath
	I1030 19:34:29.060804  435835 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHUsername
	I1030 19:34:29.060994  435835 sshutil.go:53] new ssh client: &{IP:192.168.50.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/kubernetes-upgrade-831845/id_rsa Username:docker}
	I1030 19:34:29.149191  435835 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 19:34:29.154935  435835 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 19:34:29.154965  435835 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 19:34:29.155048  435835 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 19:34:29.155151  435835 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 19:34:29.155281  435835 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 19:34:29.165426  435835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:34:29.195699  435835 start.go:296] duration metric: took 170.946819ms for postStartSetup
	I1030 19:34:29.195752  435835 fix.go:56] duration metric: took 9.155039544s for fixHost
	I1030 19:34:29.195780  435835 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHHostname
	I1030 19:34:29.198853  435835 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:34:29.199337  435835 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0a:2e", ip: ""} in network mk-kubernetes-upgrade-831845: {Iface:virbr2 ExpiryTime:2024-10-30 20:33:13 +0000 UTC Type:0 Mac:52:54:00:52:0a:2e Iaid: IPaddr:192.168.50.90 Prefix:24 Hostname:kubernetes-upgrade-831845 Clientid:01:52:54:00:52:0a:2e}
	I1030 19:34:29.199368  435835 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined IP address 192.168.50.90 and MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:34:29.199514  435835 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHPort
	I1030 19:34:29.199693  435835 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHKeyPath
	I1030 19:34:29.199803  435835 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHKeyPath
	I1030 19:34:29.199929  435835 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHUsername
	I1030 19:34:29.200129  435835 main.go:141] libmachine: Using SSH client type: native
	I1030 19:34:29.200317  435835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.90 22 <nil> <nil>}
	I1030 19:34:29.200328  435835 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 19:34:29.319473  435835 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730316869.273530324
	
	I1030 19:34:29.319502  435835 fix.go:216] guest clock: 1730316869.273530324
	I1030 19:34:29.319513  435835 fix.go:229] Guest: 2024-10-30 19:34:29.273530324 +0000 UTC Remote: 2024-10-30 19:34:29.195757234 +0000 UTC m=+46.284434076 (delta=77.77309ms)
	I1030 19:34:29.319541  435835 fix.go:200] guest clock delta is within tolerance: 77.77309ms
	I1030 19:34:29.319547  435835 start.go:83] releasing machines lock for "kubernetes-upgrade-831845", held for 9.278866971s
	I1030 19:34:29.319570  435835 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .DriverName
	I1030 19:34:29.319843  435835 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetIP
	I1030 19:34:29.322568  435835 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:34:29.322935  435835 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0a:2e", ip: ""} in network mk-kubernetes-upgrade-831845: {Iface:virbr2 ExpiryTime:2024-10-30 20:33:13 +0000 UTC Type:0 Mac:52:54:00:52:0a:2e Iaid: IPaddr:192.168.50.90 Prefix:24 Hostname:kubernetes-upgrade-831845 Clientid:01:52:54:00:52:0a:2e}
	I1030 19:34:29.322973  435835 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined IP address 192.168.50.90 and MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:34:29.323101  435835 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .DriverName
	I1030 19:34:29.323755  435835 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .DriverName
	I1030 19:34:29.323952  435835 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .DriverName
	I1030 19:34:29.324075  435835 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 19:34:29.324125  435835 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHHostname
	I1030 19:34:29.324130  435835 ssh_runner.go:195] Run: cat /version.json
	I1030 19:34:29.324146  435835 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHHostname
	I1030 19:34:29.326897  435835 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:34:29.327243  435835 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:34:29.327309  435835 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0a:2e", ip: ""} in network mk-kubernetes-upgrade-831845: {Iface:virbr2 ExpiryTime:2024-10-30 20:33:13 +0000 UTC Type:0 Mac:52:54:00:52:0a:2e Iaid: IPaddr:192.168.50.90 Prefix:24 Hostname:kubernetes-upgrade-831845 Clientid:01:52:54:00:52:0a:2e}
	I1030 19:34:29.327340  435835 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined IP address 192.168.50.90 and MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:34:29.327464  435835 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHPort
	I1030 19:34:29.327646  435835 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHKeyPath
	I1030 19:34:29.327705  435835 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0a:2e", ip: ""} in network mk-kubernetes-upgrade-831845: {Iface:virbr2 ExpiryTime:2024-10-30 20:33:13 +0000 UTC Type:0 Mac:52:54:00:52:0a:2e Iaid: IPaddr:192.168.50.90 Prefix:24 Hostname:kubernetes-upgrade-831845 Clientid:01:52:54:00:52:0a:2e}
	I1030 19:34:29.327730  435835 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined IP address 192.168.50.90 and MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:34:29.327793  435835 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHUsername
	I1030 19:34:29.327863  435835 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHPort
	I1030 19:34:29.327940  435835 sshutil.go:53] new ssh client: &{IP:192.168.50.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/kubernetes-upgrade-831845/id_rsa Username:docker}
	I1030 19:34:29.328022  435835 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHKeyPath
	I1030 19:34:29.328130  435835 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetSSHUsername
	I1030 19:34:29.328263  435835 sshutil.go:53] new ssh client: &{IP:192.168.50.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/kubernetes-upgrade-831845/id_rsa Username:docker}
	I1030 19:34:29.433763  435835 ssh_runner.go:195] Run: systemctl --version
	I1030 19:34:29.442631  435835 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 19:34:29.619179  435835 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 19:34:29.626885  435835 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 19:34:29.626961  435835 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 19:34:29.645058  435835 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1030 19:34:29.645091  435835 start.go:495] detecting cgroup driver to use...
	I1030 19:34:29.645165  435835 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 19:34:29.676540  435835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 19:34:29.698762  435835 docker.go:217] disabling cri-docker service (if available) ...
	I1030 19:34:29.698827  435835 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 19:34:29.716394  435835 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 19:34:29.731367  435835 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 19:34:29.884144  435835 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 19:34:30.043738  435835 docker.go:233] disabling docker service ...
	I1030 19:34:30.043830  435835 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 19:34:30.112503  435835 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 19:34:30.179983  435835 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 19:34:30.432144  435835 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 19:34:30.693463  435835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 19:34:30.767959  435835 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 19:34:30.853578  435835 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1030 19:34:30.853653  435835 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:34:30.940429  435835 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 19:34:30.940521  435835 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:34:30.994681  435835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:34:31.043736  435835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:34:31.099859  435835 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 19:34:31.174553  435835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:34:31.274716  435835 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:34:31.481744  435835 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:34:31.607788  435835 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 19:34:31.632962  435835 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 19:34:31.662476  435835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:34:31.957442  435835 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1030 19:34:32.730066  435835 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 19:34:32.730174  435835 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 19:34:32.735438  435835 start.go:563] Will wait 60s for crictl version
	I1030 19:34:32.735487  435835 ssh_runner.go:195] Run: which crictl
	I1030 19:34:32.740118  435835 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 19:34:32.787566  435835 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1030 19:34:32.787649  435835 ssh_runner.go:195] Run: crio --version
	I1030 19:34:32.825798  435835 ssh_runner.go:195] Run: crio --version
	I1030 19:34:32.866705  435835 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1030 19:34:32.868110  435835 main.go:141] libmachine: (kubernetes-upgrade-831845) Calling .GetIP
	I1030 19:34:32.871395  435835 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:34:32.871845  435835 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0a:2e", ip: ""} in network mk-kubernetes-upgrade-831845: {Iface:virbr2 ExpiryTime:2024-10-30 20:33:13 +0000 UTC Type:0 Mac:52:54:00:52:0a:2e Iaid: IPaddr:192.168.50.90 Prefix:24 Hostname:kubernetes-upgrade-831845 Clientid:01:52:54:00:52:0a:2e}
	I1030 19:34:32.871876  435835 main.go:141] libmachine: (kubernetes-upgrade-831845) DBG | domain kubernetes-upgrade-831845 has defined IP address 192.168.50.90 and MAC address 52:54:00:52:0a:2e in network mk-kubernetes-upgrade-831845
	I1030 19:34:32.872144  435835 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1030 19:34:32.878007  435835 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-831845 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-831845 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.90 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1030 19:34:32.878222  435835 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 19:34:32.878302  435835 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:34:32.932236  435835 crio.go:514] all images are preloaded for cri-o runtime.
	I1030 19:34:32.932269  435835 crio.go:433] Images already preloaded, skipping extraction
	I1030 19:34:32.932349  435835 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:34:29.550476  437692 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1030 19:34:29.550678  437692 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:34:29.550736  437692 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:34:29.566703  437692 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45591
	I1030 19:34:29.567259  437692 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:34:29.567838  437692 main.go:141] libmachine: Using API Version  1
	I1030 19:34:29.567865  437692 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:34:29.568271  437692 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:34:29.568510  437692 main.go:141] libmachine: (flannel-534248) Calling .GetMachineName
	I1030 19:34:29.568676  437692 main.go:141] libmachine: (flannel-534248) Calling .DriverName
	I1030 19:34:29.568842  437692 start.go:159] libmachine.API.Create for "flannel-534248" (driver="kvm2")
	I1030 19:34:29.568869  437692 client.go:168] LocalClient.Create starting
	I1030 19:34:29.568914  437692 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem
	I1030 19:34:29.568956  437692 main.go:141] libmachine: Decoding PEM data...
	I1030 19:34:29.568980  437692 main.go:141] libmachine: Parsing certificate...
	I1030 19:34:29.569051  437692 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem
	I1030 19:34:29.569087  437692 main.go:141] libmachine: Decoding PEM data...
	I1030 19:34:29.569107  437692 main.go:141] libmachine: Parsing certificate...
	I1030 19:34:29.569131  437692 main.go:141] libmachine: Running pre-create checks...
	I1030 19:34:29.569142  437692 main.go:141] libmachine: (flannel-534248) Calling .PreCreateCheck
	I1030 19:34:29.569564  437692 main.go:141] libmachine: (flannel-534248) Calling .GetConfigRaw
	I1030 19:34:29.570013  437692 main.go:141] libmachine: Creating machine...
	I1030 19:34:29.570030  437692 main.go:141] libmachine: (flannel-534248) Calling .Create
	I1030 19:34:29.570164  437692 main.go:141] libmachine: (flannel-534248) Creating KVM machine...
	I1030 19:34:29.571364  437692 main.go:141] libmachine: (flannel-534248) DBG | found existing default KVM network
	I1030 19:34:29.572897  437692 main.go:141] libmachine: (flannel-534248) DBG | I1030 19:34:29.572726  437714 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:82:2c:29} reservation:<nil>}
	I1030 19:34:29.574280  437692 main.go:141] libmachine: (flannel-534248) DBG | I1030 19:34:29.574202  437714 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:a6:48:09} reservation:<nil>}
	I1030 19:34:29.575833  437692 main.go:141] libmachine: (flannel-534248) DBG | I1030 19:34:29.575743  437714 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00038aad0}
	I1030 19:34:29.575883  437692 main.go:141] libmachine: (flannel-534248) DBG | created network xml: 
	I1030 19:34:29.575900  437692 main.go:141] libmachine: (flannel-534248) DBG | <network>
	I1030 19:34:29.575913  437692 main.go:141] libmachine: (flannel-534248) DBG |   <name>mk-flannel-534248</name>
	I1030 19:34:29.575924  437692 main.go:141] libmachine: (flannel-534248) DBG |   <dns enable='no'/>
	I1030 19:34:29.575967  437692 main.go:141] libmachine: (flannel-534248) DBG |   
	I1030 19:34:29.575981  437692 main.go:141] libmachine: (flannel-534248) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1030 19:34:29.575989  437692 main.go:141] libmachine: (flannel-534248) DBG |     <dhcp>
	I1030 19:34:29.575997  437692 main.go:141] libmachine: (flannel-534248) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1030 19:34:29.576005  437692 main.go:141] libmachine: (flannel-534248) DBG |     </dhcp>
	I1030 19:34:29.576011  437692 main.go:141] libmachine: (flannel-534248) DBG |   </ip>
	I1030 19:34:29.576018  437692 main.go:141] libmachine: (flannel-534248) DBG |   
	I1030 19:34:29.576024  437692 main.go:141] libmachine: (flannel-534248) DBG | </network>
	I1030 19:34:29.576033  437692 main.go:141] libmachine: (flannel-534248) DBG | 
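
The XML just printed is the libvirt network definition minikube generated for the free 192.168.61.0/24 subnet it selected. As a point of reference, a definition of this shape can be rendered from a handful of parameters; the Go sketch below does so with text/template. It is illustrative only: the template, the hypothetical netParams type, and the values (copied from the log above) are not minikube's actual code.

// netxml.go - illustrative only: render a libvirt <network> definition similar
// to the one logged above from a few parameters. Not minikube's actual template.
package main

import (
	"os"
	"text/template"
)

const netXML = `<network>
  <name>mk-{{.Name}}</name>
  <dns enable='no'/>
  <ip address='{{.Gateway}}' netmask='255.255.255.0'>
    <dhcp>
      <range start='{{.RangeStart}}' end='{{.RangeEnd}}'/>
    </dhcp>
  </ip>
</network>
`

// netParams is a hypothetical parameter bag; the values below come from the log.
type netParams struct {
	Name, Gateway, RangeStart, RangeEnd string
}

func main() {
	tmpl := template.Must(template.New("net").Parse(netXML))
	p := netParams{
		Name:       "flannel-534248",
		Gateway:    "192.168.61.1",
		RangeStart: "192.168.61.2",
		RangeEnd:   "192.168.61.253",
	}
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}

Rendering it with these values reproduces the <network> block printed above, which libvirt then instantiates as the private network mk-flannel-534248.
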
	I1030 19:34:29.581149  437692 main.go:141] libmachine: (flannel-534248) DBG | trying to create private KVM network mk-flannel-534248 192.168.61.0/24...
	I1030 19:34:29.681640  437692 main.go:141] libmachine: (flannel-534248) DBG | private KVM network mk-flannel-534248 192.168.61.0/24 created
	I1030 19:34:29.681770  437692 main.go:141] libmachine: (flannel-534248) Setting up store path in /home/jenkins/minikube-integration/19883-381834/.minikube/machines/flannel-534248 ...
	I1030 19:34:29.681848  437692 main.go:141] libmachine: (flannel-534248) Building disk image from file:///home/jenkins/minikube-integration/19883-381834/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1030 19:34:29.681943  437692 main.go:141] libmachine: (flannel-534248) DBG | I1030 19:34:29.681887  437714 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 19:34:29.682108  437692 main.go:141] libmachine: (flannel-534248) Downloading /home/jenkins/minikube-integration/19883-381834/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19883-381834/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1030 19:34:29.982400  437692 main.go:141] libmachine: (flannel-534248) DBG | I1030 19:34:29.982245  437714 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/flannel-534248/id_rsa...
	I1030 19:34:30.044397  437692 main.go:141] libmachine: (flannel-534248) DBG | I1030 19:34:30.044295  437714 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/flannel-534248/flannel-534248.rawdisk...
	I1030 19:34:30.044433  437692 main.go:141] libmachine: (flannel-534248) DBG | Writing magic tar header
	I1030 19:34:30.044452  437692 main.go:141] libmachine: (flannel-534248) DBG | Writing SSH key tar header
	I1030 19:34:30.044531  437692 main.go:141] libmachine: (flannel-534248) DBG | I1030 19:34:30.044448  437714 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19883-381834/.minikube/machines/flannel-534248 ...
	I1030 19:34:30.044585  437692 main.go:141] libmachine: (flannel-534248) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/flannel-534248
	I1030 19:34:30.044614  437692 main.go:141] libmachine: (flannel-534248) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube/machines/flannel-534248 (perms=drwx------)
	I1030 19:34:30.044635  437692 main.go:141] libmachine: (flannel-534248) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube/machines (perms=drwxr-xr-x)
	I1030 19:34:30.044664  437692 main.go:141] libmachine: (flannel-534248) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube (perms=drwxr-xr-x)
	I1030 19:34:30.044685  437692 main.go:141] libmachine: (flannel-534248) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube/machines
	I1030 19:34:30.044698  437692 main.go:141] libmachine: (flannel-534248) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834 (perms=drwxrwxr-x)
	I1030 19:34:30.044711  437692 main.go:141] libmachine: (flannel-534248) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 19:34:30.044725  437692 main.go:141] libmachine: (flannel-534248) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834
	I1030 19:34:30.044736  437692 main.go:141] libmachine: (flannel-534248) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1030 19:34:30.044759  437692 main.go:141] libmachine: (flannel-534248) DBG | Checking permissions on dir: /home/jenkins
	I1030 19:34:30.044772  437692 main.go:141] libmachine: (flannel-534248) DBG | Checking permissions on dir: /home
	I1030 19:34:30.044784  437692 main.go:141] libmachine: (flannel-534248) DBG | Skipping /home - not owner
	I1030 19:34:30.044794  437692 main.go:141] libmachine: (flannel-534248) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1030 19:34:30.044807  437692 main.go:141] libmachine: (flannel-534248) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1030 19:34:30.044819  437692 main.go:141] libmachine: (flannel-534248) Creating domain...
	I1030 19:34:30.046060  437692 main.go:141] libmachine: (flannel-534248) define libvirt domain using xml: 
	I1030 19:34:30.046081  437692 main.go:141] libmachine: (flannel-534248) <domain type='kvm'>
	I1030 19:34:30.046091  437692 main.go:141] libmachine: (flannel-534248)   <name>flannel-534248</name>
	I1030 19:34:30.046099  437692 main.go:141] libmachine: (flannel-534248)   <memory unit='MiB'>3072</memory>
	I1030 19:34:30.046112  437692 main.go:141] libmachine: (flannel-534248)   <vcpu>2</vcpu>
	I1030 19:34:30.046121  437692 main.go:141] libmachine: (flannel-534248)   <features>
	I1030 19:34:30.046130  437692 main.go:141] libmachine: (flannel-534248)     <acpi/>
	I1030 19:34:30.046141  437692 main.go:141] libmachine: (flannel-534248)     <apic/>
	I1030 19:34:30.046153  437692 main.go:141] libmachine: (flannel-534248)     <pae/>
	I1030 19:34:30.046178  437692 main.go:141] libmachine: (flannel-534248)     
	I1030 19:34:30.046190  437692 main.go:141] libmachine: (flannel-534248)   </features>
	I1030 19:34:30.046201  437692 main.go:141] libmachine: (flannel-534248)   <cpu mode='host-passthrough'>
	I1030 19:34:30.046210  437692 main.go:141] libmachine: (flannel-534248)   
	I1030 19:34:30.046220  437692 main.go:141] libmachine: (flannel-534248)   </cpu>
	I1030 19:34:30.046229  437692 main.go:141] libmachine: (flannel-534248)   <os>
	I1030 19:34:30.046239  437692 main.go:141] libmachine: (flannel-534248)     <type>hvm</type>
	I1030 19:34:30.046273  437692 main.go:141] libmachine: (flannel-534248)     <boot dev='cdrom'/>
	I1030 19:34:30.046294  437692 main.go:141] libmachine: (flannel-534248)     <boot dev='hd'/>
	I1030 19:34:30.046305  437692 main.go:141] libmachine: (flannel-534248)     <bootmenu enable='no'/>
	I1030 19:34:30.046315  437692 main.go:141] libmachine: (flannel-534248)   </os>
	I1030 19:34:30.046326  437692 main.go:141] libmachine: (flannel-534248)   <devices>
	I1030 19:34:30.046337  437692 main.go:141] libmachine: (flannel-534248)     <disk type='file' device='cdrom'>
	I1030 19:34:30.046353  437692 main.go:141] libmachine: (flannel-534248)       <source file='/home/jenkins/minikube-integration/19883-381834/.minikube/machines/flannel-534248/boot2docker.iso'/>
	I1030 19:34:30.046363  437692 main.go:141] libmachine: (flannel-534248)       <target dev='hdc' bus='scsi'/>
	I1030 19:34:30.046384  437692 main.go:141] libmachine: (flannel-534248)       <readonly/>
	I1030 19:34:30.046392  437692 main.go:141] libmachine: (flannel-534248)     </disk>
	I1030 19:34:30.046401  437692 main.go:141] libmachine: (flannel-534248)     <disk type='file' device='disk'>
	I1030 19:34:30.046412  437692 main.go:141] libmachine: (flannel-534248)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1030 19:34:30.046424  437692 main.go:141] libmachine: (flannel-534248)       <source file='/home/jenkins/minikube-integration/19883-381834/.minikube/machines/flannel-534248/flannel-534248.rawdisk'/>
	I1030 19:34:30.046437  437692 main.go:141] libmachine: (flannel-534248)       <target dev='hda' bus='virtio'/>
	I1030 19:34:30.046447  437692 main.go:141] libmachine: (flannel-534248)     </disk>
	I1030 19:34:30.046456  437692 main.go:141] libmachine: (flannel-534248)     <interface type='network'>
	I1030 19:34:30.046464  437692 main.go:141] libmachine: (flannel-534248)       <source network='mk-flannel-534248'/>
	I1030 19:34:30.046481  437692 main.go:141] libmachine: (flannel-534248)       <model type='virtio'/>
	I1030 19:34:30.046514  437692 main.go:141] libmachine: (flannel-534248)     </interface>
	I1030 19:34:30.046522  437692 main.go:141] libmachine: (flannel-534248)     <interface type='network'>
	I1030 19:34:30.046531  437692 main.go:141] libmachine: (flannel-534248)       <source network='default'/>
	I1030 19:34:30.046541  437692 main.go:141] libmachine: (flannel-534248)       <model type='virtio'/>
	I1030 19:34:30.046548  437692 main.go:141] libmachine: (flannel-534248)     </interface>
	I1030 19:34:30.046558  437692 main.go:141] libmachine: (flannel-534248)     <serial type='pty'>
	I1030 19:34:30.046565  437692 main.go:141] libmachine: (flannel-534248)       <target port='0'/>
	I1030 19:34:30.046574  437692 main.go:141] libmachine: (flannel-534248)     </serial>
	I1030 19:34:30.046605  437692 main.go:141] libmachine: (flannel-534248)     <console type='pty'>
	I1030 19:34:30.046638  437692 main.go:141] libmachine: (flannel-534248)       <target type='serial' port='0'/>
	I1030 19:34:30.046650  437692 main.go:141] libmachine: (flannel-534248)     </console>
	I1030 19:34:30.046660  437692 main.go:141] libmachine: (flannel-534248)     <rng model='virtio'>
	I1030 19:34:30.046671  437692 main.go:141] libmachine: (flannel-534248)       <backend model='random'>/dev/random</backend>
	I1030 19:34:30.046680  437692 main.go:141] libmachine: (flannel-534248)     </rng>
	I1030 19:34:30.046687  437692 main.go:141] libmachine: (flannel-534248)     
	I1030 19:34:30.046693  437692 main.go:141] libmachine: (flannel-534248)     
	I1030 19:34:30.046703  437692 main.go:141] libmachine: (flannel-534248)   </devices>
	I1030 19:34:30.046711  437692 main.go:141] libmachine: (flannel-534248) </domain>
	I1030 19:34:30.046721  437692 main.go:141] libmachine: (flannel-534248) 
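
Once this domain XML is defined, libvirt assigns a MAC address to each <interface>, which is what the "has defined MAC address ... in network ..." debug lines just below report. The Go sketch that follows shows one way to read those MAC-to-network pairs back out of a defined domain's XML (for example the output of `virsh dumpxml flannel-534248` saved to a file) using encoding/xml. The struct shape and the domain.xml file name are assumptions made for illustration; this is not minikube's own code.

// domifmac.go - illustrative only: list MAC/network pairs from a defined
// libvirt domain's XML, mirroring the DBG lines in the log above.
package main

import (
	"encoding/xml"
	"fmt"
	"os"
)

// domain captures only the parts of the XML needed here.
type domain struct {
	Name       string `xml:"name"`
	Interfaces []struct {
		MAC struct {
			Address string `xml:"address,attr"`
		} `xml:"mac"`
		Source struct {
			Network string `xml:"network,attr"`
		} `xml:"source"`
	} `xml:"devices>interface"`
}

func main() {
	// e.g. the output of `virsh dumpxml flannel-534248` saved as domain.xml
	data, err := os.ReadFile("domain.xml")
	if err != nil {
		panic(err)
	}
	var d domain
	if err := xml.Unmarshal(data, &d); err != nil {
		panic(err)
	}
	for _, iface := range d.Interfaces {
		fmt.Printf("domain %s has MAC %s in network %s\n", d.Name, iface.MAC.Address, iface.Source.Network)
	}
}
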
	I1030 19:34:30.052721  437692 main.go:141] libmachine: (flannel-534248) DBG | domain flannel-534248 has defined MAC address 52:54:00:98:65:94 in network default
	I1030 19:34:30.054191  437692 main.go:141] libmachine: (flannel-534248) Ensuring networks are active...
	I1030 19:34:30.054215  437692 main.go:141] libmachine: (flannel-534248) DBG | domain flannel-534248 has defined MAC address 52:54:00:87:30:c1 in network mk-flannel-534248
	I1030 19:34:30.054961  437692 main.go:141] libmachine: (flannel-534248) Ensuring network default is active
	I1030 19:34:30.055373  437692 main.go:141] libmachine: (flannel-534248) Ensuring network mk-flannel-534248 is active
	I1030 19:34:30.056078  437692 main.go:141] libmachine: (flannel-534248) Getting domain xml...
	I1030 19:34:30.056982  437692 main.go:141] libmachine: (flannel-534248) Creating domain...
	I1030 19:34:31.465766  437692 main.go:141] libmachine: (flannel-534248) Waiting to get IP...
	I1030 19:34:31.466967  437692 main.go:141] libmachine: (flannel-534248) DBG | domain flannel-534248 has defined MAC address 52:54:00:87:30:c1 in network mk-flannel-534248
	I1030 19:34:31.467731  437692 main.go:141] libmachine: (flannel-534248) DBG | unable to find current IP address of domain flannel-534248 in network mk-flannel-534248
	I1030 19:34:31.467788  437692 main.go:141] libmachine: (flannel-534248) DBG | I1030 19:34:31.467709  437714 retry.go:31] will retry after 259.350607ms: waiting for machine to come up
	I1030 19:34:31.729279  437692 main.go:141] libmachine: (flannel-534248) DBG | domain flannel-534248 has defined MAC address 52:54:00:87:30:c1 in network mk-flannel-534248
	I1030 19:34:31.729887  437692 main.go:141] libmachine: (flannel-534248) DBG | unable to find current IP address of domain flannel-534248 in network mk-flannel-534248
	I1030 19:34:31.729920  437692 main.go:141] libmachine: (flannel-534248) DBG | I1030 19:34:31.729846  437714 retry.go:31] will retry after 316.265512ms: waiting for machine to come up
	I1030 19:34:32.047802  437692 main.go:141] libmachine: (flannel-534248) DBG | domain flannel-534248 has defined MAC address 52:54:00:87:30:c1 in network mk-flannel-534248
	I1030 19:34:32.048297  437692 main.go:141] libmachine: (flannel-534248) DBG | unable to find current IP address of domain flannel-534248 in network mk-flannel-534248
	I1030 19:34:32.048328  437692 main.go:141] libmachine: (flannel-534248) DBG | I1030 19:34:32.048251  437714 retry.go:31] will retry after 329.510598ms: waiting for machine to come up
	I1030 19:34:32.380063  437692 main.go:141] libmachine: (flannel-534248) DBG | domain flannel-534248 has defined MAC address 52:54:00:87:30:c1 in network mk-flannel-534248
	I1030 19:34:32.380669  437692 main.go:141] libmachine: (flannel-534248) DBG | unable to find current IP address of domain flannel-534248 in network mk-flannel-534248
	I1030 19:34:32.380698  437692 main.go:141] libmachine: (flannel-534248) DBG | I1030 19:34:32.380626  437714 retry.go:31] will retry after 579.541421ms: waiting for machine to come up
	I1030 19:34:32.961429  437692 main.go:141] libmachine: (flannel-534248) DBG | domain flannel-534248 has defined MAC address 52:54:00:87:30:c1 in network mk-flannel-534248
	I1030 19:34:32.962088  437692 main.go:141] libmachine: (flannel-534248) DBG | unable to find current IP address of domain flannel-534248 in network mk-flannel-534248
	I1030 19:34:32.962121  437692 main.go:141] libmachine: (flannel-534248) DBG | I1030 19:34:32.962051  437714 retry.go:31] will retry after 681.913437ms: waiting for machine to come up
	I1030 19:34:33.646383  437692 main.go:141] libmachine: (flannel-534248) DBG | domain flannel-534248 has defined MAC address 52:54:00:87:30:c1 in network mk-flannel-534248
	I1030 19:34:33.647078  437692 main.go:141] libmachine: (flannel-534248) DBG | unable to find current IP address of domain flannel-534248 in network mk-flannel-534248
	I1030 19:34:33.647112  437692 main.go:141] libmachine: (flannel-534248) DBG | I1030 19:34:33.647030  437714 retry.go:31] will retry after 634.053539ms: waiting for machine to come up
	I1030 19:34:34.282774  437692 main.go:141] libmachine: (flannel-534248) DBG | domain flannel-534248 has defined MAC address 52:54:00:87:30:c1 in network mk-flannel-534248
	I1030 19:34:34.283355  437692 main.go:141] libmachine: (flannel-534248) DBG | unable to find current IP address of domain flannel-534248 in network mk-flannel-534248
	I1030 19:34:34.283386  437692 main.go:141] libmachine: (flannel-534248) DBG | I1030 19:34:34.283320  437714 retry.go:31] will retry after 893.737048ms: waiting for machine to come up
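
The repeated "will retry after Xms: waiting for machine to come up" lines above show a randomized, gradually growing backoff while the driver polls for the new VM's DHCP lease. The sketch below is a generic Go version of that polling pattern; the waitFor helper, its parameters, and the delay schedule are illustrative assumptions, not minikube's actual retry helper.

// waitfor.go - generic illustration of a jittered, growing retry loop like the
// "will retry after Xms" sequence above. Not minikube's implementation.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor polls probe() until it succeeds or the timeout elapses, sleeping a
// randomized, slowly growing delay between attempts.
func waitFor(probe func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	base := 250 * time.Millisecond
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		if ip, err := probe(); err == nil {
			return ip, nil
		}
		delay := base + time.Duration(rand.Int63n(int64(base))) // jitter
		base += 100 * time.Millisecond                          // grow the base slowly
		fmt.Printf("attempt %d failed, retrying after %v\n", attempt, delay)
		time.Sleep(delay)
	}
	return "", errors.New("timed out waiting for machine to come up")
}

func main() {
	ip, err := waitFor(func() (string, error) {
		return "", errors.New("no DHCP lease yet") // stand-in for the real lookup
	}, 2*time.Second)
	fmt.Println(ip, err)
}
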
	I1030 19:34:33.574209  434579 pod_ready.go:103] pod "coredns-7c65d6cfc9-wn26s" in "kube-system" namespace has status "Ready":"False"
	I1030 19:34:36.071701  434579 pod_ready.go:103] pod "coredns-7c65d6cfc9-wn26s" in "kube-system" namespace has status "Ready":"False"
	I1030 19:34:32.976649  435835 crio.go:514] all images are preloaded for cri-o runtime.
	I1030 19:34:32.976679  435835 cache_images.go:84] Images are preloaded, skipping loading
	I1030 19:34:32.976690  435835 kubeadm.go:934] updating node { 192.168.50.90 8443 v1.31.2 crio true true} ...
	I1030 19:34:32.976811  435835 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-831845 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.90
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-831845 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1030 19:34:32.976883  435835 ssh_runner.go:195] Run: crio config
	I1030 19:34:33.038010  435835 cni.go:84] Creating CNI manager for ""
	I1030 19:34:33.038039  435835 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:34:33.038053  435835 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1030 19:34:33.038083  435835 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.90 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-831845 NodeName:kubernetes-upgrade-831845 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.90"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.90 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1030 19:34:33.038268  435835 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.90
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-831845"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.90"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.90"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1030 19:34:33.038345  435835 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1030 19:34:33.051899  435835 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 19:34:33.051977  435835 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1030 19:34:33.064075  435835 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I1030 19:34:33.087195  435835 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 19:34:33.108393  435835 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2302 bytes)
	I1030 19:34:33.128868  435835 ssh_runner.go:195] Run: grep 192.168.50.90	control-plane.minikube.internal$ /etc/hosts
	I1030 19:34:33.134168  435835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:34:33.297615  435835 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:34:33.313877  435835 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kubernetes-upgrade-831845 for IP: 192.168.50.90
	I1030 19:34:33.313902  435835 certs.go:194] generating shared ca certs ...
	I1030 19:34:33.313923  435835 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:34:33.314113  435835 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 19:34:33.314167  435835 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 19:34:33.314180  435835 certs.go:256] generating profile certs ...
	I1030 19:34:33.314317  435835 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kubernetes-upgrade-831845/client.key
	I1030 19:34:33.314380  435835 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kubernetes-upgrade-831845/apiserver.key.b0bb2018
	I1030 19:34:33.314424  435835 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kubernetes-upgrade-831845/proxy-client.key
	I1030 19:34:33.314602  435835 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem (1338 bytes)
	W1030 19:34:33.314646  435835 certs.go:480] ignoring /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144_empty.pem, impossibly tiny 0 bytes
	I1030 19:34:33.314660  435835 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 19:34:33.314695  435835 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 19:34:33.314729  435835 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 19:34:33.314760  435835 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 19:34:33.314821  435835 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:34:33.315734  435835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 19:34:33.345022  435835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 19:34:33.375661  435835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 19:34:33.408578  435835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 19:34:33.436028  435835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kubernetes-upgrade-831845/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1030 19:34:33.463976  435835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kubernetes-upgrade-831845/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1030 19:34:33.494477  435835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kubernetes-upgrade-831845/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 19:34:33.523815  435835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kubernetes-upgrade-831845/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1030 19:34:33.623193  435835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem --> /usr/share/ca-certificates/389144.pem (1338 bytes)
	I1030 19:34:33.930589  435835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /usr/share/ca-certificates/3891442.pem (1708 bytes)
	I1030 19:34:34.074370  435835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 19:34:34.186693  435835 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1030 19:34:34.236155  435835 ssh_runner.go:195] Run: openssl version
	I1030 19:34:34.263820  435835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/389144.pem && ln -fs /usr/share/ca-certificates/389144.pem /etc/ssl/certs/389144.pem"
	I1030 19:34:34.308537  435835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/389144.pem
	I1030 19:34:34.321476  435835 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 19:34:34.321562  435835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/389144.pem
	I1030 19:34:34.339178  435835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/389144.pem /etc/ssl/certs/51391683.0"
	I1030 19:34:34.356437  435835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3891442.pem && ln -fs /usr/share/ca-certificates/3891442.pem /etc/ssl/certs/3891442.pem"
	I1030 19:34:34.382584  435835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3891442.pem
	I1030 19:34:34.395885  435835 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 19:34:34.395968  435835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3891442.pem
	I1030 19:34:34.404604  435835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3891442.pem /etc/ssl/certs/3ec20f2e.0"
	I1030 19:34:34.422906  435835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 19:34:34.453621  435835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:34:34.468493  435835 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:34:34.468597  435835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:34:34.482659  435835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 19:34:34.500745  435835 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 19:34:34.509027  435835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1030 19:34:34.533555  435835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1030 19:34:34.580319  435835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1030 19:34:34.641961  435835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1030 19:34:34.660220  435835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1030 19:34:34.691910  435835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
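
Each `openssl x509 -noout -checkend 86400` call above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit would mark it for regeneration. A minimal Go equivalent of that check, using the standard crypto/x509 and encoding/pem packages, is sketched below. The certificate path is taken from the log; the program itself is illustrative and not part of minikube.

// checkend.go - illustrative equivalent of `openssl x509 -noout -checkend 86400`:
// report whether the certificate remains valid for at least another 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt") // path from the log
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(86400 * time.Second) // same window as -checkend 86400
	if cert.NotAfter.Before(deadline) {
		fmt.Println("certificate expires within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least another 24h")
}
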
	I1030 19:34:34.712151  435835 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-831845 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.2 ClusterName:kubernetes-upgrade-831845 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.90 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:34:34.712248  435835 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1030 19:34:34.712344  435835 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:34:34.826528  435835 cri.go:89] found id: "acceda3774af86b4098a6770a28fac0cc4a166a34465a77d45ecfaba3fd6c536"
	I1030 19:34:34.826624  435835 cri.go:89] found id: "e7f15816b2798ab5717d97c2236db9b7c4df12e47f606ed1499922825d1d01fa"
	I1030 19:34:34.826642  435835 cri.go:89] found id: "5fc1e3a14d090234fb26c972fe20a01ddd34cabe80d2926ef999901a18b49802"
	I1030 19:34:34.826657  435835 cri.go:89] found id: "656e523c36a3365fb1de076e8cf5fd4eedb05705ed200cbfd7796f69a8794268"
	I1030 19:34:34.826670  435835 cri.go:89] found id: "81760e8d1f38b2c2827eb8faddb7a9d595da87f7ff5755337e4062d1bbd404c6"
	I1030 19:34:34.826704  435835 cri.go:89] found id: "4bdf6f8e20ce9dee7fa0df28a8837c8c02dfa877220c8a10b2e8b789c5e2e137"
	I1030 19:34:34.826724  435835 cri.go:89] found id: "c3cb155cd6df80eceb9a1995ff79d6598434b9b699884e3fdc3dc404f2c6a809"
	I1030 19:34:34.826736  435835 cri.go:89] found id: "cbba80a8dbb06264a30f33c29d03821479ef0bd04cb79649b6e9cae788e46776"
	I1030 19:34:34.826754  435835 cri.go:89] found id: "528de8206d82a9dbb8df48b60a3f6c3274297de167ed048a7995fa61fd7f04ad"
	I1030 19:34:34.826789  435835 cri.go:89] found id: "63aa196c89d2a029b3e9e5348a1079c0ba2b7a74131b3406ebdb88ee143bc1ef"
	I1030 19:34:34.826808  435835 cri.go:89] found id: ""
	I1030 19:34:34.826893  435835 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-831845 -n kubernetes-upgrade-831845
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-831845 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-831845" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-831845
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-831845: (1.273560032s)
--- FAIL: TestKubernetesUpgrade (432.16s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (294.96s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-516975 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E1030 19:35:18.709502  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-516975 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m54.677321674s)

                                                
                                                
-- stdout --
	* [old-k8s-version-516975] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19883
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19883-381834/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19883-381834/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-516975" primary control-plane node in "old-k8s-version-516975" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1030 19:35:15.638325  439838 out.go:345] Setting OutFile to fd 1 ...
	I1030 19:35:15.638422  439838 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 19:35:15.638431  439838 out.go:358] Setting ErrFile to fd 2...
	I1030 19:35:15.638436  439838 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 19:35:15.638666  439838 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19883-381834/.minikube/bin
	I1030 19:35:15.639248  439838 out.go:352] Setting JSON to false
	I1030 19:35:15.640364  439838 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":11859,"bootTime":1730305057,"procs":299,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1030 19:35:15.640452  439838 start.go:139] virtualization: kvm guest
	I1030 19:35:15.642847  439838 out.go:177] * [old-k8s-version-516975] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1030 19:35:15.644062  439838 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 19:35:15.644072  439838 notify.go:220] Checking for updates...
	I1030 19:35:15.646729  439838 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 19:35:15.648043  439838 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 19:35:15.649411  439838 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 19:35:15.650584  439838 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1030 19:35:15.651840  439838 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 19:35:15.653555  439838 config.go:182] Loaded profile config "bridge-534248": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:35:15.653785  439838 config.go:182] Loaded profile config "enable-default-cni-534248": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:35:15.653896  439838 config.go:182] Loaded profile config "flannel-534248": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:35:15.654025  439838 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 19:35:16.269058  439838 out.go:177] * Using the kvm2 driver based on user configuration
	I1030 19:35:16.270498  439838 start.go:297] selected driver: kvm2
	I1030 19:35:16.270522  439838 start.go:901] validating driver "kvm2" against <nil>
	I1030 19:35:16.270540  439838 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 19:35:16.271554  439838 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 19:35:16.271701  439838 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19883-381834/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1030 19:35:16.288596  439838 install.go:137] /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1030 19:35:16.288657  439838 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1030 19:35:16.288890  439838 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 19:35:16.288923  439838 cni.go:84] Creating CNI manager for ""
	I1030 19:35:16.288971  439838 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:35:16.288979  439838 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1030 19:35:16.289056  439838 start.go:340] cluster config:
	{Name:old-k8s-version-516975 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-516975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:35:16.289235  439838 iso.go:125] acquiring lock: {Name:mk4a5115b605a7a5e9069193daa664d721189792 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 19:35:16.291053  439838 out.go:177] * Starting "old-k8s-version-516975" primary control-plane node in "old-k8s-version-516975" cluster
	I1030 19:35:16.292411  439838 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1030 19:35:16.292459  439838 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1030 19:35:16.292470  439838 cache.go:56] Caching tarball of preloaded images
	I1030 19:35:16.292562  439838 preload.go:172] Found /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1030 19:35:16.292576  439838 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1030 19:35:16.292704  439838 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/config.json ...
	I1030 19:35:16.292733  439838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/config.json: {Name:mk8c326ac9540faad5fde7ea1c1d47c4c46dd669 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:35:16.292911  439838 start.go:360] acquireMachinesLock for old-k8s-version-516975: {Name:mk35e25f53fa8cfadb39ca0ecdccfc2b3fbe845b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 19:35:37.601096  439838 start.go:364] duration metric: took 21.308134914s to acquireMachinesLock for "old-k8s-version-516975"
	I1030 19:35:37.601166  439838 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-516975 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-516975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 19:35:37.601280  439838 start.go:125] createHost starting for "" (driver="kvm2")
	I1030 19:35:37.603449  439838 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1030 19:35:37.603651  439838 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:35:37.603716  439838 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:35:37.623938  439838 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36483
	I1030 19:35:37.624391  439838 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:35:37.625043  439838 main.go:141] libmachine: Using API Version  1
	I1030 19:35:37.625072  439838 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:35:37.625526  439838 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:35:37.625755  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetMachineName
	I1030 19:35:37.625950  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:35:37.626129  439838 start.go:159] libmachine.API.Create for "old-k8s-version-516975" (driver="kvm2")
	I1030 19:35:37.626166  439838 client.go:168] LocalClient.Create starting
	I1030 19:35:37.626207  439838 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem
	I1030 19:35:37.626258  439838 main.go:141] libmachine: Decoding PEM data...
	I1030 19:35:37.626280  439838 main.go:141] libmachine: Parsing certificate...
	I1030 19:35:37.626362  439838 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem
	I1030 19:35:37.626392  439838 main.go:141] libmachine: Decoding PEM data...
	I1030 19:35:37.626413  439838 main.go:141] libmachine: Parsing certificate...
	I1030 19:35:37.626439  439838 main.go:141] libmachine: Running pre-create checks...
	I1030 19:35:37.626457  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .PreCreateCheck
	I1030 19:35:37.626865  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetConfigRaw
	I1030 19:35:37.628700  439838 main.go:141] libmachine: Creating machine...
	I1030 19:35:37.628719  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .Create
	I1030 19:35:37.628872  439838 main.go:141] libmachine: (old-k8s-version-516975) Creating KVM machine...
	I1030 19:35:37.630169  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | found existing default KVM network
	I1030 19:35:37.631948  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:35:37.631760  440189 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:9a:5e:f4} reservation:<nil>}
	I1030 19:35:37.633444  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:35:37.633349  440189 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002c2330}
	I1030 19:35:37.633471  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | created network xml: 
	I1030 19:35:37.633490  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | <network>
	I1030 19:35:37.633503  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG |   <name>mk-old-k8s-version-516975</name>
	I1030 19:35:37.633518  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG |   <dns enable='no'/>
	I1030 19:35:37.633528  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG |   
	I1030 19:35:37.633539  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I1030 19:35:37.633547  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG |     <dhcp>
	I1030 19:35:37.633570  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I1030 19:35:37.633599  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG |     </dhcp>
	I1030 19:35:37.633608  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG |   </ip>
	I1030 19:35:37.633614  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG |   
	I1030 19:35:37.633644  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | </network>
	I1030 19:35:37.633675  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | 
	I1030 19:35:37.639127  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | trying to create private KVM network mk-old-k8s-version-516975 192.168.50.0/24...
	I1030 19:35:37.728380  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | private KVM network mk-old-k8s-version-516975 192.168.50.0/24 created
	I1030 19:35:37.728416  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:35:37.728340  440189 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 19:35:37.728435  439838 main.go:141] libmachine: (old-k8s-version-516975) Setting up store path in /home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975 ...
	I1030 19:35:37.728451  439838 main.go:141] libmachine: (old-k8s-version-516975) Building disk image from file:///home/jenkins/minikube-integration/19883-381834/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1030 19:35:37.728574  439838 main.go:141] libmachine: (old-k8s-version-516975) Downloading /home/jenkins/minikube-integration/19883-381834/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19883-381834/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1030 19:35:38.042918  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:35:38.042773  440189 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa...
	I1030 19:35:38.246079  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:35:38.245567  440189 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/old-k8s-version-516975.rawdisk...
	I1030 19:35:38.246118  439838 main.go:141] libmachine: (old-k8s-version-516975) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975 (perms=drwx------)
	I1030 19:35:38.246132  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | Writing magic tar header
	I1030 19:35:38.246148  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | Writing SSH key tar header
	I1030 19:35:38.246171  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:35:38.245694  440189 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975 ...
	I1030 19:35:38.246191  439838 main.go:141] libmachine: (old-k8s-version-516975) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube/machines (perms=drwxr-xr-x)
	I1030 19:35:38.246206  439838 main.go:141] libmachine: (old-k8s-version-516975) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube (perms=drwxr-xr-x)
	I1030 19:35:38.246221  439838 main.go:141] libmachine: (old-k8s-version-516975) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834 (perms=drwxrwxr-x)
	I1030 19:35:38.246237  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975
	I1030 19:35:38.246251  439838 main.go:141] libmachine: (old-k8s-version-516975) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1030 19:35:38.246274  439838 main.go:141] libmachine: (old-k8s-version-516975) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1030 19:35:38.246289  439838 main.go:141] libmachine: (old-k8s-version-516975) Creating domain...
	I1030 19:35:38.246304  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube/machines
	I1030 19:35:38.246326  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 19:35:38.246340  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834
	I1030 19:35:38.246355  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1030 19:35:38.246367  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | Checking permissions on dir: /home/jenkins
	I1030 19:35:38.246380  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | Checking permissions on dir: /home
	I1030 19:35:38.246390  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | Skipping /home - not owner
	I1030 19:35:38.247076  439838 main.go:141] libmachine: (old-k8s-version-516975) define libvirt domain using xml: 
	I1030 19:35:38.247098  439838 main.go:141] libmachine: (old-k8s-version-516975) <domain type='kvm'>
	I1030 19:35:38.247105  439838 main.go:141] libmachine: (old-k8s-version-516975)   <name>old-k8s-version-516975</name>
	I1030 19:35:38.247110  439838 main.go:141] libmachine: (old-k8s-version-516975)   <memory unit='MiB'>2200</memory>
	I1030 19:35:38.247115  439838 main.go:141] libmachine: (old-k8s-version-516975)   <vcpu>2</vcpu>
	I1030 19:35:38.247126  439838 main.go:141] libmachine: (old-k8s-version-516975)   <features>
	I1030 19:35:38.247137  439838 main.go:141] libmachine: (old-k8s-version-516975)     <acpi/>
	I1030 19:35:38.247144  439838 main.go:141] libmachine: (old-k8s-version-516975)     <apic/>
	I1030 19:35:38.247159  439838 main.go:141] libmachine: (old-k8s-version-516975)     <pae/>
	I1030 19:35:38.247166  439838 main.go:141] libmachine: (old-k8s-version-516975)     
	I1030 19:35:38.247172  439838 main.go:141] libmachine: (old-k8s-version-516975)   </features>
	I1030 19:35:38.247178  439838 main.go:141] libmachine: (old-k8s-version-516975)   <cpu mode='host-passthrough'>
	I1030 19:35:38.247183  439838 main.go:141] libmachine: (old-k8s-version-516975)   
	I1030 19:35:38.247188  439838 main.go:141] libmachine: (old-k8s-version-516975)   </cpu>
	I1030 19:35:38.247193  439838 main.go:141] libmachine: (old-k8s-version-516975)   <os>
	I1030 19:35:38.247199  439838 main.go:141] libmachine: (old-k8s-version-516975)     <type>hvm</type>
	I1030 19:35:38.247227  439838 main.go:141] libmachine: (old-k8s-version-516975)     <boot dev='cdrom'/>
	I1030 19:35:38.247250  439838 main.go:141] libmachine: (old-k8s-version-516975)     <boot dev='hd'/>
	I1030 19:35:38.247279  439838 main.go:141] libmachine: (old-k8s-version-516975)     <bootmenu enable='no'/>
	I1030 19:35:38.247301  439838 main.go:141] libmachine: (old-k8s-version-516975)   </os>
	I1030 19:35:38.247312  439838 main.go:141] libmachine: (old-k8s-version-516975)   <devices>
	I1030 19:35:38.247321  439838 main.go:141] libmachine: (old-k8s-version-516975)     <disk type='file' device='cdrom'>
	I1030 19:35:38.247336  439838 main.go:141] libmachine: (old-k8s-version-516975)       <source file='/home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/boot2docker.iso'/>
	I1030 19:35:38.247345  439838 main.go:141] libmachine: (old-k8s-version-516975)       <target dev='hdc' bus='scsi'/>
	I1030 19:35:38.247357  439838 main.go:141] libmachine: (old-k8s-version-516975)       <readonly/>
	I1030 19:35:38.247363  439838 main.go:141] libmachine: (old-k8s-version-516975)     </disk>
	I1030 19:35:38.247374  439838 main.go:141] libmachine: (old-k8s-version-516975)     <disk type='file' device='disk'>
	I1030 19:35:38.247386  439838 main.go:141] libmachine: (old-k8s-version-516975)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1030 19:35:38.247414  439838 main.go:141] libmachine: (old-k8s-version-516975)       <source file='/home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/old-k8s-version-516975.rawdisk'/>
	I1030 19:35:38.247426  439838 main.go:141] libmachine: (old-k8s-version-516975)       <target dev='hda' bus='virtio'/>
	I1030 19:35:38.247433  439838 main.go:141] libmachine: (old-k8s-version-516975)     </disk>
	I1030 19:35:38.247445  439838 main.go:141] libmachine: (old-k8s-version-516975)     <interface type='network'>
	I1030 19:35:38.247456  439838 main.go:141] libmachine: (old-k8s-version-516975)       <source network='mk-old-k8s-version-516975'/>
	I1030 19:35:38.247462  439838 main.go:141] libmachine: (old-k8s-version-516975)       <model type='virtio'/>
	I1030 19:35:38.247478  439838 main.go:141] libmachine: (old-k8s-version-516975)     </interface>
	I1030 19:35:38.247493  439838 main.go:141] libmachine: (old-k8s-version-516975)     <interface type='network'>
	I1030 19:35:38.247500  439838 main.go:141] libmachine: (old-k8s-version-516975)       <source network='default'/>
	I1030 19:35:38.247507  439838 main.go:141] libmachine: (old-k8s-version-516975)       <model type='virtio'/>
	I1030 19:35:38.247515  439838 main.go:141] libmachine: (old-k8s-version-516975)     </interface>
	I1030 19:35:38.247522  439838 main.go:141] libmachine: (old-k8s-version-516975)     <serial type='pty'>
	I1030 19:35:38.247530  439838 main.go:141] libmachine: (old-k8s-version-516975)       <target port='0'/>
	I1030 19:35:38.247537  439838 main.go:141] libmachine: (old-k8s-version-516975)     </serial>
	I1030 19:35:38.247545  439838 main.go:141] libmachine: (old-k8s-version-516975)     <console type='pty'>
	I1030 19:35:38.247553  439838 main.go:141] libmachine: (old-k8s-version-516975)       <target type='serial' port='0'/>
	I1030 19:35:38.247561  439838 main.go:141] libmachine: (old-k8s-version-516975)     </console>
	I1030 19:35:38.247567  439838 main.go:141] libmachine: (old-k8s-version-516975)     <rng model='virtio'>
	I1030 19:35:38.247581  439838 main.go:141] libmachine: (old-k8s-version-516975)       <backend model='random'>/dev/random</backend>
	I1030 19:35:38.247588  439838 main.go:141] libmachine: (old-k8s-version-516975)     </rng>
	I1030 19:35:38.247596  439838 main.go:141] libmachine: (old-k8s-version-516975)     
	I1030 19:35:38.247602  439838 main.go:141] libmachine: (old-k8s-version-516975)     
	I1030 19:35:38.247609  439838 main.go:141] libmachine: (old-k8s-version-516975)   </devices>
	I1030 19:35:38.247615  439838 main.go:141] libmachine: (old-k8s-version-516975) </domain>
	I1030 19:35:38.247625  439838 main.go:141] libmachine: (old-k8s-version-516975) 
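The block above is the libvirt domain XML that minikube's KVM driver defines for this node (2200 MiB of memory, 2 vCPUs, the boot2docker ISO as a cdrom, a raw disk, and two virtio NICs). As an illustrative sketch only, not the driver's actual code path (the real driver talks to libvirt directly), the same definition could be applied by writing the XML to a file and shelling out to virsh; the file handling and function name here are assumptions:

```go
// sketch_define_domain.go - hedged sketch: define and start a libvirt domain
// from an XML description by shelling out to virsh. The real KVM driver uses
// the libvirt API; paths and names below are illustrative assumptions.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func defineAndStart(domainXML, name string) error {
	// Write the domain description to a temp file for `virsh define`.
	f, err := os.CreateTemp("", name+"-*.xml")
	if err != nil {
		return err
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(domainXML); err != nil {
		return err
	}
	f.Close()

	// `virsh define` registers the domain; `virsh start` boots it.
	for _, args := range [][]string{
		{"define", f.Name()},
		{"start", name},
	} {
		out, err := exec.Command("virsh", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("virsh %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	xml := "<domain type='kvm'>...</domain>" // placeholder; see the XML logged above
	if err := defineAndStart(xml, "old-k8s-version-516975"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```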
	I1030 19:35:38.255543  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:a7:36:f0 in network default
	I1030 19:35:38.256006  439838 main.go:141] libmachine: (old-k8s-version-516975) Ensuring networks are active...
	I1030 19:35:38.256026  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:35:38.256607  439838 main.go:141] libmachine: (old-k8s-version-516975) Ensuring network default is active
	I1030 19:35:38.256878  439838 main.go:141] libmachine: (old-k8s-version-516975) Ensuring network mk-old-k8s-version-516975 is active
	I1030 19:35:38.257429  439838 main.go:141] libmachine: (old-k8s-version-516975) Getting domain xml...
	I1030 19:35:38.257965  439838 main.go:141] libmachine: (old-k8s-version-516975) Creating domain...
	I1030 19:35:39.788080  439838 main.go:141] libmachine: (old-k8s-version-516975) Waiting to get IP...
	I1030 19:35:39.788834  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:35:39.789414  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:35:39.789449  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:35:39.789394  440189 retry.go:31] will retry after 274.773023ms: waiting for machine to come up
	I1030 19:35:40.066020  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:35:40.066647  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:35:40.066675  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:35:40.066553  440189 retry.go:31] will retry after 293.810971ms: waiting for machine to come up
	I1030 19:35:40.362066  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:35:40.362663  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:35:40.362695  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:35:40.362628  440189 retry.go:31] will retry after 446.673319ms: waiting for machine to come up
	I1030 19:35:40.811721  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:35:40.812730  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:35:40.812754  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:35:40.812688  440189 retry.go:31] will retry after 494.797712ms: waiting for machine to come up
	I1030 19:35:41.309398  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:35:41.309845  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:35:41.309875  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:35:41.309804  440189 retry.go:31] will retry after 536.248046ms: waiting for machine to come up
	I1030 19:35:41.847600  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:35:41.848101  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:35:41.848144  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:35:41.848052  440189 retry.go:31] will retry after 739.72635ms: waiting for machine to come up
	I1030 19:35:42.588752  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:35:42.589324  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:35:42.589355  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:35:42.589252  440189 retry.go:31] will retry after 887.271875ms: waiting for machine to come up
	I1030 19:35:43.477919  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:35:43.478433  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:35:43.478459  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:35:43.478388  440189 retry.go:31] will retry after 920.263906ms: waiting for machine to come up
	I1030 19:35:44.400126  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:35:44.400720  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:35:44.400747  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:35:44.400673  440189 retry.go:31] will retry after 1.325564734s: waiting for machine to come up
	I1030 19:35:45.727651  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:35:45.728256  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:35:45.728286  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:35:45.728203  440189 retry.go:31] will retry after 2.033436424s: waiting for machine to come up
	I1030 19:35:47.763449  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:35:47.763920  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:35:47.763948  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:35:47.763863  440189 retry.go:31] will retry after 1.903040044s: waiting for machine to come up
	I1030 19:35:49.668723  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:35:49.669494  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:35:49.669531  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:35:49.669419  440189 retry.go:31] will retry after 2.394921027s: waiting for machine to come up
	I1030 19:35:52.066287  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:35:52.066913  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:35:52.066943  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:35:52.066868  440189 retry.go:31] will retry after 3.619792866s: waiting for machine to come up
	I1030 19:35:55.690995  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:35:55.691521  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:35:55.691543  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:35:55.691481  440189 retry.go:31] will retry after 4.887719082s: waiting for machine to come up
	I1030 19:36:00.580609  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:36:00.581335  439838 main.go:141] libmachine: (old-k8s-version-516975) Found IP for machine: 192.168.50.250
	I1030 19:36:00.581361  439838 main.go:141] libmachine: (old-k8s-version-516975) Reserving static IP address...
	I1030 19:36:00.581374  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has current primary IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:36:00.581802  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-516975", mac: "52:54:00:46:32:46", ip: "192.168.50.250"} in network mk-old-k8s-version-516975
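The "will retry after …: waiting for machine to come up" lines above are a polling loop with growing delays (274ms, 293ms, 446ms, … up to several seconds) until a DHCP lease appears for the domain's MAC address. A minimal sketch of that pattern, with a hypothetical lookupIP callback standing in for the driver's lease query:

```go
// sketch_wait_for_ip.go - hedged sketch of a retry loop with growing, jittered
// delays, like the "waiting for machine to come up" lines above. lookupIP is a
// hypothetical stand-in for querying the DHCP lease of the domain's MAC.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		ip, err := lookupIP()
		if err == nil {
			return ip, nil
		}
		// Grow the delay and add jitter before the next attempt.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("attempt %d: %v, retrying after %v\n", attempt, err, sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", fmt.Errorf("timed out after %v waiting for an IP", timeout)
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 {
			return "", errNoLease
		}
		return "192.168.50.250", nil
	}, time.Minute)
	fmt.Println(ip, err)
}
```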
	I1030 19:36:00.663177  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | Getting to WaitForSSH function...
	I1030 19:36:00.663212  439838 main.go:141] libmachine: (old-k8s-version-516975) Reserved static IP address: 192.168.50.250
	I1030 19:36:00.663227  439838 main.go:141] libmachine: (old-k8s-version-516975) Waiting for SSH to be available...
	I1030 19:36:00.666581  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:36:00.666962  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:35:54 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:minikube Clientid:01:52:54:00:46:32:46}
	I1030 19:36:00.666986  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:36:00.667175  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | Using SSH client type: external
	I1030 19:36:00.667221  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa (-rw-------)
	I1030 19:36:00.667257  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.250 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 19:36:00.667276  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | About to run SSH command:
	I1030 19:36:00.667287  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | exit 0
	I1030 19:36:00.795400  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | SSH cmd err, output: <nil>: 
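The WaitForSSH lines above probe the guest by running a trivial `exit 0` over an external ssh client with host-key checking disabled, repeating until sshd answers. A sketch of that probe via os/exec, with the address and key path taken as placeholders from the log rather than real configuration:

```go
// sketch_ssh_probe.go - hedged sketch of the "exit 0" SSH liveness probe shown
// above, using the external ssh client with similar hardening flags. Address
// and key path are placeholders copied from the log, not real config.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func sshAlive(user, addr, keyPath string) error {
	args := []string{
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, addr),
		"exit 0",
	}
	return exec.Command("ssh", args...).Run()
}

func main() {
	key := "/home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa"
	for i := 0; i < 30; i++ {
		if err := sshAlive("docker", "192.168.50.250", key); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}
```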
	I1030 19:36:00.795998  439838 main.go:141] libmachine: (old-k8s-version-516975) KVM machine creation complete!
	I1030 19:36:00.796012  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetConfigRaw
	I1030 19:36:00.796706  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:36:00.796946  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:36:00.797156  439838 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1030 19:36:00.797175  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetState
	I1030 19:36:00.798798  439838 main.go:141] libmachine: Detecting operating system of created instance...
	I1030 19:36:00.798821  439838 main.go:141] libmachine: Waiting for SSH to be available...
	I1030 19:36:00.798829  439838 main.go:141] libmachine: Getting to WaitForSSH function...
	I1030 19:36:00.798837  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:36:00.801716  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:36:00.802120  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:35:54 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:36:00.802151  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:36:00.802318  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:36:00.802530  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:36:00.802703  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:36:00.802869  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:36:00.803053  439838 main.go:141] libmachine: Using SSH client type: native
	I1030 19:36:00.803351  439838 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I1030 19:36:00.803368  439838 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1030 19:36:00.910716  439838 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 19:36:00.910743  439838 main.go:141] libmachine: Detecting the provisioner...
	I1030 19:36:00.910755  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:36:00.914613  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:36:00.915103  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:35:54 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:36:00.915129  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:36:00.915475  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:36:00.915631  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:36:00.915782  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:36:00.915929  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:36:00.916103  439838 main.go:141] libmachine: Using SSH client type: native
	I1030 19:36:00.916357  439838 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I1030 19:36:00.916378  439838 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1030 19:36:01.040455  439838 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1030 19:36:01.040540  439838 main.go:141] libmachine: found compatible host: buildroot
	I1030 19:36:01.040552  439838 main.go:141] libmachine: Provisioning with buildroot...
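The `cat /etc/os-release` output above (NAME=Buildroot, VERSION_ID=2023.02.9) is how libmachine decides which provisioner to use. A small sketch that parses those KEY=VALUE pairs; the path is standard, but the selection logic below is a simplified assumption, not libmachine's actual detection table:

```go
// sketch_os_release.go - hedged sketch: parse /etc/os-release KEY=VALUE pairs
// (as printed above) to pick a provisioner. The buildroot check is a
// simplified assumption for illustration.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func parseOSRelease(path string) (map[string]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	kv := map[string]string{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		if parts := strings.SplitN(line, "=", 2); len(parts) == 2 {
			kv[parts[0]] = strings.Trim(parts[1], `"`)
		}
	}
	return kv, sc.Err()
}

func main() {
	kv, err := parseOSRelease("/etc/os-release")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if kv["ID"] == "buildroot" {
		fmt.Println("found compatible host: buildroot")
	} else {
		fmt.Printf("unrecognized host: %s %s\n", kv["NAME"], kv["VERSION_ID"])
	}
}
```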
	I1030 19:36:01.040559  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetMachineName
	I1030 19:36:01.040865  439838 buildroot.go:166] provisioning hostname "old-k8s-version-516975"
	I1030 19:36:01.040906  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetMachineName
	I1030 19:36:01.041659  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:36:01.044616  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:36:01.045130  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:35:54 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:36:01.045147  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:36:01.045414  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:36:01.045628  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:36:01.045793  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:36:01.045977  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:36:01.046151  439838 main.go:141] libmachine: Using SSH client type: native
	I1030 19:36:01.046373  439838 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I1030 19:36:01.046391  439838 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-516975 && echo "old-k8s-version-516975" | sudo tee /etc/hostname
	I1030 19:36:01.179015  439838 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-516975
	
	I1030 19:36:01.179066  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:36:01.182471  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:36:01.182882  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:35:54 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:36:01.182920  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:36:01.183091  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:36:01.183303  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:36:01.183497  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:36:01.183659  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:36:01.183819  439838 main.go:141] libmachine: Using SSH client type: native
	I1030 19:36:01.184013  439838 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I1030 19:36:01.184037  439838 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-516975' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-516975/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-516975' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 19:36:01.301309  439838 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 19:36:01.301346  439838 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 19:36:01.301383  439838 buildroot.go:174] setting up certificates
	I1030 19:36:01.301393  439838 provision.go:84] configureAuth start
	I1030 19:36:01.301404  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetMachineName
	I1030 19:36:01.301750  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetIP
	I1030 19:36:01.304513  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:36:01.304838  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:35:54 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:36:01.304891  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:36:01.304959  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:36:01.307385  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:36:01.307787  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:35:54 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:36:01.307815  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:36:01.307973  439838 provision.go:143] copyHostCerts
	I1030 19:36:01.308057  439838 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem, removing ...
	I1030 19:36:01.308071  439838 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 19:36:01.308134  439838 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 19:36:01.308258  439838 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem, removing ...
	I1030 19:36:01.308271  439838 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 19:36:01.308301  439838 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 19:36:01.308376  439838 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem, removing ...
	I1030 19:36:01.308386  439838 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 19:36:01.308413  439838 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 19:36:01.308492  439838 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-516975 san=[127.0.0.1 192.168.50.250 localhost minikube old-k8s-version-516975]
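The provision step above generates a server certificate whose SAN list covers 127.0.0.1, 192.168.50.250, localhost, minikube, and the machine name. A hedged sketch of producing a certificate with that SAN list using crypto/x509; it is self-signed here for brevity, whereas minikube actually signs with its cluster CA key:

```go
// sketch_server_cert.go - hedged sketch of generating a server certificate
// with the SAN list shown above. Self-signed for brevity; the real flow signs
// with the cluster CA found under .minikube/certs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-516975"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-516975"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.250")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Emit the certificate PEM; the private key would go to server-key.pem.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```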
	I1030 19:36:01.582147  439838 provision.go:177] copyRemoteCerts
	I1030 19:36:01.582208  439838 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 19:36:01.582237  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:36:01.585133  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:36:01.585521  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:35:54 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:36:01.585550  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:36:01.585755  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:36:01.586008  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:36:01.586199  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:36:01.586387  439838 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa Username:docker}
	I1030 19:36:01.665543  439838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 19:36:01.694922  439838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1030 19:36:01.722996  439838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1030 19:36:01.748044  439838 provision.go:87] duration metric: took 446.633797ms to configureAuth
	I1030 19:36:01.748087  439838 buildroot.go:189] setting minikube options for container-runtime
	I1030 19:36:01.748286  439838 config.go:182] Loaded profile config "old-k8s-version-516975": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1030 19:36:01.748382  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:36:01.751534  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:36:01.752006  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:35:54 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:36:01.752048  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:36:01.752158  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:36:01.752367  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:36:01.752558  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:36:01.752776  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:36:01.752981  439838 main.go:141] libmachine: Using SSH client type: native
	I1030 19:36:01.753200  439838 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I1030 19:36:01.753224  439838 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 19:36:02.003961  439838 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 19:36:02.003996  439838 main.go:141] libmachine: Checking connection to Docker...
	I1030 19:36:02.004025  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetURL
	I1030 19:36:02.005279  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | Using libvirt version 6000000
	I1030 19:36:02.008071  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:36:02.008459  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:35:54 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:36:02.008502  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:36:02.008756  439838 main.go:141] libmachine: Docker is up and running!
	I1030 19:36:02.008782  439838 main.go:141] libmachine: Reticulating splines...
	I1030 19:36:02.008791  439838 client.go:171] duration metric: took 24.382614714s to LocalClient.Create
	I1030 19:36:02.008817  439838 start.go:167] duration metric: took 24.38268969s to libmachine.API.Create "old-k8s-version-516975"
	I1030 19:36:02.008833  439838 start.go:293] postStartSetup for "old-k8s-version-516975" (driver="kvm2")
	I1030 19:36:02.008847  439838 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 19:36:02.008872  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:36:02.009154  439838 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 19:36:02.009184  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:36:02.011632  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:36:02.012016  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:35:54 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:36:02.012041  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:36:02.012228  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:36:02.012407  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:36:02.012541  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:36:02.012710  439838 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa Username:docker}
	I1030 19:36:02.093344  439838 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 19:36:02.097924  439838 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 19:36:02.097963  439838 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 19:36:02.098057  439838 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 19:36:02.098162  439838 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 19:36:02.098266  439838 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 19:36:02.107748  439838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:36:02.139011  439838 start.go:296] duration metric: took 130.163429ms for postStartSetup
	I1030 19:36:02.139106  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetConfigRaw
	I1030 19:36:02.139820  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetIP
	I1030 19:36:02.142574  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:36:02.143009  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:35:54 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:36:02.143049  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:36:02.143263  439838 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/config.json ...
	I1030 19:36:02.143441  439838 start.go:128] duration metric: took 24.542147727s to createHost
	I1030 19:36:02.143466  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:36:02.145627  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:36:02.145939  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:35:54 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:36:02.145968  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:36:02.146094  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:36:02.146314  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:36:02.146505  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:36:02.146675  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:36:02.146845  439838 main.go:141] libmachine: Using SSH client type: native
	I1030 19:36:02.147073  439838 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I1030 19:36:02.147092  439838 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 19:36:02.251502  439838 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730316962.229796007
	
	I1030 19:36:02.251525  439838 fix.go:216] guest clock: 1730316962.229796007
	I1030 19:36:02.251534  439838 fix.go:229] Guest: 2024-10-30 19:36:02.229796007 +0000 UTC Remote: 2024-10-30 19:36:02.143453382 +0000 UTC m=+46.546864691 (delta=86.342625ms)
	I1030 19:36:02.251578  439838 fix.go:200] guest clock delta is within tolerance: 86.342625ms
	I1030 19:36:02.251583  439838 start.go:83] releasing machines lock for "old-k8s-version-516975", held for 24.650457147s
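The fix.go lines above read the guest clock with `date +%s.%N`, compare it against the host clock, and accept the machine because the 86ms delta is inside tolerance. A sketch of that comparison; the 2s tolerance below is an assumed value for illustration, not minikube's constant:

```go
// sketch_clock_delta.go - hedged sketch of the guest-clock check above: parse
// `date +%s.%N` output and compare it to the local clock. The 2s tolerance is
// an assumption for illustration.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func clockDelta(guestOut string) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Pad/truncate the fractional part to 9 digits before parsing.
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Since(time.Unix(sec, nsec)), nil
}

func main() {
	delta, err := clockDelta("1730316962.229796007")
	if err != nil {
		fmt.Println(err)
		return
	}
	const tolerance = 2 * time.Second // assumed tolerance
	if delta < tolerance && delta > -tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
	}
}
```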
	I1030 19:36:02.251609  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:36:02.251939  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetIP
	I1030 19:36:02.255220  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:36:02.255675  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:35:54 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:36:02.255708  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:36:02.255882  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:36:02.256409  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:36:02.256614  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:36:02.256700  439838 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 19:36:02.256746  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:36:02.256871  439838 ssh_runner.go:195] Run: cat /version.json
	I1030 19:36:02.256906  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:36:02.259936  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:36:02.260340  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:35:54 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:36:02.260668  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:36:02.260751  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:36:02.260911  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:36:02.261466  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:36:02.261506  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:36:02.261693  439838 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa Username:docker}
	I1030 19:36:02.262107  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:35:54 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:36:02.262130  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:36:02.262318  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:36:02.262522  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:36:02.262677  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:36:02.262848  439838 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa Username:docker}
	I1030 19:36:02.363050  439838 ssh_runner.go:195] Run: systemctl --version
	I1030 19:36:02.372336  439838 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 19:36:02.546210  439838 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 19:36:02.555272  439838 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 19:36:02.555375  439838 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 19:36:02.580485  439838 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 19:36:02.580513  439838 start.go:495] detecting cgroup driver to use...
	I1030 19:36:02.580613  439838 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 19:36:02.601057  439838 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 19:36:02.616657  439838 docker.go:217] disabling cri-docker service (if available) ...
	I1030 19:36:02.616735  439838 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 19:36:02.633728  439838 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 19:36:02.654203  439838 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 19:36:02.791576  439838 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 19:36:02.941502  439838 docker.go:233] disabling docker service ...
	I1030 19:36:02.941577  439838 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 19:36:02.959930  439838 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 19:36:02.972421  439838 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 19:36:03.122326  439838 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 19:36:03.253467  439838 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 19:36:03.267673  439838 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 19:36:03.285897  439838 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1030 19:36:03.285972  439838 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:36:03.296289  439838 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 19:36:03.296361  439838 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:36:03.306903  439838 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:36:03.316933  439838 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
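The sed one-liners above set `pause_image` and `cgroup_manager` in the CRI-O drop-in config /etc/crio/crio.conf.d/02-crio.conf. A minimal sketch of the same key rewrite done natively in Go instead of via sed; the config path comes from the log, while the helper name is hypothetical:

```go
// sketch_crio_conf.go - hedged sketch: rewrite a `key = "value"` line in a
// CRI-O drop-in config, equivalent in spirit to the sed one-liners above.
// setConfValue is a hypothetical helper, not a minikube function.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func setConfValue(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	// Replace any existing assignment of the key with the new value.
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	replacement := fmt.Sprintf("%s = %q", key, value)
	return os.WriteFile(path, re.ReplaceAll(data, []byte(replacement)), 0o644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	if err := setConfValue(conf, "pause_image", "registry.k8s.io/pause:3.2"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	if err := setConfValue(conf, "cgroup_manager", "cgroupfs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```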
	I1030 19:36:03.326975  439838 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 19:36:03.336925  439838 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 19:36:03.346041  439838 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 19:36:03.346108  439838 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 19:36:03.358313  439838 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 19:36:03.369461  439838 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:36:03.489523  439838 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1030 19:36:03.587397  439838 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 19:36:03.587471  439838 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 19:36:03.592942  439838 start.go:563] Will wait 60s for crictl version
	I1030 19:36:03.593001  439838 ssh_runner.go:195] Run: which crictl
	I1030 19:36:03.596857  439838 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 19:36:03.643831  439838 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
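"Will wait 60s for socket path /var/run/crio/crio.sock" followed by the crictl version output above is a readiness poll after restarting CRI-O. A sketch of that wait using os.Stat with a deadline; the path and timeout are taken from the log lines:

```go
// sketch_wait_socket.go - hedged sketch of the "Will wait 60s for socket path"
// step above: poll for the CRI-O socket file until it exists or the deadline
// passes. Path and timeout come from the log.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("crio socket is ready")
}
```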
	I1030 19:36:03.643911  439838 ssh_runner.go:195] Run: crio --version
	I1030 19:36:03.671627  439838 ssh_runner.go:195] Run: crio --version
	I1030 19:36:03.706832  439838 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1030 19:36:03.708189  439838 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetIP
	I1030 19:36:03.713773  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:36:03.714201  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:35:54 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:36:03.714228  439838 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:36:03.714471  439838 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1030 19:36:03.719837  439838 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:36:03.735218  439838 kubeadm.go:883] updating cluster {Name:old-k8s-version-516975 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-516975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1030 19:36:03.735362  439838 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1030 19:36:03.735429  439838 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:36:03.767382  439838 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1030 19:36:03.767447  439838 ssh_runner.go:195] Run: which lz4
	I1030 19:36:03.771566  439838 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1030 19:36:03.775596  439838 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1030 19:36:03.775627  439838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1030 19:36:05.420535  439838 crio.go:462] duration metric: took 1.649025845s to copy over tarball
	I1030 19:36:05.420619  439838 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1030 19:36:08.241221  439838 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.820572155s)
	I1030 19:36:08.241251  439838 crio.go:469] duration metric: took 2.820681575s to extract the tarball
	I1030 19:36:08.241261  439838 ssh_runner.go:146] rm: /preloaded.tar.lz4
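The block above is the preload fallback: crictl did not report the expected v1.20.0 control-plane images, so the cached tarball was copied onto the node, unpacked into /var with an lz4-aware tar, and then removed. Reduced to the on-node commands from this log (a sketch only; the host-to-node copy itself happens over scp and is omitted):

	# unpack the preload into the CRI-O storage under /var only if the images are missing
	if ! sudo crictl images --output json | grep -q 'registry.k8s.io/kube-apiserver:v1.20.0'; then
	  sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	  sudo rm /preloaded.tar.lz4
	fi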
	I1030 19:36:08.293404  439838 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:36:08.344077  439838 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1030 19:36:08.344112  439838 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1030 19:36:08.344171  439838 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:36:08.344203  439838 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:36:08.344214  439838 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:36:08.344230  439838 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:36:08.344252  439838 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1030 19:36:08.344272  439838 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1030 19:36:08.344272  439838 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:36:08.344278  439838 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1030 19:36:08.346028  439838 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1030 19:36:08.346060  439838 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:36:08.346061  439838 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:36:08.346025  439838 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:36:08.346086  439838 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:36:08.346027  439838 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:36:08.346027  439838 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1030 19:36:08.346027  439838 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1030 19:36:08.504175  439838 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:36:08.506133  439838 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1030 19:36:08.509767  439838 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:36:08.512018  439838 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1030 19:36:08.514729  439838 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:36:08.518549  439838 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:36:08.552583  439838 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1030 19:36:08.664989  439838 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1030 19:36:08.665043  439838 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:36:08.665094  439838 ssh_runner.go:195] Run: which crictl
	I1030 19:36:08.689310  439838 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1030 19:36:08.689361  439838 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1030 19:36:08.689409  439838 ssh_runner.go:195] Run: which crictl
	I1030 19:36:08.729591  439838 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1030 19:36:08.729644  439838 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:36:08.729694  439838 ssh_runner.go:195] Run: which crictl
	I1030 19:36:08.731419  439838 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1030 19:36:08.731462  439838 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1030 19:36:08.731501  439838 ssh_runner.go:195] Run: which crictl
	I1030 19:36:08.731609  439838 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1030 19:36:08.731630  439838 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:36:08.731655  439838 ssh_runner.go:195] Run: which crictl
	I1030 19:36:08.731718  439838 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1030 19:36:08.731736  439838 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:36:08.731759  439838 ssh_runner.go:195] Run: which crictl
	I1030 19:36:08.751022  439838 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1030 19:36:08.751069  439838 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1030 19:36:08.751086  439838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:36:08.751124  439838 ssh_runner.go:195] Run: which crictl
	I1030 19:36:08.751185  439838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1030 19:36:08.751216  439838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:36:08.751252  439838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1030 19:36:08.751281  439838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:36:08.751297  439838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:36:08.889113  439838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1030 19:36:08.889168  439838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:36:08.889229  439838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1030 19:36:08.916128  439838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:36:08.916259  439838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:36:08.916358  439838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1030 19:36:08.917857  439838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:36:09.041834  439838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1030 19:36:09.041904  439838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:36:09.041964  439838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1030 19:36:09.115728  439838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:36:09.115793  439838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:36:09.119357  439838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1030 19:36:09.119388  439838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:36:09.198530  439838 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1030 19:36:09.198648  439838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1030 19:36:09.198773  439838 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1030 19:36:09.284818  439838 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1030 19:36:09.284892  439838 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1030 19:36:09.300345  439838 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1030 19:36:09.300420  439838 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1030 19:36:09.300470  439838 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1030 19:36:10.639718  439838 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:36:10.805428  439838 cache_images.go:92] duration metric: took 2.461292123s to LoadCachedImages
	W1030 19:36:10.805526  439838 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
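With neither the preload nor the per-image cache available, the control-plane images end up being pulled by kubeadm during preflight, as seen further down in this log. They can also be pre-pulled explicitly; an illustrative example (not something this run executes), using the same pinned binaries and CRI-O socket:

	# pre-pull the v1.20.0 control-plane images through CRI-O instead of relying on cache/preload
	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	  kubeadm config images pull --kubernetes-version v1.20.0 --cri-socket /var/run/crio/crio.sock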
	I1030 19:36:10.805544  439838 kubeadm.go:934] updating node { 192.168.50.250 8443 v1.20.0 crio true true} ...
	I1030 19:36:10.805670  439838 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-516975 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-516975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
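The unit drop-in above pins the kubelet to CRI-O's socket, the node IP 192.168.50.250, and the pinned v1.20.0 binary. Once the drop-in and service file are written to the node, the log below reloads systemd and starts the kubelet; the equivalent manual steps, plus a check that the drop-in is actually in effect, would look like this (illustrative only):

	sudo systemctl daemon-reload
	sudo systemctl start kubelet
	systemctl cat kubelet --no-pager   # confirm the ExecStart from 10-kubeadm.conf is active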
	I1030 19:36:10.805756  439838 ssh_runner.go:195] Run: crio config
	I1030 19:36:10.871933  439838 cni.go:84] Creating CNI manager for ""
	I1030 19:36:10.871961  439838 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:36:10.871970  439838 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1030 19:36:10.871989  439838 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.250 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-516975 NodeName:old-k8s-version-516975 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.250"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.250 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1030 19:36:10.872130  439838 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.250
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-516975"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.250
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.250"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1030 19:36:10.872203  439838 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1030 19:36:10.883670  439838 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 19:36:10.883748  439838 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1030 19:36:10.897944  439838 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1030 19:36:10.922986  439838 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 19:36:10.953281  439838 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
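The kubeadm.yaml.new just copied bundles the four documents shown above: kubeadm's v1beta2 InitConfiguration (node registration and API endpoint) and ClusterConfiguration (component extraArgs, etcd, networking), plus KubeletConfiguration and KubeProxyConfiguration. A config like this can be parsed and exercised without touching node state via kubeadm's dry-run mode; an illustrative invocation with the same pinned binaries (not part of this run):

	# exercise the generated config end-to-end without creating any cluster state
	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run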
	I1030 19:36:10.971018  439838 ssh_runner.go:195] Run: grep 192.168.50.250	control-plane.minikube.internal$ /etc/hosts
	I1030 19:36:10.975008  439838 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.250	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:36:10.989079  439838 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:36:11.120942  439838 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:36:11.142898  439838 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975 for IP: 192.168.50.250
	I1030 19:36:11.142925  439838 certs.go:194] generating shared ca certs ...
	I1030 19:36:11.142948  439838 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:36:11.143119  439838 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 19:36:11.143178  439838 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 19:36:11.143192  439838 certs.go:256] generating profile certs ...
	I1030 19:36:11.143277  439838 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/client.key
	I1030 19:36:11.143299  439838 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/client.crt with IP's: []
	I1030 19:36:11.318310  439838 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/client.crt ...
	I1030 19:36:11.318368  439838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/client.crt: {Name:mk543c4f835106a367e63d380c1773dfd7baf1b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:36:11.335585  439838 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/client.key ...
	I1030 19:36:11.335633  439838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/client.key: {Name:mk6d4dfa59dcc482f27f74bd5159fa35c86d4564 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:36:11.335843  439838 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/apiserver.key.685bdf3e
	I1030 19:36:11.335884  439838 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/apiserver.crt.685bdf3e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.250]
	I1030 19:36:11.417098  439838 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/apiserver.crt.685bdf3e ...
	I1030 19:36:11.417127  439838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/apiserver.crt.685bdf3e: {Name:mk1bc225a2351f0bf76ad2d8ecef72116018e9e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:36:11.417313  439838 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/apiserver.key.685bdf3e ...
	I1030 19:36:11.417334  439838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/apiserver.key.685bdf3e: {Name:mk8077c5c382715017dd5f12dc859c56ed703302 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:36:11.417441  439838 certs.go:381] copying /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/apiserver.crt.685bdf3e -> /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/apiserver.crt
	I1030 19:36:11.417541  439838 certs.go:385] copying /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/apiserver.key.685bdf3e -> /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/apiserver.key
	I1030 19:36:11.417600  439838 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/proxy-client.key
	I1030 19:36:11.417622  439838 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/proxy-client.crt with IP's: []
	I1030 19:36:11.556319  439838 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/proxy-client.crt ...
	I1030 19:36:11.556357  439838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/proxy-client.crt: {Name:mke1336e0574991778daa7e2956d705ceb1a336b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:36:11.556593  439838 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/proxy-client.key ...
	I1030 19:36:11.556618  439838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/proxy-client.key: {Name:mk38123e69d3d5eefecb2c0498b88773d18e66c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:36:11.556917  439838 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem (1338 bytes)
	W1030 19:36:11.556971  439838 certs.go:480] ignoring /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144_empty.pem, impossibly tiny 0 bytes
	I1030 19:36:11.556986  439838 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 19:36:11.557024  439838 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 19:36:11.557077  439838 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 19:36:11.557119  439838 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 19:36:11.557180  439838 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:36:11.558091  439838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 19:36:11.592787  439838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 19:36:11.624962  439838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 19:36:11.655280  439838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 19:36:11.685772  439838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1030 19:36:11.711587  439838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1030 19:36:11.737253  439838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 19:36:11.767884  439838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1030 19:36:11.802294  439838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem --> /usr/share/ca-certificates/389144.pem (1338 bytes)
	I1030 19:36:11.835414  439838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /usr/share/ca-certificates/3891442.pem (1708 bytes)
	I1030 19:36:11.868257  439838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 19:36:11.899972  439838 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1030 19:36:11.922268  439838 ssh_runner.go:195] Run: openssl version
	I1030 19:36:11.930390  439838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/389144.pem && ln -fs /usr/share/ca-certificates/389144.pem /etc/ssl/certs/389144.pem"
	I1030 19:36:11.944925  439838 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/389144.pem
	I1030 19:36:11.949772  439838 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 19:36:11.949825  439838 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/389144.pem
	I1030 19:36:11.956051  439838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/389144.pem /etc/ssl/certs/51391683.0"
	I1030 19:36:11.966403  439838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3891442.pem && ln -fs /usr/share/ca-certificates/3891442.pem /etc/ssl/certs/3891442.pem"
	I1030 19:36:11.977621  439838 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3891442.pem
	I1030 19:36:11.982420  439838 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 19:36:11.982494  439838 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3891442.pem
	I1030 19:36:11.990537  439838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3891442.pem /etc/ssl/certs/3ec20f2e.0"
	I1030 19:36:12.003771  439838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 19:36:12.016314  439838 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:36:12.021399  439838 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:36:12.021468  439838 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:36:12.028405  439838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
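All three certificate blocks above follow the OpenSSL hashed-directory convention: a CA placed under /usr/share/ca-certificates is linked into /etc/ssl/certs both under its own name and under its subject hash with a .0 suffix (b5213941.0 for minikubeCA.pem here), which is the name TLS libraries actually look up. The hash in the link name comes straight from openssl, for example:

	# compute the subject hash and create the hashed symlink for the minikube CA
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"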
	I1030 19:36:12.040789  439838 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 19:36:12.046363  439838 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1030 19:36:12.046460  439838 kubeadm.go:392] StartCluster: {Name:old-k8s-version-516975 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-516975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:36:12.046580  439838 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1030 19:36:12.046630  439838 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:36:12.094995  439838 cri.go:89] found id: ""
	I1030 19:36:12.095089  439838 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1030 19:36:12.110962  439838 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 19:36:12.122597  439838 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:36:12.138606  439838 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:36:12.138629  439838 kubeadm.go:157] found existing configuration files:
	
	I1030 19:36:12.138686  439838 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 19:36:12.156251  439838 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:36:12.156320  439838 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:36:12.177321  439838 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 19:36:12.200704  439838 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:36:12.200762  439838 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:36:12.217757  439838 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 19:36:12.238381  439838 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:36:12.238436  439838 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:36:12.250067  439838 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 19:36:12.261453  439838 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:36:12.261510  439838 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1030 19:36:12.273410  439838 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1030 19:36:12.449570  439838 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1030 19:36:12.449654  439838 kubeadm.go:310] [preflight] Running pre-flight checks
	I1030 19:36:12.662717  439838 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1030 19:36:12.662864  439838 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1030 19:36:12.662990  439838 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1030 19:36:12.914336  439838 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1030 19:36:12.919701  439838 out.go:235]   - Generating certificates and keys ...
	I1030 19:36:12.919813  439838 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1030 19:36:12.919900  439838 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1030 19:36:13.340990  439838 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1030 19:36:13.510676  439838 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1030 19:36:13.946673  439838 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1030 19:36:14.208097  439838 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1030 19:36:14.412497  439838 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1030 19:36:14.413941  439838 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-516975] and IPs [192.168.50.250 127.0.0.1 ::1]
	I1030 19:36:14.663695  439838 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1030 19:36:14.663910  439838 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-516975] and IPs [192.168.50.250 127.0.0.1 ::1]
	I1030 19:36:14.839299  439838 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1030 19:36:14.885605  439838 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1030 19:36:15.094693  439838 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1030 19:36:15.094799  439838 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1030 19:36:15.312844  439838 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1030 19:36:15.457353  439838 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1030 19:36:15.570640  439838 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1030 19:36:16.010629  439838 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1030 19:36:16.048792  439838 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1030 19:36:16.052437  439838 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1030 19:36:16.052500  439838 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1030 19:36:16.245033  439838 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1030 19:36:16.246578  439838 out.go:235]   - Booting up control plane ...
	I1030 19:36:16.246669  439838 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1030 19:36:16.255368  439838 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1030 19:36:16.257867  439838 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1030 19:36:16.261222  439838 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1030 19:36:16.276746  439838 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1030 19:36:56.277792  439838 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1030 19:36:56.278076  439838 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:36:56.278259  439838 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:37:01.279198  439838 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:37:01.279470  439838 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:37:11.279522  439838 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:37:11.279927  439838 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:37:31.281041  439838 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:37:31.281293  439838 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:38:11.280497  439838 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:38:11.280800  439838 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:38:11.280826  439838 kubeadm.go:310] 
	I1030 19:38:11.280883  439838 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1030 19:38:11.280955  439838 kubeadm.go:310] 		timed out waiting for the condition
	I1030 19:38:11.280972  439838 kubeadm.go:310] 
	I1030 19:38:11.281027  439838 kubeadm.go:310] 	This error is likely caused by:
	I1030 19:38:11.281084  439838 kubeadm.go:310] 		- The kubelet is not running
	I1030 19:38:11.281250  439838 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1030 19:38:11.281268  439838 kubeadm.go:310] 
	I1030 19:38:11.281420  439838 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1030 19:38:11.281487  439838 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1030 19:38:11.281553  439838 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1030 19:38:11.281564  439838 kubeadm.go:310] 
	I1030 19:38:11.281724  439838 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1030 19:38:11.281832  439838 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1030 19:38:11.281850  439838 kubeadm.go:310] 
	I1030 19:38:11.282005  439838 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1030 19:38:11.282146  439838 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1030 19:38:11.282249  439838 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1030 19:38:11.282349  439838 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1030 19:38:11.282362  439838 kubeadm.go:310] 
	I1030 19:38:11.282896  439838 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1030 19:38:11.283009  439838 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1030 19:38:11.283114  439838 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1030 19:38:11.283276  439838 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-516975] and IPs [192.168.50.250 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-516975] and IPs [192.168.50.250 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-516975] and IPs [192.168.50.250 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-516975] and IPs [192.168.50.250 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
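The failure here is that the kubelet never answers its health endpoint on 127.0.0.1:10248, so kubeadm gives up after the 4m0s wait-control-plane timeout. The diagnostics kubeadm suggests reduce to a few commands on the node, shown here against CRI-O to match this run (the container ID in the last step is a placeholder):

	systemctl status kubelet --no-pager
	journalctl -xeu kubelet --no-pager
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# then inspect the logs of whichever container is failing:
	# sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID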
	
	I1030 19:38:11.283329  439838 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1030 19:38:12.824857  439838 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.541487883s)
	I1030 19:38:12.824949  439838 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 19:38:12.839032  439838 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:38:12.848831  439838 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:38:12.848852  439838 kubeadm.go:157] found existing configuration files:
	
	I1030 19:38:12.848907  439838 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 19:38:12.858265  439838 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:38:12.858322  439838 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:38:12.867877  439838 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 19:38:12.877048  439838 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:38:12.877108  439838 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:38:12.886220  439838 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 19:38:12.895101  439838 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:38:12.895150  439838 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:38:12.904160  439838 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 19:38:12.912566  439838 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:38:12.912610  439838 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1030 19:38:12.921555  439838 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1030 19:38:13.135602  439838 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1030 19:40:09.634110  439838 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1030 19:40:09.634191  439838 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1030 19:40:09.635883  439838 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1030 19:40:09.635925  439838 kubeadm.go:310] [preflight] Running pre-flight checks
	I1030 19:40:09.635989  439838 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1030 19:40:09.636087  439838 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1030 19:40:09.636186  439838 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1030 19:40:09.636256  439838 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1030 19:40:09.640237  439838 out.go:235]   - Generating certificates and keys ...
	I1030 19:40:09.640317  439838 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1030 19:40:09.640406  439838 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1030 19:40:09.640532  439838 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1030 19:40:09.640644  439838 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1030 19:40:09.640758  439838 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1030 19:40:09.640842  439838 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1030 19:40:09.640934  439838 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1030 19:40:09.640986  439838 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1030 19:40:09.641046  439838 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1030 19:40:09.641119  439838 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1030 19:40:09.641152  439838 kubeadm.go:310] [certs] Using the existing "sa" key
	I1030 19:40:09.641196  439838 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1030 19:40:09.641240  439838 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1030 19:40:09.641283  439838 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1030 19:40:09.641338  439838 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1030 19:40:09.641420  439838 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1030 19:40:09.641525  439838 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1030 19:40:09.641596  439838 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1030 19:40:09.641666  439838 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1030 19:40:09.641763  439838 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1030 19:40:09.644338  439838 out.go:235]   - Booting up control plane ...
	I1030 19:40:09.644446  439838 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1030 19:40:09.644517  439838 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1030 19:40:09.644573  439838 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1030 19:40:09.644647  439838 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1030 19:40:09.644826  439838 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1030 19:40:09.644876  439838 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1030 19:40:09.644936  439838 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:40:09.645092  439838 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:40:09.645159  439838 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:40:09.645332  439838 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:40:09.645392  439838 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:40:09.645559  439838 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:40:09.645624  439838 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:40:09.645783  439838 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:40:09.645842  439838 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:40:09.645991  439838 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:40:09.646001  439838 kubeadm.go:310] 
	I1030 19:40:09.646039  439838 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1030 19:40:09.646074  439838 kubeadm.go:310] 		timed out waiting for the condition
	I1030 19:40:09.646080  439838 kubeadm.go:310] 
	I1030 19:40:09.646109  439838 kubeadm.go:310] 	This error is likely caused by:
	I1030 19:40:09.646150  439838 kubeadm.go:310] 		- The kubelet is not running
	I1030 19:40:09.646243  439838 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1030 19:40:09.646249  439838 kubeadm.go:310] 
	I1030 19:40:09.646338  439838 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1030 19:40:09.646378  439838 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1030 19:40:09.646407  439838 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1030 19:40:09.646414  439838 kubeadm.go:310] 
	I1030 19:40:09.646518  439838 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1030 19:40:09.646591  439838 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1030 19:40:09.646599  439838 kubeadm.go:310] 
	I1030 19:40:09.646689  439838 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1030 19:40:09.646762  439838 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1030 19:40:09.646825  439838 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1030 19:40:09.646888  439838 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1030 19:40:09.646949  439838 kubeadm.go:310] 
	I1030 19:40:09.646953  439838 kubeadm.go:394] duration metric: took 3m57.600498342s to StartCluster
	I1030 19:40:09.647007  439838 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:40:09.647058  439838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:40:09.689369  439838 cri.go:89] found id: ""
	I1030 19:40:09.689401  439838 logs.go:282] 0 containers: []
	W1030 19:40:09.689411  439838 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:40:09.689420  439838 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:40:09.689490  439838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:40:09.724067  439838 cri.go:89] found id: ""
	I1030 19:40:09.724097  439838 logs.go:282] 0 containers: []
	W1030 19:40:09.724106  439838 logs.go:284] No container was found matching "etcd"
	I1030 19:40:09.724113  439838 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:40:09.724175  439838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:40:09.757223  439838 cri.go:89] found id: ""
	I1030 19:40:09.757258  439838 logs.go:282] 0 containers: []
	W1030 19:40:09.757270  439838 logs.go:284] No container was found matching "coredns"
	I1030 19:40:09.757279  439838 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:40:09.757345  439838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:40:09.793239  439838 cri.go:89] found id: ""
	I1030 19:40:09.793273  439838 logs.go:282] 0 containers: []
	W1030 19:40:09.793285  439838 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:40:09.793294  439838 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:40:09.793375  439838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:40:09.824784  439838 cri.go:89] found id: ""
	I1030 19:40:09.824819  439838 logs.go:282] 0 containers: []
	W1030 19:40:09.824831  439838 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:40:09.824839  439838 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:40:09.824898  439838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:40:09.856900  439838 cri.go:89] found id: ""
	I1030 19:40:09.856930  439838 logs.go:282] 0 containers: []
	W1030 19:40:09.856939  439838 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:40:09.856945  439838 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:40:09.856998  439838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:40:09.889361  439838 cri.go:89] found id: ""
	I1030 19:40:09.889404  439838 logs.go:282] 0 containers: []
	W1030 19:40:09.889416  439838 logs.go:284] No container was found matching "kindnet"
	I1030 19:40:09.889430  439838 logs.go:123] Gathering logs for kubelet ...
	I1030 19:40:09.889446  439838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:40:09.939623  439838 logs.go:123] Gathering logs for dmesg ...
	I1030 19:40:09.939654  439838 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:40:09.955992  439838 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:40:09.956020  439838 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:40:10.117733  439838 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:40:10.117756  439838 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:40:10.117771  439838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:40:10.215160  439838 logs.go:123] Gathering logs for container status ...
	I1030 19:40:10.215203  439838 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1030 19:40:10.255920  439838 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1030 19:40:10.255981  439838 out.go:270] * 
	* 
	W1030 19:40:10.256034  439838 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1030 19:40:10.256053  439838 out.go:270] * 
	* 
	W1030 19:40:10.256891  439838 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 19:40:10.259928  439838 out.go:201] 
	W1030 19:40:10.261143  439838 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1030 19:40:10.261187  439838 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1030 19:40:10.261207  439838 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1030 19:40:10.262620  439838 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-516975 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
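The failure mode above is minikube's K8S_KUBELET_NOT_RUNNING exit: kubeadm's wait-control-plane phase never sees the kubelet answer on http://localhost:10248/healthz, and the output's own suggestion is to read the kubelet journal and retry with an explicit cgroup driver. A minimal follow-up sketch along those lines, reusing the profile name and flags from the failing invocation (the ssh commands assume the VM is still reachable, which the post-mortem below indicates it is):

	# Triage sketch (hypothetical follow-up, not part of the recorded run):
	# these echo the troubleshooting steps and the cgroup-driver suggestion
	# that the kubeadm/minikube output above prints.
	minikube ssh -p old-k8s-version-516975 -- sudo systemctl status kubelet
	minikube ssh -p old-k8s-version-516975 -- sudo journalctl -xeu kubelet | tail -n 100
	minikube ssh -p old-k8s-version-516975 -- curl -sSL http://localhost:10248/healthz
	minikube start -p old-k8s-version-516975 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd

The --extra-config value is the suggestion printed by minikube itself (see the related-issue link in the output); whether it resolves this run depends on the actual kubelet error found in the journal.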
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-516975 -n old-k8s-version-516975
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-516975 -n old-k8s-version-516975: exit status 6 (222.599728ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1030 19:40:10.535357  446385 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-516975" does not appear in /home/jenkins/minikube-integration/19883-381834/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-516975" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (294.96s)
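The post-mortem status check reports the host as Running but warns that kubectl is pointing at a stale minikube VM and that the profile endpoint is missing from the kubeconfig, which is expected here because kubeadm never finished writing /etc/kubernetes/admin.conf. A hedged cleanup sketch based only on the output's own hints (update-context from the warning, logs --file from the boxed advice); it may still fail for this profile until a start succeeds:

	# Refresh the kubeconfig entry the status warning complains about,
	# then capture logs for a bug report as the boxed message suggests.
	minikube update-context -p old-k8s-version-516975
	kubectl config current-context
	minikube logs -p old-k8s-version-516975 --file=logs.txt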

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-960512 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-960512 --alsologtostderr -v=3: exit status 82 (2m0.474808648s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-960512"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1030 19:37:58.454129  445658 out.go:345] Setting OutFile to fd 1 ...
	I1030 19:37:58.454544  445658 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 19:37:58.454560  445658 out.go:358] Setting ErrFile to fd 2...
	I1030 19:37:58.454568  445658 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 19:37:58.455013  445658 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19883-381834/.minikube/bin
	I1030 19:37:58.455419  445658 out.go:352] Setting JSON to false
	I1030 19:37:58.455523  445658 mustload.go:65] Loading cluster: no-preload-960512
	I1030 19:37:58.456192  445658 config.go:182] Loaded profile config "no-preload-960512": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:37:58.456282  445658 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/config.json ...
	I1030 19:37:58.456477  445658 mustload.go:65] Loading cluster: no-preload-960512
	I1030 19:37:58.456619  445658 config.go:182] Loaded profile config "no-preload-960512": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:37:58.456660  445658 stop.go:39] StopHost: no-preload-960512
	I1030 19:37:58.457065  445658 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:37:58.457131  445658 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:37:58.472636  445658 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35919
	I1030 19:37:58.473048  445658 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:37:58.473567  445658 main.go:141] libmachine: Using API Version  1
	I1030 19:37:58.473593  445658 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:37:58.473959  445658 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:37:58.476261  445658 out.go:177] * Stopping node "no-preload-960512"  ...
	I1030 19:37:58.477518  445658 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1030 19:37:58.477547  445658 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:37:58.477783  445658 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1030 19:37:58.477816  445658 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:37:58.480742  445658 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:37:58.481145  445658 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:36:19 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:37:58.481168  445658 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:37:58.481384  445658 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:37:58.481562  445658 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:37:58.481710  445658 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:37:58.481821  445658 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa Username:docker}
	I1030 19:37:58.596192  445658 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1030 19:37:58.637546  445658 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1030 19:37:58.673987  445658 main.go:141] libmachine: Stopping "no-preload-960512"...
	I1030 19:37:58.674048  445658 main.go:141] libmachine: (no-preload-960512) Calling .GetState
	I1030 19:37:58.676015  445658 main.go:141] libmachine: (no-preload-960512) Calling .Stop
	I1030 19:37:58.679761  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 0/120
	I1030 19:37:59.681633  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 1/120
	I1030 19:38:00.683021  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 2/120
	I1030 19:38:01.684998  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 3/120
	I1030 19:38:02.686418  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 4/120
	I1030 19:38:03.688717  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 5/120
	I1030 19:38:04.690116  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 6/120
	I1030 19:38:05.691608  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 7/120
	I1030 19:38:06.693495  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 8/120
	I1030 19:38:07.694944  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 9/120
	I1030 19:38:08.697291  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 10/120
	I1030 19:38:09.698775  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 11/120
	I1030 19:38:10.700819  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 12/120
	I1030 19:38:11.702258  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 13/120
	I1030 19:38:12.703579  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 14/120
	I1030 19:38:13.705722  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 15/120
	I1030 19:38:14.707273  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 16/120
	I1030 19:38:15.708734  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 17/120
	I1030 19:38:16.709986  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 18/120
	I1030 19:38:17.711255  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 19/120
	I1030 19:38:18.713243  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 20/120
	I1030 19:38:19.714817  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 21/120
	I1030 19:38:20.716324  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 22/120
	I1030 19:38:21.717646  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 23/120
	I1030 19:38:22.718860  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 24/120
	I1030 19:38:23.720498  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 25/120
	I1030 19:38:24.721804  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 26/120
	I1030 19:38:25.723255  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 27/120
	I1030 19:38:26.724584  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 28/120
	I1030 19:38:27.725965  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 29/120
	I1030 19:38:28.728063  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 30/120
	I1030 19:38:29.729489  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 31/120
	I1030 19:38:30.730955  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 32/120
	I1030 19:38:31.732204  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 33/120
	I1030 19:38:32.733388  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 34/120
	I1030 19:38:33.735245  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 35/120
	I1030 19:38:34.736684  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 36/120
	I1030 19:38:35.738197  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 37/120
	I1030 19:38:36.739687  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 38/120
	I1030 19:38:37.740995  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 39/120
	I1030 19:38:38.743335  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 40/120
	I1030 19:38:39.744992  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 41/120
	I1030 19:38:40.746630  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 42/120
	I1030 19:38:41.748138  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 43/120
	I1030 19:38:42.749553  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 44/120
	I1030 19:38:43.751747  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 45/120
	I1030 19:38:44.753307  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 46/120
	I1030 19:38:45.754816  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 47/120
	I1030 19:38:46.756432  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 48/120
	I1030 19:38:47.757812  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 49/120
	I1030 19:38:48.760434  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 50/120
	I1030 19:38:49.761914  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 51/120
	I1030 19:38:50.763384  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 52/120
	I1030 19:38:51.764878  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 53/120
	I1030 19:38:52.766465  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 54/120
	I1030 19:38:53.768710  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 55/120
	I1030 19:38:54.770207  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 56/120
	I1030 19:38:55.771661  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 57/120
	I1030 19:38:56.773428  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 58/120
	I1030 19:38:57.774879  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 59/120
	I1030 19:38:58.777100  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 60/120
	I1030 19:38:59.778568  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 61/120
	I1030 19:39:00.779845  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 62/120
	I1030 19:39:01.781217  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 63/120
	I1030 19:39:02.783162  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 64/120
	I1030 19:39:03.785018  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 65/120
	I1030 19:39:04.786428  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 66/120
	I1030 19:39:05.787729  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 67/120
	I1030 19:39:06.789454  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 68/120
	I1030 19:39:07.790858  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 69/120
	I1030 19:39:08.793038  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 70/120
	I1030 19:39:09.794436  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 71/120
	I1030 19:39:10.795824  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 72/120
	I1030 19:39:11.797113  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 73/120
	I1030 19:39:12.798655  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 74/120
	I1030 19:39:13.800670  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 75/120
	I1030 19:39:14.801982  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 76/120
	I1030 19:39:15.803504  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 77/120
	I1030 19:39:16.804789  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 78/120
	I1030 19:39:17.806506  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 79/120
	I1030 19:39:18.808106  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 80/120
	I1030 19:39:19.809908  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 81/120
	I1030 19:39:20.811378  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 82/120
	I1030 19:39:21.812819  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 83/120
	I1030 19:39:22.814120  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 84/120
	I1030 19:39:23.816023  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 85/120
	I1030 19:39:24.817348  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 86/120
	I1030 19:39:25.818787  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 87/120
	I1030 19:39:26.821188  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 88/120
	I1030 19:39:27.822404  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 89/120
	I1030 19:39:28.824365  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 90/120
	I1030 19:39:29.825647  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 91/120
	I1030 19:39:30.827074  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 92/120
	I1030 19:39:31.828372  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 93/120
	I1030 19:39:32.829740  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 94/120
	I1030 19:39:33.831704  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 95/120
	I1030 19:39:34.833565  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 96/120
	I1030 19:39:35.834923  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 97/120
	I1030 19:39:36.836134  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 98/120
	I1030 19:39:37.837660  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 99/120
	I1030 19:39:38.839616  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 100/120
	I1030 19:39:39.840884  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 101/120
	I1030 19:39:40.842198  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 102/120
	I1030 19:39:41.843559  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 103/120
	I1030 19:39:42.845404  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 104/120
	I1030 19:39:43.847430  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 105/120
	I1030 19:39:44.848920  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 106/120
	I1030 19:39:45.850261  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 107/120
	I1030 19:39:46.851680  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 108/120
	I1030 19:39:47.853104  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 109/120
	I1030 19:39:48.855414  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 110/120
	I1030 19:39:49.857018  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 111/120
	I1030 19:39:50.858434  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 112/120
	I1030 19:39:51.859847  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 113/120
	I1030 19:39:52.861599  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 114/120
	I1030 19:39:53.863782  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 115/120
	I1030 19:39:54.865380  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 116/120
	I1030 19:39:55.866937  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 117/120
	I1030 19:39:56.868317  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 118/120
	I1030 19:39:57.869839  445658 main.go:141] libmachine: (no-preload-960512) Waiting for machine to stop 119/120
	I1030 19:39:58.871330  445658 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1030 19:39:58.871423  445658 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1030 19:39:58.873680  445658 out.go:201] 
	W1030 19:39:58.875145  445658 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1030 19:39:58.875166  445658 out.go:270] * 
	* 
	W1030 19:39:58.878661  445658 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 19:39:58.880203  445658 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-960512 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-960512 -n no-preload-960512
E1030 19:40:06.268073  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/custom-flannel-534248/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-960512 -n no-preload-960512: exit status 3 (18.653080394s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1030 19:40:17.534855  446290 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.132:22: connect: no route to host
	E1030 19:40:17.534880  446290 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.132:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-960512" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.13s)
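The stderr capture above shows the pattern behind this failure: after backing up /etc/cni and /etc/kubernetes, the kvm2 driver asks libvirt to stop the domain and then polls once per second, up to 120 times, for the machine to leave the "Running" state; when the counter reaches 119/120 it gives up and minikube exits with GUEST_STOP_TIMEOUT (exit status 82). A minimal Go sketch of that wait loop, with a hypothetical isRunning helper standing in for the driver's real libvirt state query:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // isRunning is a stand-in for the kvm2 driver's domain-state query;
    // it is assumed here and always reports the VM as still running.
    func isRunning(domain string) bool {
        return true
    }

    // waitForStop mirrors the "Waiting for machine to stop N/120" lines in
    // the log: poll once per second and give up after maxRetries attempts.
    func waitForStop(domain string, maxRetries int) error {
        for i := 0; i < maxRetries; i++ {
            if !isRunning(domain) {
                return nil // machine reached a stopped state
            }
            fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxRetries)
            time.Sleep(time.Second)
        }
        return errors.New(`unable to stop vm, current state "Running"`)
    }

    func main() {
        if err := waitForStop("no-preload-960512", 120); err != nil {
            // minikube surfaces this as GUEST_STOP_TIMEOUT (exit status 82)
            fmt.Println("stop err:", err)
        }
    }

This is only an illustration of the retry pattern visible in the log, not the driver's actual implementation.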

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (138.93s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-768989 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-768989 --alsologtostderr -v=3: exit status 82 (2m0.482257287s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-768989"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1030 19:38:07.101467  445759 out.go:345] Setting OutFile to fd 1 ...
	I1030 19:38:07.101790  445759 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 19:38:07.101802  445759 out.go:358] Setting ErrFile to fd 2...
	I1030 19:38:07.101808  445759 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 19:38:07.101998  445759 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19883-381834/.minikube/bin
	I1030 19:38:07.102253  445759 out.go:352] Setting JSON to false
	I1030 19:38:07.102347  445759 mustload.go:65] Loading cluster: default-k8s-diff-port-768989
	I1030 19:38:07.102790  445759 config.go:182] Loaded profile config "default-k8s-diff-port-768989": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:38:07.102882  445759 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/config.json ...
	I1030 19:38:07.103068  445759 mustload.go:65] Loading cluster: default-k8s-diff-port-768989
	I1030 19:38:07.103196  445759 config.go:182] Loaded profile config "default-k8s-diff-port-768989": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:38:07.103242  445759 stop.go:39] StopHost: default-k8s-diff-port-768989
	I1030 19:38:07.103653  445759 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:38:07.103712  445759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:38:07.118766  445759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45407
	I1030 19:38:07.119217  445759 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:38:07.119757  445759 main.go:141] libmachine: Using API Version  1
	I1030 19:38:07.119778  445759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:38:07.120138  445759 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:38:07.122564  445759 out.go:177] * Stopping node "default-k8s-diff-port-768989"  ...
	I1030 19:38:07.123840  445759 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1030 19:38:07.123867  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:38:07.124161  445759 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1030 19:38:07.124192  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:38:07.127313  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:38:07.127707  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:37:14 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:38:07.127744  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:38:07.127835  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:38:07.128017  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:38:07.128192  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:38:07.128339  445759 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa Username:docker}
	I1030 19:38:07.223204  445759 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1030 19:38:07.287519  445759 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1030 19:38:07.328333  445759 main.go:141] libmachine: Stopping "default-k8s-diff-port-768989"...
	I1030 19:38:07.328369  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetState
	I1030 19:38:07.330210  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Stop
	I1030 19:38:07.334124  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 0/120
	I1030 19:38:08.335618  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 1/120
	I1030 19:38:09.337108  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 2/120
	I1030 19:38:10.339200  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 3/120
	I1030 19:38:11.341318  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 4/120
	I1030 19:38:12.343433  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 5/120
	I1030 19:38:13.345089  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 6/120
	I1030 19:38:14.346513  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 7/120
	I1030 19:38:15.347882  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 8/120
	I1030 19:38:16.349211  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 9/120
	I1030 19:38:17.351527  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 10/120
	I1030 19:38:18.353115  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 11/120
	I1030 19:38:19.354560  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 12/120
	I1030 19:38:20.355971  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 13/120
	I1030 19:38:21.357481  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 14/120
	I1030 19:38:22.359584  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 15/120
	I1030 19:38:23.360907  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 16/120
	I1030 19:38:24.362203  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 17/120
	I1030 19:38:25.363701  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 18/120
	I1030 19:38:26.365066  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 19/120
	I1030 19:38:27.367530  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 20/120
	I1030 19:38:28.368954  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 21/120
	I1030 19:38:29.370464  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 22/120
	I1030 19:38:30.371933  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 23/120
	I1030 19:38:31.373445  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 24/120
	I1030 19:38:32.375464  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 25/120
	I1030 19:38:33.376777  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 26/120
	I1030 19:38:34.378308  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 27/120
	I1030 19:38:35.379594  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 28/120
	I1030 19:38:36.381218  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 29/120
	I1030 19:38:37.383675  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 30/120
	I1030 19:38:38.385085  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 31/120
	I1030 19:38:39.386666  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 32/120
	I1030 19:38:40.388073  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 33/120
	I1030 19:38:41.389653  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 34/120
	I1030 19:38:42.392080  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 35/120
	I1030 19:38:43.393603  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 36/120
	I1030 19:38:44.395370  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 37/120
	I1030 19:38:45.396738  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 38/120
	I1030 19:38:46.398297  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 39/120
	I1030 19:38:47.399747  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 40/120
	I1030 19:38:48.401229  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 41/120
	I1030 19:38:49.402794  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 42/120
	I1030 19:38:50.404342  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 43/120
	I1030 19:38:51.405825  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 44/120
	I1030 19:38:52.408091  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 45/120
	I1030 19:38:53.409591  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 46/120
	I1030 19:38:54.411103  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 47/120
	I1030 19:38:55.412627  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 48/120
	I1030 19:38:56.414234  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 49/120
	I1030 19:38:57.416533  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 50/120
	I1030 19:38:58.417975  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 51/120
	I1030 19:38:59.419347  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 52/120
	I1030 19:39:00.420788  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 53/120
	I1030 19:39:01.422199  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 54/120
	I1030 19:39:02.424180  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 55/120
	I1030 19:39:03.425615  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 56/120
	I1030 19:39:04.427055  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 57/120
	I1030 19:39:05.428408  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 58/120
	I1030 19:39:06.430024  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 59/120
	I1030 19:39:07.432209  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 60/120
	I1030 19:39:08.433560  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 61/120
	I1030 19:39:09.435009  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 62/120
	I1030 19:39:10.436423  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 63/120
	I1030 19:39:11.437796  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 64/120
	I1030 19:39:12.439671  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 65/120
	I1030 19:39:13.441066  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 66/120
	I1030 19:39:14.442586  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 67/120
	I1030 19:39:15.444063  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 68/120
	I1030 19:39:16.445454  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 69/120
	I1030 19:39:17.447733  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 70/120
	I1030 19:39:18.449303  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 71/120
	I1030 19:39:19.450689  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 72/120
	I1030 19:39:20.452927  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 73/120
	I1030 19:39:21.454511  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 74/120
	I1030 19:39:22.456733  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 75/120
	I1030 19:39:23.458036  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 76/120
	I1030 19:39:24.459446  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 77/120
	I1030 19:39:25.460850  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 78/120
	I1030 19:39:26.462315  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 79/120
	I1030 19:39:27.464460  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 80/120
	I1030 19:39:28.465815  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 81/120
	I1030 19:39:29.467399  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 82/120
	I1030 19:39:30.468821  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 83/120
	I1030 19:39:31.470127  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 84/120
	I1030 19:39:32.472453  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 85/120
	I1030 19:39:33.473813  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 86/120
	I1030 19:39:34.475213  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 87/120
	I1030 19:39:35.477031  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 88/120
	I1030 19:39:36.478523  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 89/120
	I1030 19:39:37.480725  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 90/120
	I1030 19:39:38.482139  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 91/120
	I1030 19:39:39.483489  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 92/120
	I1030 19:39:40.485045  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 93/120
	I1030 19:39:41.486339  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 94/120
	I1030 19:39:42.488430  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 95/120
	I1030 19:39:43.489628  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 96/120
	I1030 19:39:44.490847  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 97/120
	I1030 19:39:45.492082  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 98/120
	I1030 19:39:46.493279  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 99/120
	I1030 19:39:47.495570  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 100/120
	I1030 19:39:48.497014  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 101/120
	I1030 19:39:49.498356  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 102/120
	I1030 19:39:50.500088  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 103/120
	I1030 19:39:51.501334  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 104/120
	I1030 19:39:52.503430  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 105/120
	I1030 19:39:53.505278  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 106/120
	I1030 19:39:54.506640  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 107/120
	I1030 19:39:55.508081  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 108/120
	I1030 19:39:56.509386  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 109/120
	I1030 19:39:57.511534  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 110/120
	I1030 19:39:58.512804  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 111/120
	I1030 19:39:59.514108  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 112/120
	I1030 19:40:00.515475  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 113/120
	I1030 19:40:01.516605  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 114/120
	I1030 19:40:02.518614  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 115/120
	I1030 19:40:03.520037  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 116/120
	I1030 19:40:04.521351  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 117/120
	I1030 19:40:05.522738  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 118/120
	I1030 19:40:06.524144  445759 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for machine to stop 119/120
	I1030 19:40:07.525113  445759 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1030 19:40:07.525172  445759 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1030 19:40:07.527524  445759 out.go:201] 
	W1030 19:40:07.528906  445759 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1030 19:40:07.528930  445759 out.go:270] * 
	* 
	W1030 19:40:07.532293  445759 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 19:40:07.533573  445759 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-768989 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-768989 -n default-k8s-diff-port-768989
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-768989 -n default-k8s-diff-port-768989: exit status 3 (18.446880127s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1030 19:40:25.982796  446352 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.92:22: connect: no route to host
	E1030 19:40:25.982817  446352 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.92:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-768989" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (138.93s)
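As with the no-preload profile above, the stop request here never takes effect and the domain is left running when the 120-second wait expires. For manual triage of a GUEST_STOP_TIMEOUT like this one, the libvirt domain can be inspected (and, if necessary, powered off) directly; a small Go sketch that shells out to virsh, assuming the domain name matches the profile name as the DBG lines in the log indicate:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // domState reports the libvirt state of a domain via "virsh domstate".
    // The domain name is assumed to equal the minikube profile name.
    func domState(name string) (string, error) {
        out, err := exec.Command("virsh", "domstate", name).CombinedOutput()
        return string(out), err
    }

    // forceOff powers the domain off immediately with "virsh destroy"
    // (this does not undefine or delete the domain).
    func forceOff(name string) error {
        return exec.Command("virsh", "destroy", name).Run()
    }

    func main() {
        name := "default-k8s-diff-port-768989"
        if state, err := domState(name); err == nil {
            fmt.Printf("domain %s state: %s", name, state)
        }
        // Uncomment only when debugging a stuck stop by hand:
        // _ = forceOff(name)
    }

This is a debugging aid only; the test suite itself does not force domains off.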

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-042402 --alsologtostderr -v=3
E1030 19:38:14.713592  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kindnet-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:38:17.243751  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/functional-683899/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:38:19.545123  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/auto-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:38:52.513610  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/calico-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:38:52.520017  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/calico-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:38:52.531431  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/calico-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:38:52.552822  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/calico-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:38:52.594294  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/calico-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:38:52.675694  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/calico-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:38:52.837360  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/calico-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:38:53.159424  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/calico-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:38:53.801073  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/calico-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:38:55.083239  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/calico-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:38:55.675137  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kindnet-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:38:57.645329  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/calico-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:39:02.766872  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/calico-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:39:13.009039  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/calico-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:39:33.491167  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/calico-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:39:41.467288  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/auto-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:39:45.773711  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/custom-flannel-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:39:45.780118  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/custom-flannel-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:39:45.791536  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/custom-flannel-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:39:45.812931  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/custom-flannel-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:39:45.854543  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/custom-flannel-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:39:45.936018  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/custom-flannel-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:39:46.097572  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/custom-flannel-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:39:46.419307  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/custom-flannel-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:39:47.061019  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/custom-flannel-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:39:48.342617  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/custom-flannel-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:39:50.904257  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/custom-flannel-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:39:56.026557  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/custom-flannel-534248/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-042402 --alsologtostderr -v=3: exit status 82 (2m0.48129174s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-042402"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1030 19:38:11.506329  445844 out.go:345] Setting OutFile to fd 1 ...
	I1030 19:38:11.506460  445844 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 19:38:11.506470  445844 out.go:358] Setting ErrFile to fd 2...
	I1030 19:38:11.506477  445844 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 19:38:11.506671  445844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19883-381834/.minikube/bin
	I1030 19:38:11.506940  445844 out.go:352] Setting JSON to false
	I1030 19:38:11.507032  445844 mustload.go:65] Loading cluster: embed-certs-042402
	I1030 19:38:11.507403  445844 config.go:182] Loaded profile config "embed-certs-042402": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:38:11.507489  445844 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/config.json ...
	I1030 19:38:11.507673  445844 mustload.go:65] Loading cluster: embed-certs-042402
	I1030 19:38:11.507800  445844 config.go:182] Loaded profile config "embed-certs-042402": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:38:11.507835  445844 stop.go:39] StopHost: embed-certs-042402
	I1030 19:38:11.508254  445844 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:38:11.508315  445844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:38:11.524043  445844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42939
	I1030 19:38:11.524520  445844 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:38:11.525082  445844 main.go:141] libmachine: Using API Version  1
	I1030 19:38:11.525109  445844 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:38:11.525517  445844 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:38:11.527915  445844 out.go:177] * Stopping node "embed-certs-042402"  ...
	I1030 19:38:11.529229  445844 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1030 19:38:11.529268  445844 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:38:11.529493  445844 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1030 19:38:11.529522  445844 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:38:11.532366  445844 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:38:11.532774  445844 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:36:46 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:38:11.532812  445844 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:38:11.532880  445844 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:38:11.533112  445844 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:38:11.533265  445844 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:38:11.533397  445844 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa Username:docker}
	I1030 19:38:11.641217  445844 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1030 19:38:11.676837  445844 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1030 19:38:11.744190  445844 main.go:141] libmachine: Stopping "embed-certs-042402"...
	I1030 19:38:11.744220  445844 main.go:141] libmachine: (embed-certs-042402) Calling .GetState
	I1030 19:38:11.745832  445844 main.go:141] libmachine: (embed-certs-042402) Calling .Stop
	I1030 19:38:11.749129  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 0/120
	I1030 19:38:12.750509  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 1/120
	I1030 19:38:13.752220  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 2/120
	I1030 19:38:14.753480  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 3/120
	I1030 19:38:15.754949  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 4/120
	I1030 19:38:16.756851  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 5/120
	I1030 19:38:17.758193  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 6/120
	I1030 19:38:18.759525  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 7/120
	I1030 19:38:19.760833  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 8/120
	I1030 19:38:20.762171  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 9/120
	I1030 19:38:21.764325  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 10/120
	I1030 19:38:22.765713  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 11/120
	I1030 19:38:23.766883  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 12/120
	I1030 19:38:24.768293  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 13/120
	I1030 19:38:25.770338  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 14/120
	I1030 19:38:26.772096  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 15/120
	I1030 19:38:27.773307  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 16/120
	I1030 19:38:28.775281  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 17/120
	I1030 19:38:29.776626  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 18/120
	I1030 19:38:30.778024  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 19/120
	I1030 19:38:31.780199  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 20/120
	I1030 19:38:32.781763  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 21/120
	I1030 19:38:33.783136  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 22/120
	I1030 19:38:34.784986  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 23/120
	I1030 19:38:35.786382  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 24/120
	I1030 19:38:36.788484  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 25/120
	I1030 19:38:37.789994  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 26/120
	I1030 19:38:38.791421  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 27/120
	I1030 19:38:39.792890  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 28/120
	I1030 19:38:40.794260  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 29/120
	I1030 19:38:41.795790  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 30/120
	I1030 19:38:42.797320  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 31/120
	I1030 19:38:43.798667  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 32/120
	I1030 19:38:44.799951  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 33/120
	I1030 19:38:45.801276  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 34/120
	I1030 19:38:46.803411  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 35/120
	I1030 19:38:47.804824  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 36/120
	I1030 19:38:48.807034  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 37/120
	I1030 19:38:49.809140  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 38/120
	I1030 19:38:50.810371  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 39/120
	I1030 19:38:51.812602  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 40/120
	I1030 19:38:52.813969  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 41/120
	I1030 19:38:53.815207  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 42/120
	I1030 19:38:54.816641  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 43/120
	I1030 19:38:55.818370  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 44/120
	I1030 19:38:56.820398  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 45/120
	I1030 19:38:57.821771  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 46/120
	I1030 19:38:58.823038  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 47/120
	I1030 19:38:59.824322  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 48/120
	I1030 19:39:00.825647  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 49/120
	I1030 19:39:01.827769  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 50/120
	I1030 19:39:02.829058  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 51/120
	I1030 19:39:03.830388  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 52/120
	I1030 19:39:04.831876  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 53/120
	I1030 19:39:05.833223  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 54/120
	I1030 19:39:06.835268  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 55/120
	I1030 19:39:07.836587  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 56/120
	I1030 19:39:08.838012  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 57/120
	I1030 19:39:09.839418  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 58/120
	I1030 19:39:10.840763  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 59/120
	I1030 19:39:11.842815  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 60/120
	I1030 19:39:12.844365  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 61/120
	I1030 19:39:13.845570  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 62/120
	I1030 19:39:14.847257  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 63/120
	I1030 19:39:15.848922  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 64/120
	I1030 19:39:16.850968  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 65/120
	I1030 19:39:17.852323  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 66/120
	I1030 19:39:18.853629  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 67/120
	I1030 19:39:19.855312  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 68/120
	I1030 19:39:20.856701  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 69/120
	I1030 19:39:21.858776  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 70/120
	I1030 19:39:22.860245  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 71/120
	I1030 19:39:23.861522  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 72/120
	I1030 19:39:24.862866  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 73/120
	I1030 19:39:25.864396  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 74/120
	I1030 19:39:26.866788  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 75/120
	I1030 19:39:27.868255  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 76/120
	I1030 19:39:28.869524  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 77/120
	I1030 19:39:29.870867  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 78/120
	I1030 19:39:30.872284  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 79/120
	I1030 19:39:31.874007  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 80/120
	I1030 19:39:32.875402  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 81/120
	I1030 19:39:33.876709  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 82/120
	I1030 19:39:34.878131  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 83/120
	I1030 19:39:35.879664  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 84/120
	I1030 19:39:36.881657  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 85/120
	I1030 19:39:37.882935  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 86/120
	I1030 19:39:38.884302  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 87/120
	I1030 19:39:39.885701  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 88/120
	I1030 19:39:40.887222  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 89/120
	I1030 19:39:41.889378  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 90/120
	I1030 19:39:42.890895  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 91/120
	I1030 19:39:43.892103  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 92/120
	I1030 19:39:44.893434  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 93/120
	I1030 19:39:45.894712  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 94/120
	I1030 19:39:46.896516  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 95/120
	I1030 19:39:47.897917  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 96/120
	I1030 19:39:48.899107  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 97/120
	I1030 19:39:49.900458  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 98/120
	I1030 19:39:50.901943  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 99/120
	I1030 19:39:51.904048  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 100/120
	I1030 19:39:52.905341  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 101/120
	I1030 19:39:53.906565  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 102/120
	I1030 19:39:54.907755  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 103/120
	I1030 19:39:55.909013  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 104/120
	I1030 19:39:56.910782  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 105/120
	I1030 19:39:57.911926  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 106/120
	I1030 19:39:58.913142  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 107/120
	I1030 19:39:59.914548  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 108/120
	I1030 19:40:00.915955  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 109/120
	I1030 19:40:01.917944  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 110/120
	I1030 19:40:02.919224  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 111/120
	I1030 19:40:03.920675  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 112/120
	I1030 19:40:04.922123  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 113/120
	I1030 19:40:05.923471  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 114/120
	I1030 19:40:06.925640  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 115/120
	I1030 19:40:07.927162  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 116/120
	I1030 19:40:08.928619  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 117/120
	I1030 19:40:09.929954  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 118/120
	I1030 19:40:10.931096  445844 main.go:141] libmachine: (embed-certs-042402) Waiting for machine to stop 119/120
	I1030 19:40:11.931946  445844 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1030 19:40:11.932012  445844 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1030 19:40:11.933754  445844 out.go:201] 
	W1030 19:40:11.935207  445844 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1030 19:40:11.935221  445844 out.go:270] * 
	* 
	W1030 19:40:11.938544  445844 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 19:40:11.939878  445844 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-042402 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-042402 -n embed-certs-042402
E1030 19:40:14.453255  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/calico-534248/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-042402 -n embed-certs-042402: exit status 3 (18.648851544s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1030 19:40:30.590903  446530 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.235:22: connect: no route to host
	E1030 19:40:30.590931  446530 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.235:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-042402" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.13s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-516975 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-516975 create -f testdata/busybox.yaml: exit status 1 (44.04997ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-516975" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-516975 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-516975 -n old-k8s-version-516975
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-516975 -n old-k8s-version-516975: exit status 6 (214.080253ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1030 19:40:10.794893  446440 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-516975" does not appear in /home/jenkins/minikube-integration/19883-381834/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-516975" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-516975 -n old-k8s-version-516975
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-516975 -n old-k8s-version-516975: exit status 6 (214.456925ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1030 19:40:11.009252  446470 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-516975" does not appear in /home/jenkins/minikube-integration/19883-381834/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-516975" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.47s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (119.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-516975 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-516975 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m58.906969633s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-516975 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-516975 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-516975 describe deploy/metrics-server -n kube-system: exit status 1 (45.962657ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-516975" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-516975 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-516975 -n old-k8s-version-516975
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-516975 -n old-k8s-version-516975: exit status 6 (216.772987ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1030 19:42:10.178236  447352 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-516975" does not appear in /home/jenkins/minikube-integration/19883-381834/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-516975" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (119.17s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-960512 -n no-preload-960512
E1030 19:40:17.597188  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kindnet-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:40:18.708788  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-960512 -n no-preload-960512: exit status 3 (3.167657073s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1030 19:40:20.702860  446577 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.132:22: connect: no route to host
	E1030 19:40:20.702888  446577 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.132:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-960512 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1030 19:40:21.856034  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/enable-default-cni-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:40:21.862428  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/enable-default-cni-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:40:21.873785  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/enable-default-cni-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:40:21.895135  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/enable-default-cni-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:40:21.936581  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/enable-default-cni-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:40:22.018048  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/enable-default-cni-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:40:22.179606  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/enable-default-cni-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:40:22.501365  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/enable-default-cni-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:40:23.142719  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/enable-default-cni-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:40:24.424153  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/enable-default-cni-534248/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-960512 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.15430607s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.132:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-960512 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-960512 -n no-preload-960512
E1030 19:40:26.986366  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/enable-default-cni-534248/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-960512 -n no-preload-960512: exit status 3 (3.061347184s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1030 19:40:29.918962  446688 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.132:22: connect: no route to host
	E1030 19:40:29.918987  446688 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.132:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-960512" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-768989 -n default-k8s-diff-port-768989
E1030 19:40:26.749547  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/custom-flannel-534248/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-768989 -n default-k8s-diff-port-768989: exit status 3 (3.167974844s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1030 19:40:29.150886  446658 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.92:22: connect: no route to host
	E1030 19:40:29.150912  446658 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.92:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-768989 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-768989 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153607926s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.92:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-768989 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-768989 -n default-k8s-diff-port-768989
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-768989 -n default-k8s-diff-port-768989: exit status 3 (3.06188363s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1030 19:40:38.366944  446841 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.92:22: connect: no route to host
	E1030 19:40:38.366966  446841 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.92:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-768989" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-042402 -n embed-certs-042402
E1030 19:40:32.108447  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/enable-default-cni-534248/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-042402 -n embed-certs-042402: exit status 3 (3.167748327s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1030 19:40:33.758823  446792 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.235:22: connect: no route to host
	E1030 19:40:33.758846  446792 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.235:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-042402 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-042402 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153456426s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.235:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-042402 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-042402 -n embed-certs-042402
E1030 19:40:42.350419  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/enable-default-cni-534248/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-042402 -n embed-certs-042402: exit status 3 (3.062461928s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1030 19:40:42.974904  446920 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.235:22: connect: no route to host
	E1030 19:40:42.974923  446920 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.235:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-042402" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (724.85s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-516975 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E1030 19:42:25.309286  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/auto-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:42:29.633965  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/custom-flannel-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:42:33.370028  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/bridge-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:42:33.736781  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kindnet-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:43:01.438848  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kindnet-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:43:05.716338  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/enable-default-cni-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:43:17.243832  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/functional-683899/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:43:27.818039  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/flannel-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:43:52.513727  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/calico-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:43:55.292121  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/bridge-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:44:20.217497  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/calico-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:44:45.774430  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/custom-flannel-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:45:13.475419  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/custom-flannel-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:45:18.709189  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:45:21.856052  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/enable-default-cni-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:45:43.959247  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/flannel-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:45:49.558554  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/enable-default-cni-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:46:11.429999  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/bridge-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:46:11.659798  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/flannel-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:46:39.134353  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/bridge-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:46:57.604014  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/auto-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:47:33.736740  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kindnet-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:48:17.243738  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/functional-683899/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:48:52.514295  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/calico-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:49:40.313100  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/functional-683899/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:49:45.774214  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/custom-flannel-534248/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-516975 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m1.126424506s)

                                                
                                                
-- stdout --
	* [old-k8s-version-516975] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19883
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19883-381834/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19883-381834/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-516975" primary control-plane node in "old-k8s-version-516975" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-516975" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1030 19:42:11.799298  447486 out.go:345] Setting OutFile to fd 1 ...
	I1030 19:42:11.799434  447486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 19:42:11.799444  447486 out.go:358] Setting ErrFile to fd 2...
	I1030 19:42:11.799448  447486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 19:42:11.799628  447486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19883-381834/.minikube/bin
	I1030 19:42:11.800193  447486 out.go:352] Setting JSON to false
	I1030 19:42:11.801205  447486 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":12275,"bootTime":1730305057,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1030 19:42:11.801318  447486 start.go:139] virtualization: kvm guest
	I1030 19:42:11.803677  447486 out.go:177] * [old-k8s-version-516975] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1030 19:42:11.805274  447486 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 19:42:11.805300  447486 notify.go:220] Checking for updates...
	I1030 19:42:11.808043  447486 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 19:42:11.809440  447486 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 19:42:11.810604  447486 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 19:42:11.811774  447486 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1030 19:42:11.812958  447486 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 19:42:11.814552  447486 config.go:182] Loaded profile config "old-k8s-version-516975": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1030 19:42:11.814994  447486 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:42:11.815077  447486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:42:11.830315  447486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42249
	I1030 19:42:11.830795  447486 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:42:11.831345  447486 main.go:141] libmachine: Using API Version  1
	I1030 19:42:11.831365  447486 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:42:11.831692  447486 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:42:11.831869  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:42:11.833718  447486 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1030 19:42:11.835019  447486 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 19:42:11.835371  447486 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:42:11.835416  447486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:42:11.850097  447486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33199
	I1030 19:42:11.850532  447486 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:42:11.850964  447486 main.go:141] libmachine: Using API Version  1
	I1030 19:42:11.850978  447486 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:42:11.851321  447486 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:42:11.851541  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:42:11.886920  447486 out.go:177] * Using the kvm2 driver based on existing profile
	I1030 19:42:11.888376  447486 start.go:297] selected driver: kvm2
	I1030 19:42:11.888392  447486 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-516975 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-516975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:42:11.888538  447486 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 19:42:11.889472  447486 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 19:42:11.889560  447486 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19883-381834/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1030 19:42:11.904007  447486 install.go:137] /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1030 19:42:11.904405  447486 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 19:42:11.904443  447486 cni.go:84] Creating CNI manager for ""
	I1030 19:42:11.904494  447486 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:42:11.904549  447486 start.go:340] cluster config:
	{Name:old-k8s-version-516975 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-516975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
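
The cluster config dumped above is the same structure the run persists to .minikube/profiles/old-k8s-version-516975/config.json a few lines below. As a minimal sketch, assuming a hypothetical subset of those fields (this is not minikube's actual ClusterConfig type), reading a few values back out of that JSON could look like this in Go:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    // profileConfig models only a handful of the fields visible in the log above;
    // the real minikube ClusterConfig carries far more.
    type profileConfig struct {
    	Name             string `json:"Name"`
    	Driver           string `json:"Driver"`
    	KubernetesConfig struct {
    		KubernetesVersion string `json:"KubernetesVersion"`
    		ContainerRuntime  string `json:"ContainerRuntime"`
    	} `json:"KubernetesConfig"`
    	Nodes []struct {
    		IP           string `json:"IP"`
    		Port         int    `json:"Port"`
    		ControlPlane bool   `json:"ControlPlane"`
    	} `json:"Nodes"`
    }

    func main() {
    	// Path taken from the profile.go "Saving config to ..." line in the log.
    	data, err := os.ReadFile("/home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/config.json")
    	if err != nil {
    		panic(err)
    	}
    	var cfg profileConfig
    	if err := json.Unmarshal(data, &cfg); err != nil {
    		panic(err)
    	}
    	firstIP := "<none>"
    	if len(cfg.Nodes) > 0 {
    		firstIP = cfg.Nodes[0].IP
    	}
    	fmt.Printf("%s (%s): Kubernetes %s on %s, first node IP %s\n",
    		cfg.Name, cfg.Driver, cfg.KubernetesConfig.KubernetesVersion,
    		cfg.KubernetesConfig.ContainerRuntime, firstIP)
    }
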
	I1030 19:42:11.904661  447486 iso.go:125] acquiring lock: {Name:mk4a5115b605a7a5e9069193daa664d721189792 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 19:42:11.907302  447486 out.go:177] * Starting "old-k8s-version-516975" primary control-plane node in "old-k8s-version-516975" cluster
	I1030 19:42:11.908430  447486 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1030 19:42:11.908474  447486 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1030 19:42:11.908485  447486 cache.go:56] Caching tarball of preloaded images
	I1030 19:42:11.908564  447486 preload.go:172] Found /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1030 19:42:11.908575  447486 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1030 19:42:11.908666  447486 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/config.json ...
	I1030 19:42:11.908832  447486 start.go:360] acquireMachinesLock for old-k8s-version-516975: {Name:mk35e25f53fa8cfadb39ca0ecdccfc2b3fbe845b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 19:45:45.631177  447486 start.go:364] duration metric: took 3m33.722307877s to acquireMachinesLock for "old-k8s-version-516975"
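
The acquireMachinesLock line above carries a lock spec of {Delay:500ms Timeout:13m0s}, and this run waited 3m33s before getting it. Minikube uses a proper cross-process mutex library for this; the lock-file loop below is only an assumed, simplified stand-in for the same poll-until-timeout pattern:

    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    // acquireLock polls for an exclusive lock file until timeout, retrying every delay.
    // The caller releases the lock by removing the file.
    func acquireLock(path string, delay, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o644)
    		if err == nil {
    			f.Close()
    			return nil // lock acquired
    		}
    		if !errors.Is(err, os.ErrExist) {
    			return err
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	start := time.Now()
    	if err := acquireLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 13*time.Minute); err != nil {
    		panic(err)
    	}
    	fmt.Printf("took %s to acquire lock\n", time.Since(start))
    }
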
	I1030 19:45:45.631272  447486 start.go:96] Skipping create...Using existing machine configuration
	I1030 19:45:45.631284  447486 fix.go:54] fixHost starting: 
	I1030 19:45:45.631708  447486 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:45.631767  447486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:45.648654  447486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40707
	I1030 19:45:45.649098  447486 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:45.649552  447486 main.go:141] libmachine: Using API Version  1
	I1030 19:45:45.649574  447486 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:45.649848  447486 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:45.650005  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:45:45.650153  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetState
	I1030 19:45:45.651624  447486 fix.go:112] recreateIfNeeded on old-k8s-version-516975: state=Stopped err=<nil>
	I1030 19:45:45.651661  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	W1030 19:45:45.651805  447486 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 19:45:45.654065  447486 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-516975" ...
	I1030 19:45:45.655382  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .Start
	I1030 19:45:45.655554  447486 main.go:141] libmachine: (old-k8s-version-516975) Ensuring networks are active...
	I1030 19:45:45.656134  447486 main.go:141] libmachine: (old-k8s-version-516975) Ensuring network default is active
	I1030 19:45:45.656518  447486 main.go:141] libmachine: (old-k8s-version-516975) Ensuring network mk-old-k8s-version-516975 is active
	I1030 19:45:45.656885  447486 main.go:141] libmachine: (old-k8s-version-516975) Getting domain xml...
	I1030 19:45:45.657501  447486 main.go:141] libmachine: (old-k8s-version-516975) Creating domain...
	I1030 19:45:47.003397  447486 main.go:141] libmachine: (old-k8s-version-516975) Waiting to get IP...
	I1030 19:45:47.004281  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:47.004710  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:47.004787  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:47.004695  448432 retry.go:31] will retry after 234.659459ms: waiting for machine to come up
	I1030 19:45:47.241308  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:47.241838  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:47.241863  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:47.241802  448432 retry.go:31] will retry after 350.804975ms: waiting for machine to come up
	I1030 19:45:47.594533  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:47.595106  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:47.595139  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:47.595044  448432 retry.go:31] will retry after 448.637889ms: waiting for machine to come up
	I1030 19:45:48.045858  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:48.046358  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:48.046386  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:48.046315  448432 retry.go:31] will retry after 543.947609ms: waiting for machine to come up
	I1030 19:45:48.592474  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:48.592908  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:48.592937  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:48.592875  448432 retry.go:31] will retry after 744.106735ms: waiting for machine to come up
	I1030 19:45:49.338345  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:49.338833  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:49.338857  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:49.338795  448432 retry.go:31] will retry after 927.743369ms: waiting for machine to come up
	I1030 19:45:50.267844  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:50.268359  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:50.268390  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:50.268324  448432 retry.go:31] will retry after 829.540351ms: waiting for machine to come up
	I1030 19:45:51.099379  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:51.099863  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:51.099893  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:51.099820  448432 retry.go:31] will retry after 898.768304ms: waiting for machine to come up
	I1030 19:45:52.000678  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:52.001196  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:52.001235  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:52.001148  448432 retry.go:31] will retry after 1.750749509s: waiting for machine to come up
	I1030 19:45:53.753607  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:53.754013  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:53.754038  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:53.753950  448432 retry.go:31] will retry after 1.537350682s: waiting for machine to come up
	I1030 19:45:55.293910  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:55.294396  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:55.294427  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:55.294336  448432 retry.go:31] will retry after 2.151521323s: waiting for machine to come up
	I1030 19:45:57.447894  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:57.448365  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:57.448392  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:57.448320  448432 retry.go:31] will retry after 2.439938206s: waiting for machine to come up
	I1030 19:45:59.889685  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:59.890166  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:59.890205  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:59.890113  448432 retry.go:31] will retry after 3.836080386s: waiting for machine to come up
	I1030 19:46:03.727617  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.728028  447486 main.go:141] libmachine: (old-k8s-version-516975) Found IP for machine: 192.168.50.250
	I1030 19:46:03.728046  447486 main.go:141] libmachine: (old-k8s-version-516975) Reserving static IP address...
	I1030 19:46:03.728062  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has current primary IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.728565  447486 main.go:141] libmachine: (old-k8s-version-516975) Reserved static IP address: 192.168.50.250
	I1030 19:46:03.728600  447486 main.go:141] libmachine: (old-k8s-version-516975) Waiting for SSH to be available...
	I1030 19:46:03.728616  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "old-k8s-version-516975", mac: "52:54:00:46:32:46", ip: "192.168.50.250"} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:03.728639  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | skip adding static IP to network mk-old-k8s-version-516975 - found existing host DHCP lease matching {name: "old-k8s-version-516975", mac: "52:54:00:46:32:46", ip: "192.168.50.250"}
	I1030 19:46:03.728657  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | Getting to WaitForSSH function...
	I1030 19:46:03.730754  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.731085  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:03.731121  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.731145  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | Using SSH client type: external
	I1030 19:46:03.731212  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa (-rw-------)
	I1030 19:46:03.731252  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.250 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 19:46:03.731275  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | About to run SSH command:
	I1030 19:46:03.731289  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | exit 0
	I1030 19:46:03.862423  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | SSH cmd err, output: <nil>: 
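
Both the "Waiting to get IP" loop and the WaitForSSH step above are driven by the retry helper visible in the retry.go:31 lines, where the wait interval grows on each attempt. A self-contained sketch of that shape follows; the backoff constants and growth factor are assumptions for illustration, not minikube's actual tuning:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff calls fn until it succeeds or maxAttempts is reached,
    // sleeping a jittered, growing interval between attempts, like the
    // "will retry after ..." lines in the log above.
    func retryWithBackoff(maxAttempts int, initial time.Duration, fn func() error) error {
    	wait := initial
    	var err error
    	for attempt := 1; attempt <= maxAttempts; attempt++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		sleep := wait + time.Duration(rand.Int63n(int64(wait))) // add jitter
    		fmt.Printf("attempt %d failed (%v): will retry after %s\n", attempt, err, sleep)
    		time.Sleep(sleep)
    		wait = wait * 3 / 2 // grow roughly 1.5x per attempt
    	}
    	return fmt.Errorf("gave up after %d attempts: %w", maxAttempts, err)
    }

    func main() {
    	tries := 0
    	err := retryWithBackoff(10, 250*time.Millisecond, func() error {
    		tries++
    		if tries < 4 {
    			return errors.New("waiting for machine to come up")
    		}
    		return nil
    	})
    	fmt.Println("result:", err)
    }
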
	I1030 19:46:03.862832  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetConfigRaw
	I1030 19:46:03.863519  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetIP
	I1030 19:46:03.865977  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.866262  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:03.866297  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.866512  447486 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/config.json ...
	I1030 19:46:03.866755  447486 machine.go:93] provisionDockerMachine start ...
	I1030 19:46:03.866783  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:46:03.866994  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:03.869079  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.869384  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:03.869410  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.869603  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:03.869787  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:03.869949  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:03.870102  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:03.870243  447486 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:03.870468  447486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I1030 19:46:03.870481  447486 main.go:141] libmachine: About to run SSH command:
	hostname
	I1030 19:46:03.982986  447486 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1030 19:46:03.983018  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetMachineName
	I1030 19:46:03.983285  447486 buildroot.go:166] provisioning hostname "old-k8s-version-516975"
	I1030 19:46:03.983319  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetMachineName
	I1030 19:46:03.983502  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:03.986203  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.986576  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:03.986615  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.986765  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:03.986983  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:03.987126  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:03.987258  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:03.987419  447486 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:03.987696  447486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I1030 19:46:03.987719  447486 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-516975 && echo "old-k8s-version-516975" | sudo tee /etc/hostname
	I1030 19:46:04.112692  447486 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-516975
	
	I1030 19:46:04.112719  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:04.115948  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.116283  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.116309  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.116482  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:04.116667  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.116842  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.116966  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:04.117104  447486 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:04.117275  447486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I1030 19:46:04.117290  447486 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-516975' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-516975/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-516975' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 19:46:04.235988  447486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
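
The shell snippet above ensures /etc/hosts maps 127.0.1.1 to the new hostname: rewrite an existing 127.0.1.1 entry if present, otherwise append one. The same check-then-rewrite logic, sketched in Go against a local copy of the file (the path and helper name are illustrative, not minikube code):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    	"strings"
    )

    // ensureHostname mirrors the logged shell: if no line already ends in the
    // hostname, either rewrite an existing "127.0.1.1 ..." entry or append one.
    func ensureHostname(hostsFile, hostname string) error {
    	data, err := os.ReadFile(hostsFile)
    	if err != nil {
    		return err
    	}
    	text := string(data)
    	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).MatchString(text) {
    		return nil // already present
    	}
    	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	if loopback.MatchString(text) {
    		text = loopback.ReplaceAllString(text, "127.0.1.1 "+hostname)
    	} else {
    		if !strings.HasSuffix(text, "\n") {
    			text += "\n"
    		}
    		text += "127.0.1.1 " + hostname + "\n"
    	}
    	return os.WriteFile(hostsFile, []byte(text), 0o644)
    }

    func main() {
    	if err := ensureHostname("/tmp/hosts-copy", "old-k8s-version-516975"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
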
	I1030 19:46:04.236032  447486 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 19:46:04.236098  447486 buildroot.go:174] setting up certificates
	I1030 19:46:04.236111  447486 provision.go:84] configureAuth start
	I1030 19:46:04.236124  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetMachineName
	I1030 19:46:04.236500  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetIP
	I1030 19:46:04.239328  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.239707  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.239739  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.240009  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:04.242118  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.242440  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.242505  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.242683  447486 provision.go:143] copyHostCerts
	I1030 19:46:04.242766  447486 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem, removing ...
	I1030 19:46:04.242787  447486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 19:46:04.242847  447486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 19:46:04.242972  447486 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem, removing ...
	I1030 19:46:04.242986  447486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 19:46:04.243011  447486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 19:46:04.243072  447486 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem, removing ...
	I1030 19:46:04.243079  447486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 19:46:04.243095  447486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 19:46:04.243153  447486 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-516975 san=[127.0.0.1 192.168.50.250 localhost minikube old-k8s-version-516975]
	I1030 19:46:04.355003  447486 provision.go:177] copyRemoteCerts
	I1030 19:46:04.355061  447486 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 19:46:04.355092  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:04.357788  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.358153  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.358191  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.358397  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:04.358630  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.358809  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:04.358970  447486 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa Username:docker}
	I1030 19:46:04.446614  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 19:46:04.473708  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1030 19:46:04.497721  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1030 19:46:04.521806  447486 provision.go:87] duration metric: took 285.682041ms to configureAuth
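
provision.go:117 above generates a server certificate whose SANs cover the VM IP and hostnames (san=[127.0.0.1 192.168.50.250 localhost minikube old-k8s-version-516975]). A compact sketch of issuing a SAN-bearing certificate with the Go standard library; note it is self-signed here purely for illustration, whereas minikube signs the server cert with its CA:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-516975"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs matching the san=[...] list in the provision.go line above.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.250")},
    		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-516975"},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
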
	I1030 19:46:04.521836  447486 buildroot.go:189] setting minikube options for container-runtime
	I1030 19:46:04.521999  447486 config.go:182] Loaded profile config "old-k8s-version-516975": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1030 19:46:04.522072  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:04.524616  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.525034  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.525065  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.525282  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:04.525452  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.525616  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.525745  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:04.525916  447486 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:04.526129  447486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I1030 19:46:04.526145  447486 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 19:46:04.766663  447486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 19:46:04.766697  447486 machine.go:96] duration metric: took 899.924211ms to provisionDockerMachine
	I1030 19:46:04.766709  447486 start.go:293] postStartSetup for "old-k8s-version-516975" (driver="kvm2")
	I1030 19:46:04.766720  447486 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 19:46:04.766745  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:46:04.767081  447486 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 19:46:04.767114  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:04.769995  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.770401  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.770428  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.770580  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:04.770762  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.770973  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:04.771132  447486 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa Username:docker}
	I1030 19:46:04.858006  447486 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 19:46:04.862295  447486 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 19:46:04.862324  447486 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 19:46:04.862387  447486 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 19:46:04.862475  447486 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 19:46:04.862612  447486 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 19:46:04.872541  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:46:04.896306  447486 start.go:296] duration metric: took 129.577956ms for postStartSetup
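
filesync.go above scans .minikube/files and maps each local asset to its in-guest destination (here 3891442.pem -> /etc/ssl/certs). A hedged sketch of that scan-and-map step; the directory-layout convention is taken from the log lines, but the walker itself is only illustrative:

    package main

    import (
    	"fmt"
    	"io/fs"
    	"path/filepath"
    	"strings"
    )

    // listAssets walks root and returns a map of local file -> destination path,
    // where the destination is the file's path relative to root, rooted at "/",
    // e.g. <root>/etc/ssl/certs/3891442.pem -> /etc/ssl/certs/3891442.pem.
    func listAssets(root string) (map[string]string, error) {
    	assets := map[string]string{}
    	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
    		if err != nil || d.IsDir() {
    			return err
    		}
    		rel, err := filepath.Rel(root, path)
    		if err != nil {
    			return err
    		}
    		assets[path] = "/" + strings.ReplaceAll(rel, string(filepath.Separator), "/")
    		return nil
    	})
    	return assets, err
    }

    func main() {
    	assets, err := listAssets("/home/jenkins/minikube-integration/19883-381834/.minikube/files")
    	if err != nil {
    		panic(err)
    	}
    	for src, dst := range assets {
    		fmt.Printf("local asset: %s -> %s\n", src, dst)
    	}
    }
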
	I1030 19:46:04.896360  447486 fix.go:56] duration metric: took 19.265077419s for fixHost
	I1030 19:46:04.896383  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:04.899009  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.899397  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.899429  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.899538  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:04.899739  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.899906  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.900101  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:04.900271  447486 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:04.900510  447486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I1030 19:46:04.900525  447486 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 19:46:05.011439  447486 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730317564.967936408
	
	I1030 19:46:05.011464  447486 fix.go:216] guest clock: 1730317564.967936408
	I1030 19:46:05.011472  447486 fix.go:229] Guest: 2024-10-30 19:46:04.967936408 +0000 UTC Remote: 2024-10-30 19:46:04.896364572 +0000 UTC m=+233.135558535 (delta=71.571836ms)
	I1030 19:46:05.011516  447486 fix.go:200] guest clock delta is within tolerance: 71.571836ms
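
fix.go above runs `date +%s.%N` on the guest and compares the result against the host clock, accepting the small delta seen here. A self-contained sketch of parsing that output and applying a tolerance check; the tolerance value is an assumption, not minikube's setting:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock turns `date +%s.%N` output (e.g. "1730317564.967936408")
    // into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1730317564.967936408")
    	if err != nil {
    		panic(err)
    	}
    	delta := time.Since(guest)
    	const tolerance = 2 * time.Second // assumed tolerance for the sketch
    	if delta < -tolerance || delta > tolerance {
    		fmt.Printf("guest clock delta %s exceeds tolerance, would resync\n", delta)
    	} else {
    		fmt.Printf("guest clock delta %s is within tolerance\n", delta)
    	}
    }
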
	I1030 19:46:05.011525  447486 start.go:83] releasing machines lock for "old-k8s-version-516975", held for 19.380292064s
	I1030 19:46:05.011552  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:46:05.011853  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetIP
	I1030 19:46:05.014722  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:05.015072  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:05.015100  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:05.015225  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:46:05.015808  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:46:05.016002  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:46:05.016107  447486 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 19:46:05.016155  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:05.016265  447486 ssh_runner.go:195] Run: cat /version.json
	I1030 19:46:05.016296  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:05.018976  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:05.019189  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:05.019326  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:05.019370  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:05.019517  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:05.019604  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:05.019632  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:05.019708  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:05.019830  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:05.019918  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:05.019995  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:05.020077  447486 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa Username:docker}
	I1030 19:46:05.020157  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:05.020295  447486 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa Username:docker}
	I1030 19:46:05.100852  447486 ssh_runner.go:195] Run: systemctl --version
	I1030 19:46:05.127673  447486 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 19:46:05.279889  447486 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 19:46:05.285900  447486 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 19:46:05.285976  447486 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 19:46:05.304763  447486 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 19:46:05.304791  447486 start.go:495] detecting cgroup driver to use...
	I1030 19:46:05.304862  447486 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 19:46:05.325729  447486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 19:46:05.343047  447486 docker.go:217] disabling cri-docker service (if available) ...
	I1030 19:46:05.343128  447486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 19:46:05.358748  447486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 19:46:05.374769  447486 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 19:46:05.492589  447486 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 19:46:05.639943  447486 docker.go:233] disabling docker service ...
	I1030 19:46:05.640039  447486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 19:46:05.655449  447486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 19:46:05.669688  447486 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 19:46:05.814658  447486 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 19:46:05.957944  447486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 19:46:05.972122  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 19:46:05.990577  447486 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1030 19:46:05.990653  447486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:06.000834  447486 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 19:46:06.000907  447486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:06.011678  447486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:06.022051  447486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:06.032515  447486 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 19:46:06.043296  447486 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 19:46:06.053123  447486 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 19:46:06.053170  447486 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 19:46:06.067625  447486 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 19:46:06.081306  447486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:46:06.221181  447486 ssh_runner.go:195] Run: sudo systemctl restart crio
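
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon_cgroup) before crio is restarted. The same line-oriented rewrite, sketched in Go against a local copy of the file; the file name is reused from the log, but the helper is illustrative only:

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // rewriteLine replaces every line matching pattern with repl, mirroring
    // the `sed -i 's|^...$|...|'` invocations in the log above.
    func rewriteLine(path, pattern, repl string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	out := regexp.MustCompile("(?m)" + pattern).ReplaceAll(data, []byte(repl))
    	return os.WriteFile(path, out, 0o644)
    }

    func main() {
    	conf := "/tmp/02-crio.conf" // operate on a local copy, not the guest's file
    	edits := []struct{ pattern, repl string }{
    		{`^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.2"`},
    		{`^.*cgroup_manager = .*$`, `cgroup_manager = "cgroupfs"`},
    	}
    	for _, e := range edits {
    		if err := rewriteLine(conf, e.pattern, e.repl); err != nil {
    			fmt.Fprintln(os.Stderr, err)
    			os.Exit(1)
    		}
    	}
    }
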
	I1030 19:46:06.321848  447486 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 19:46:06.321926  447486 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 19:46:06.329697  447486 start.go:563] Will wait 60s for crictl version
	I1030 19:46:06.329757  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:06.333980  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 19:46:06.381198  447486 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1030 19:46:06.381290  447486 ssh_runner.go:195] Run: crio --version
	I1030 19:46:06.410365  447486 ssh_runner.go:195] Run: crio --version
	I1030 19:46:06.442329  447486 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1030 19:46:06.443471  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetIP
	I1030 19:46:06.446233  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:06.446621  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:06.446653  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:06.446822  447486 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1030 19:46:06.451216  447486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:46:06.464477  447486 kubeadm.go:883] updating cluster {Name:old-k8s-version-516975 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-516975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1030 19:46:06.464607  447486 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1030 19:46:06.464668  447486 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:46:06.513123  447486 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1030 19:46:06.513205  447486 ssh_runner.go:195] Run: which lz4
	I1030 19:46:06.517252  447486 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1030 19:46:06.521358  447486 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1030 19:46:06.521384  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1030 19:46:08.144772  447486 crio.go:462] duration metric: took 1.627547543s to copy over tarball
	I1030 19:46:08.144845  447486 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1030 19:46:11.104192  447486 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.959302647s)
	I1030 19:46:11.104228  447486 crio.go:469] duration metric: took 2.959426051s to extract the tarball
	I1030 19:46:11.104240  447486 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1030 19:46:11.146584  447486 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:46:11.183766  447486 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1030 19:46:11.183797  447486 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1030 19:46:11.183889  447486 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:11.183917  447486 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.183932  447486 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:11.183968  447486 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.184087  447486 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.183972  447486 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1030 19:46:11.183969  447486 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.183928  447486 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:11.185976  447486 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:11.186001  447486 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1030 19:46:11.186043  447486 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.186053  447486 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.186046  447486 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:11.185977  447486 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:11.186108  447486 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.186150  447486 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
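
The cache_images.go lines above show minikube comparing the list of required images against what the runtime reports via `sudo crictl images --output json`, then marking anything missing as "needs transfer". A hedged sketch of that comparison; the JSON field names follow crictl's usual output shape, but treat them (and the hard-coded image list) as assumptions:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // crictlImages models the subset of `crictl images --output json` used here.
    type crictlImages struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func main() {
    	required := []string{
    		"registry.k8s.io/kube-apiserver:v1.20.0",
    		"registry.k8s.io/etcd:3.4.13-0",
    		"registry.k8s.io/coredns:1.7.0",
    		"registry.k8s.io/pause:3.2",
    	}
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		panic(err)
    	}
    	var listed crictlImages
    	if err := json.Unmarshal(out, &listed); err != nil {
    		panic(err)
    	}
    	present := map[string]bool{}
    	for _, img := range listed.Images {
    		for _, tag := range img.RepoTags {
    			present[tag] = true
    		}
    	}
    	for _, img := range required {
    		if !present[img] {
    			fmt.Printf("%q needs transfer: not in container runtime\n", img)
    		}
    	}
    }
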
	I1030 19:46:11.348134  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.391191  447486 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1030 19:46:11.391327  447486 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.391399  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.396693  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.400062  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.406656  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1030 19:46:11.410534  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.410590  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.441896  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:11.460400  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:11.482465  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.554431  447486 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1030 19:46:11.554480  447486 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.554549  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.610376  447486 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1030 19:46:11.610424  447486 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1030 19:46:11.610471  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.616060  447486 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1030 19:46:11.616104  447486 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.616153  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.616177  447486 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1030 19:46:11.616217  447486 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.616282  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.617473  447486 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1030 19:46:11.617502  447486 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:11.617535  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.652124  447486 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1030 19:46:11.652185  447486 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:11.652228  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.652233  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.652237  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.652331  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1030 19:46:11.652376  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.652433  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.652483  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:11.798844  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1030 19:46:11.798859  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1030 19:46:11.798873  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.798949  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:11.799075  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.799179  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.799182  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:11.942258  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:11.942265  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1030 19:46:11.942365  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.942352  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.942421  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.946933  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:12.064951  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1030 19:46:12.067930  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:12.067990  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1030 19:46:12.068057  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1030 19:46:12.068078  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1030 19:46:12.083122  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1030 19:46:12.107265  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1030 19:46:13.402970  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:13.551979  447486 cache_images.go:92] duration metric: took 2.368158873s to LoadCachedImages
	W1030 19:46:13.552080  447486 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I1030 19:46:13.552096  447486 kubeadm.go:934] updating node { 192.168.50.250 8443 v1.20.0 crio true true} ...
	I1030 19:46:13.552211  447486 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-516975 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-516975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1030 19:46:13.552276  447486 ssh_runner.go:195] Run: crio config
	I1030 19:46:13.605982  447486 cni.go:84] Creating CNI manager for ""
	I1030 19:46:13.606008  447486 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:46:13.606020  447486 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1030 19:46:13.606049  447486 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.250 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-516975 NodeName:old-k8s-version-516975 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.250"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.250 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1030 19:46:13.606223  447486 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.250
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-516975"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.250
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.250"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1030 19:46:13.606299  447486 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1030 19:46:13.616954  447486 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 19:46:13.617034  447486 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1030 19:46:13.627440  447486 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1030 19:46:13.644821  447486 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 19:46:13.662070  447486 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1030 19:46:13.679198  447486 ssh_runner.go:195] Run: grep 192.168.50.250	control-plane.minikube.internal$ /etc/hosts
	I1030 19:46:13.682992  447486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.250	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:46:13.697879  447486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:46:13.819975  447486 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:46:13.838669  447486 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975 for IP: 192.168.50.250
	I1030 19:46:13.838695  447486 certs.go:194] generating shared ca certs ...
	I1030 19:46:13.838716  447486 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:46:13.838888  447486 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 19:46:13.838946  447486 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 19:46:13.838962  447486 certs.go:256] generating profile certs ...
	I1030 19:46:13.839064  447486 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/client.key
	I1030 19:46:13.839149  447486 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/apiserver.key.685bdf3e
	I1030 19:46:13.839208  447486 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/proxy-client.key
	I1030 19:46:13.839375  447486 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem (1338 bytes)
	W1030 19:46:13.839429  447486 certs.go:480] ignoring /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144_empty.pem, impossibly tiny 0 bytes
	I1030 19:46:13.839442  447486 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 19:46:13.839476  447486 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 19:46:13.839509  447486 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 19:46:13.839545  447486 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 19:46:13.839609  447486 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:46:13.840381  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 19:46:13.868947  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 19:46:13.923848  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 19:46:13.973167  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 19:46:14.009333  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1030 19:46:14.042397  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1030 19:46:14.073927  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 19:46:14.109209  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1030 19:46:14.135708  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /usr/share/ca-certificates/3891442.pem (1708 bytes)
	I1030 19:46:14.162145  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 19:46:14.186176  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem --> /usr/share/ca-certificates/389144.pem (1338 bytes)
	I1030 19:46:14.210362  447486 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1030 19:46:14.228727  447486 ssh_runner.go:195] Run: openssl version
	I1030 19:46:14.234436  447486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/389144.pem && ln -fs /usr/share/ca-certificates/389144.pem /etc/ssl/certs/389144.pem"
	I1030 19:46:14.245497  447486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/389144.pem
	I1030 19:46:14.250026  447486 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 19:46:14.250077  447486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/389144.pem
	I1030 19:46:14.255727  447486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/389144.pem /etc/ssl/certs/51391683.0"
	I1030 19:46:14.266674  447486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3891442.pem && ln -fs /usr/share/ca-certificates/3891442.pem /etc/ssl/certs/3891442.pem"
	I1030 19:46:14.277813  447486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3891442.pem
	I1030 19:46:14.282378  447486 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 19:46:14.282435  447486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3891442.pem
	I1030 19:46:14.288338  447486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3891442.pem /etc/ssl/certs/3ec20f2e.0"
	I1030 19:46:14.300057  447486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 19:46:14.312295  447486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:46:14.317488  447486 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:46:14.317555  447486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:46:14.323518  447486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 19:46:14.335182  447486 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 19:46:14.339998  447486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1030 19:46:14.346145  447486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1030 19:46:14.352474  447486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1030 19:46:14.358687  447486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1030 19:46:14.364275  447486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1030 19:46:14.370038  447486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1030 19:46:14.376051  447486 kubeadm.go:392] StartCluster: {Name:old-k8s-version-516975 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-516975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:46:14.376144  447486 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1030 19:46:14.376187  447486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:46:14.423395  447486 cri.go:89] found id: ""
	I1030 19:46:14.423477  447486 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1030 19:46:14.435404  447486 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1030 19:46:14.435485  447486 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1030 19:46:14.435558  447486 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1030 19:46:14.448035  447486 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1030 19:46:14.448911  447486 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-516975" does not appear in /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 19:46:14.449557  447486 kubeconfig.go:62] /home/jenkins/minikube-integration/19883-381834/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-516975" cluster setting kubeconfig missing "old-k8s-version-516975" context setting]
	I1030 19:46:14.450419  447486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/kubeconfig: {Name:mkad9ac9beb9742fc1a420e6d4412db8b65962e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:46:14.452252  447486 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1030 19:46:14.462634  447486 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.250
	I1030 19:46:14.462676  447486 kubeadm.go:1160] stopping kube-system containers ...
	I1030 19:46:14.462693  447486 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1030 19:46:14.462750  447486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:46:14.508286  447486 cri.go:89] found id: ""
	I1030 19:46:14.508380  447486 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1030 19:46:14.527996  447486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:46:14.539011  447486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:46:14.539037  447486 kubeadm.go:157] found existing configuration files:
	
	I1030 19:46:14.539094  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 19:46:14.550159  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:46:14.550243  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:46:14.561350  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 19:46:14.571353  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:46:14.571430  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:46:14.584480  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 19:46:14.598307  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:46:14.598400  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:46:14.611632  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 19:46:14.621644  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:46:14.621705  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1030 19:46:14.632161  447486 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 19:46:14.642295  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:14.783130  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:15.694839  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:15.923329  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:16.052124  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:16.143607  447486 api_server.go:52] waiting for apiserver process to appear ...
	I1030 19:46:16.143710  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:16.643943  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:17.144678  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:17.644772  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:18.144037  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:18.644437  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:19.144273  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:19.643801  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:20.144200  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:20.644764  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:21.143898  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:21.643960  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:22.144625  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:22.644446  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:23.144207  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:23.644001  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:24.143787  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:24.644166  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:25.144397  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:25.644654  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:26.144214  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:26.644275  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:27.143768  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:27.644294  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:28.143819  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:28.643783  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:29.144405  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:29.643941  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:30.144680  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:30.644787  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:31.143873  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:31.643857  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:32.144229  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:32.644079  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:33.144028  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:33.643950  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:34.143888  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:34.643861  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:35.144210  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:35.644677  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:36.144712  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:36.644549  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:37.144681  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:37.643833  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:38.143783  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:38.644359  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:39.144745  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:39.644625  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:40.144535  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:40.643881  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:41.144754  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:41.644070  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:42.144672  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:42.644533  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:43.144320  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:43.644574  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:44.144465  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:44.644428  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:45.143785  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:45.643767  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:46.144467  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:46.644496  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:47.143932  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:47.644228  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:48.144124  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:48.643923  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:49.144466  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:49.643968  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:50.144811  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:50.643785  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:51.144372  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:51.644019  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:52.144732  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:52.644528  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:53.144074  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:53.643889  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:54.143976  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:54.644535  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:55.144783  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:55.644114  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:56.144728  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:56.643846  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:57.143829  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:57.644245  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:58.144327  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:58.644684  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:59.144712  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:59.644799  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:00.144222  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:00.644111  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:01.144268  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:01.644631  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:02.143881  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:02.644208  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:03.144411  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:03.643948  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:04.144028  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:04.644179  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:05.144791  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:05.643983  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:06.143859  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:06.644436  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:07.144765  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:07.644280  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:08.144381  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:08.644099  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:09.144129  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:09.643864  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:10.144105  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:10.643752  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:11.144135  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:11.644172  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:12.144391  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:12.644441  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:13.143916  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:13.644779  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:14.144680  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:14.644634  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:15.144050  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:15.644738  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:16.143957  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:16.144037  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:16.184282  447486 cri.go:89] found id: ""
	I1030 19:47:16.184310  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.184320  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:16.184327  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:16.184403  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:16.225359  447486 cri.go:89] found id: ""
	I1030 19:47:16.225388  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.225397  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:16.225403  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:16.225471  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:16.260591  447486 cri.go:89] found id: ""
	I1030 19:47:16.260625  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.260635  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:16.260641  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:16.260695  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:16.299562  447486 cri.go:89] found id: ""
	I1030 19:47:16.299591  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.299602  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:16.299609  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:16.299682  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:16.334753  447486 cri.go:89] found id: ""
	I1030 19:47:16.334781  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.334789  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:16.334795  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:16.334877  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:16.371588  447486 cri.go:89] found id: ""
	I1030 19:47:16.371619  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.371628  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:16.371634  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:16.371689  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:16.406668  447486 cri.go:89] found id: ""
	I1030 19:47:16.406699  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.406710  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:16.406718  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:16.406786  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:16.443050  447486 cri.go:89] found id: ""
	I1030 19:47:16.443081  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.443096  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:16.443109  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:16.443125  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:16.492898  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:16.492936  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:16.506310  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:16.506343  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:16.637629  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:16.637660  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:16.637677  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:16.709581  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:16.709621  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:19.253501  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:19.267200  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:19.267276  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:19.303608  447486 cri.go:89] found id: ""
	I1030 19:47:19.303641  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.303651  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:19.303658  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:19.303711  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:19.341311  447486 cri.go:89] found id: ""
	I1030 19:47:19.341343  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.341354  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:19.341363  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:19.341427  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:19.376949  447486 cri.go:89] found id: ""
	I1030 19:47:19.376977  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.376987  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:19.376996  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:19.377075  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:19.414164  447486 cri.go:89] found id: ""
	I1030 19:47:19.414197  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.414209  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:19.414218  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:19.414308  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:19.450637  447486 cri.go:89] found id: ""
	I1030 19:47:19.450671  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.450683  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:19.450692  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:19.450761  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:19.485315  447486 cri.go:89] found id: ""
	I1030 19:47:19.485345  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.485355  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:19.485364  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:19.485427  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:19.519873  447486 cri.go:89] found id: ""
	I1030 19:47:19.519901  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.519911  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:19.519919  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:19.519982  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:19.555168  447486 cri.go:89] found id: ""
	I1030 19:47:19.555198  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.555211  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:19.555223  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:19.555239  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:19.607227  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:19.607265  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:19.621465  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:19.621498  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:19.700837  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:19.700869  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:19.700882  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:19.774428  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:19.774468  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:22.314410  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:22.327998  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:22.328083  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:22.365583  447486 cri.go:89] found id: ""
	I1030 19:47:22.365611  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.365622  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:22.365631  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:22.365694  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:22.398964  447486 cri.go:89] found id: ""
	I1030 19:47:22.398996  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.399008  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:22.399016  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:22.399092  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:22.435132  447486 cri.go:89] found id: ""
	I1030 19:47:22.435169  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.435181  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:22.435188  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:22.435252  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:22.471510  447486 cri.go:89] found id: ""
	I1030 19:47:22.471544  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.471557  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:22.471574  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:22.471630  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:22.509611  447486 cri.go:89] found id: ""
	I1030 19:47:22.509639  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.509647  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:22.509653  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:22.509707  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:22.546502  447486 cri.go:89] found id: ""
	I1030 19:47:22.546539  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.546552  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:22.546560  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:22.546630  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:22.584560  447486 cri.go:89] found id: ""
	I1030 19:47:22.584593  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.584605  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:22.584613  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:22.584676  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:22.621421  447486 cri.go:89] found id: ""
	I1030 19:47:22.621461  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.621474  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:22.621486  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:22.621505  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:22.634998  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:22.635038  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:22.711002  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:22.711028  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:22.711047  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:22.790673  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:22.790712  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:22.831804  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:22.831851  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:25.386915  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:25.399854  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:25.399954  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:25.438346  447486 cri.go:89] found id: ""
	I1030 19:47:25.438381  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.438406  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:25.438416  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:25.438500  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:25.474888  447486 cri.go:89] found id: ""
	I1030 19:47:25.474915  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.474924  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:25.474931  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:25.474994  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:25.511925  447486 cri.go:89] found id: ""
	I1030 19:47:25.511955  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.511966  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:25.511973  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:25.512038  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:25.551027  447486 cri.go:89] found id: ""
	I1030 19:47:25.551058  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.551067  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:25.551073  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:25.551144  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:25.584736  447486 cri.go:89] found id: ""
	I1030 19:47:25.584764  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.584773  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:25.584779  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:25.584847  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:25.632765  447486 cri.go:89] found id: ""
	I1030 19:47:25.632798  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.632810  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:25.632818  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:25.632893  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:25.682501  447486 cri.go:89] found id: ""
	I1030 19:47:25.682528  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.682536  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:25.682543  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:25.682591  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:25.728306  447486 cri.go:89] found id: ""
	I1030 19:47:25.728340  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.728352  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:25.728365  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:25.728397  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:25.781908  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:25.781944  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:25.795864  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:25.795899  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:25.868350  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:25.868378  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:25.868392  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:25.944244  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:25.944277  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
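
The "container status" step above is a shell fallback chain: the backtick substitution resolves crictl's full path (or leaves the bare name so a missing binary still produces a clear error), and only if that whole crictl invocation fails does it fall back to docker ps -a. An equivalent, purely illustrative expansion:

    CRICTL="$(which crictl || echo crictl)"
    sudo "$CRICTL" ps -a || sudo docker ps -a

On this CRI-O node the crictl branch is the one that runs, as the earlier per-component crictl listings confirm it is installed and working.
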
	I1030 19:47:28.488216  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:28.501481  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:28.501558  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:28.536808  447486 cri.go:89] found id: ""
	I1030 19:47:28.536838  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.536849  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:28.536857  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:28.536923  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:28.571819  447486 cri.go:89] found id: ""
	I1030 19:47:28.571855  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.571867  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:28.571885  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:28.571966  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:28.605532  447486 cri.go:89] found id: ""
	I1030 19:47:28.605571  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.605582  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:28.605610  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:28.605682  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:28.642108  447486 cri.go:89] found id: ""
	I1030 19:47:28.642140  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.642152  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:28.642159  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:28.642234  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:28.680036  447486 cri.go:89] found id: ""
	I1030 19:47:28.680065  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.680078  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:28.680086  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:28.680151  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:28.716135  447486 cri.go:89] found id: ""
	I1030 19:47:28.716162  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.716171  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:28.716177  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:28.716238  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:28.752364  447486 cri.go:89] found id: ""
	I1030 19:47:28.752398  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.752406  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:28.752413  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:28.752478  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:28.788396  447486 cri.go:89] found id: ""
	I1030 19:47:28.788434  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.788447  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:28.788461  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:28.788476  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:28.841560  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:28.841595  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:28.856134  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:28.856178  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:28.930463  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:28.930507  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:28.930527  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:29.013743  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:29.013795  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:31.557942  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:31.573562  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:31.573654  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:31.625349  447486 cri.go:89] found id: ""
	I1030 19:47:31.625378  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.625386  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:31.625392  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:31.625442  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:31.689536  447486 cri.go:89] found id: ""
	I1030 19:47:31.689566  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.689574  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:31.689581  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:31.689632  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:31.723758  447486 cri.go:89] found id: ""
	I1030 19:47:31.723794  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.723806  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:31.723814  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:31.723890  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:31.762671  447486 cri.go:89] found id: ""
	I1030 19:47:31.762698  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.762707  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:31.762713  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:31.762761  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:31.797658  447486 cri.go:89] found id: ""
	I1030 19:47:31.797686  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.797694  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:31.797702  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:31.797792  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:31.832186  447486 cri.go:89] found id: ""
	I1030 19:47:31.832217  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.832228  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:31.832236  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:31.832298  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:31.866820  447486 cri.go:89] found id: ""
	I1030 19:47:31.866853  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.866866  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:31.866875  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:31.866937  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:31.901888  447486 cri.go:89] found id: ""
	I1030 19:47:31.901913  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.901922  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:31.901932  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:31.901944  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:31.992343  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:31.992380  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:32.030519  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:32.030559  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:32.084442  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:32.084478  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:32.098919  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:32.098954  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:32.171034  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
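
The recurring "connection to the server localhost:8443 was refused" is the same failure seen in the empty container listings, viewed from the client side: the describe-nodes step talks to the apiserver address recorded in /var/lib/minikube/kubeconfig, and nothing is listening on 8443 because no kube-apiserver container was ever created. A quick way to confirm that on the node, hedged in that it assumes ss from iproute2 is present in the guest image:

    sudo ss -ltn 'sport = :8443'    # no listener here is consistent with the empty crictl listings above
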
	I1030 19:47:34.671243  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:34.685879  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:34.685972  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:34.720657  447486 cri.go:89] found id: ""
	I1030 19:47:34.720686  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.720694  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:34.720700  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:34.720757  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:34.759571  447486 cri.go:89] found id: ""
	I1030 19:47:34.759602  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.759615  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:34.759624  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:34.759685  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:34.795273  447486 cri.go:89] found id: ""
	I1030 19:47:34.795313  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.795322  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:34.795329  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:34.795450  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:34.828999  447486 cri.go:89] found id: ""
	I1030 19:47:34.829035  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.829047  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:34.829054  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:34.829158  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:34.865620  447486 cri.go:89] found id: ""
	I1030 19:47:34.865661  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.865674  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:34.865682  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:34.865753  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:34.900768  447486 cri.go:89] found id: ""
	I1030 19:47:34.900801  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.900812  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:34.900820  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:34.900889  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:34.945023  447486 cri.go:89] found id: ""
	I1030 19:47:34.945048  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.945057  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:34.945063  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:34.945118  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:34.980458  447486 cri.go:89] found id: ""
	I1030 19:47:34.980483  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.980492  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:34.980501  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:34.980514  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:35.052570  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:35.052597  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:35.052613  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:35.133825  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:35.133869  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:35.176016  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:35.176063  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:35.228866  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:35.228903  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:37.743408  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:37.757472  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:37.757547  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:37.794818  447486 cri.go:89] found id: ""
	I1030 19:47:37.794847  447486 logs.go:282] 0 containers: []
	W1030 19:47:37.794856  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:37.794862  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:37.794928  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:37.830025  447486 cri.go:89] found id: ""
	I1030 19:47:37.830064  447486 logs.go:282] 0 containers: []
	W1030 19:47:37.830077  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:37.830086  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:37.830150  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:37.864862  447486 cri.go:89] found id: ""
	I1030 19:47:37.864893  447486 logs.go:282] 0 containers: []
	W1030 19:47:37.864902  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:37.864908  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:37.864958  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:37.901650  447486 cri.go:89] found id: ""
	I1030 19:47:37.901699  447486 logs.go:282] 0 containers: []
	W1030 19:47:37.901713  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:37.901722  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:37.901780  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:37.935824  447486 cri.go:89] found id: ""
	I1030 19:47:37.935854  447486 logs.go:282] 0 containers: []
	W1030 19:47:37.935862  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:37.935868  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:37.935930  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:37.972774  447486 cri.go:89] found id: ""
	I1030 19:47:37.972805  447486 logs.go:282] 0 containers: []
	W1030 19:47:37.972813  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:37.972819  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:37.972868  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:38.007815  447486 cri.go:89] found id: ""
	I1030 19:47:38.007845  447486 logs.go:282] 0 containers: []
	W1030 19:47:38.007856  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:38.007864  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:38.007931  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:38.042525  447486 cri.go:89] found id: ""
	I1030 19:47:38.042559  447486 logs.go:282] 0 containers: []
	W1030 19:47:38.042571  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:38.042584  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:38.042600  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:38.122022  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:38.122048  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:38.122065  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:38.200534  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:38.200575  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:38.240118  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:38.240154  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:38.291936  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:38.291976  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:40.806105  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:40.821268  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:40.821343  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:40.857151  447486 cri.go:89] found id: ""
	I1030 19:47:40.857186  447486 logs.go:282] 0 containers: []
	W1030 19:47:40.857198  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:40.857207  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:40.857266  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:40.893603  447486 cri.go:89] found id: ""
	I1030 19:47:40.893639  447486 logs.go:282] 0 containers: []
	W1030 19:47:40.893648  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:40.893654  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:40.893720  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:40.935294  447486 cri.go:89] found id: ""
	I1030 19:47:40.935330  447486 logs.go:282] 0 containers: []
	W1030 19:47:40.935342  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:40.935349  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:40.935418  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:40.971509  447486 cri.go:89] found id: ""
	I1030 19:47:40.971536  447486 logs.go:282] 0 containers: []
	W1030 19:47:40.971544  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:40.971550  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:40.971610  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:41.009895  447486 cri.go:89] found id: ""
	I1030 19:47:41.009932  447486 logs.go:282] 0 containers: []
	W1030 19:47:41.009941  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:41.009948  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:41.010008  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:41.045170  447486 cri.go:89] found id: ""
	I1030 19:47:41.045208  447486 logs.go:282] 0 containers: []
	W1030 19:47:41.045221  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:41.045229  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:41.045288  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:41.077654  447486 cri.go:89] found id: ""
	I1030 19:47:41.077684  447486 logs.go:282] 0 containers: []
	W1030 19:47:41.077695  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:41.077704  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:41.077771  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:41.111509  447486 cri.go:89] found id: ""
	I1030 19:47:41.111543  447486 logs.go:282] 0 containers: []
	W1030 19:47:41.111552  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:41.111562  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:41.111574  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:41.164939  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:41.164976  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:41.178512  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:41.178589  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:41.258783  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:41.258813  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:41.258832  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:41.338192  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:41.338230  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:43.878155  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:43.892376  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:43.892452  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:43.930556  447486 cri.go:89] found id: ""
	I1030 19:47:43.930594  447486 logs.go:282] 0 containers: []
	W1030 19:47:43.930606  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:43.930614  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:43.930679  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:43.970588  447486 cri.go:89] found id: ""
	I1030 19:47:43.970619  447486 logs.go:282] 0 containers: []
	W1030 19:47:43.970630  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:43.970638  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:43.970706  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:44.005467  447486 cri.go:89] found id: ""
	I1030 19:47:44.005497  447486 logs.go:282] 0 containers: []
	W1030 19:47:44.005506  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:44.005512  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:44.005573  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:44.039126  447486 cri.go:89] found id: ""
	I1030 19:47:44.039164  447486 logs.go:282] 0 containers: []
	W1030 19:47:44.039173  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:44.039179  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:44.039239  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:44.072961  447486 cri.go:89] found id: ""
	I1030 19:47:44.072994  447486 logs.go:282] 0 containers: []
	W1030 19:47:44.073006  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:44.073014  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:44.073109  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:44.105864  447486 cri.go:89] found id: ""
	I1030 19:47:44.105891  447486 logs.go:282] 0 containers: []
	W1030 19:47:44.105900  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:44.105907  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:44.105956  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:44.138198  447486 cri.go:89] found id: ""
	I1030 19:47:44.138240  447486 logs.go:282] 0 containers: []
	W1030 19:47:44.138250  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:44.138264  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:44.138331  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:44.172529  447486 cri.go:89] found id: ""
	I1030 19:47:44.172558  447486 logs.go:282] 0 containers: []
	W1030 19:47:44.172567  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:44.172577  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:44.172594  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:44.248215  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:44.248254  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:44.286169  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:44.286202  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:44.341129  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:44.341167  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:44.354570  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:44.354597  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:44.427790  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:46.928728  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:46.943068  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:46.943154  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:46.978385  447486 cri.go:89] found id: ""
	I1030 19:47:46.978416  447486 logs.go:282] 0 containers: []
	W1030 19:47:46.978428  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:46.978436  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:46.978522  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:47.020413  447486 cri.go:89] found id: ""
	I1030 19:47:47.020457  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.020469  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:47.020476  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:47.020547  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:47.061492  447486 cri.go:89] found id: ""
	I1030 19:47:47.061526  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.061538  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:47.061547  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:47.061611  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:47.097621  447486 cri.go:89] found id: ""
	I1030 19:47:47.097659  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.097670  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:47.097679  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:47.097739  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:47.131740  447486 cri.go:89] found id: ""
	I1030 19:47:47.131769  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.131779  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:47.131785  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:47.131856  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:47.167623  447486 cri.go:89] found id: ""
	I1030 19:47:47.167661  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.167674  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:47.167682  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:47.167746  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:47.202299  447486 cri.go:89] found id: ""
	I1030 19:47:47.202328  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.202337  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:47.202344  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:47.202401  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:47.236652  447486 cri.go:89] found id: ""
	I1030 19:47:47.236686  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.236695  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:47.236704  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:47.236716  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:47.289700  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:47.289740  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:47.304929  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:47.304964  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:47.374811  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:47.374842  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:47.374858  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:47.449161  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:47.449196  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:49.989730  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:50.002741  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:50.002821  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:50.037602  447486 cri.go:89] found id: ""
	I1030 19:47:50.037636  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.037647  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:50.037655  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:50.037724  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:50.071346  447486 cri.go:89] found id: ""
	I1030 19:47:50.071383  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.071395  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:50.071405  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:50.071473  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:50.106657  447486 cri.go:89] found id: ""
	I1030 19:47:50.106698  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.106711  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:50.106719  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:50.106783  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:50.140974  447486 cri.go:89] found id: ""
	I1030 19:47:50.141012  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.141025  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:50.141032  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:50.141105  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:50.177715  447486 cri.go:89] found id: ""
	I1030 19:47:50.177748  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.177756  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:50.177763  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:50.177824  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:50.212234  447486 cri.go:89] found id: ""
	I1030 19:47:50.212263  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.212272  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:50.212278  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:50.212337  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:50.250791  447486 cri.go:89] found id: ""
	I1030 19:47:50.250826  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.250835  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:50.250842  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:50.250908  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:50.288575  447486 cri.go:89] found id: ""
	I1030 19:47:50.288607  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.288615  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:50.288628  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:50.288643  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:50.358015  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:50.358039  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:50.358054  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:50.433194  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:50.433235  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:50.473485  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:50.473519  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:50.523581  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:50.523618  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:53.038393  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:53.052835  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:53.052910  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:53.088797  447486 cri.go:89] found id: ""
	I1030 19:47:53.088828  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.088837  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:53.088843  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:53.088897  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:53.124627  447486 cri.go:89] found id: ""
	I1030 19:47:53.124659  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.124668  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:53.124674  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:53.124724  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:53.159127  447486 cri.go:89] found id: ""
	I1030 19:47:53.159163  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.159175  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:53.159183  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:53.159244  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:53.191770  447486 cri.go:89] found id: ""
	I1030 19:47:53.191801  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.191810  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:53.191817  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:53.191885  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:53.227727  447486 cri.go:89] found id: ""
	I1030 19:47:53.227761  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.227774  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:53.227781  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:53.227842  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:53.262937  447486 cri.go:89] found id: ""
	I1030 19:47:53.262969  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.262981  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:53.262989  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:53.263060  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:53.296070  447486 cri.go:89] found id: ""
	I1030 19:47:53.296113  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.296124  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:53.296133  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:53.296197  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:53.332628  447486 cri.go:89] found id: ""
	I1030 19:47:53.332663  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.332674  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:53.332687  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:53.332702  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:53.385004  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:53.385046  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:53.400139  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:53.400185  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:53.477792  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:53.477826  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:53.477858  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:53.553145  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:53.553186  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:56.094454  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:56.107827  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:56.107900  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:56.141701  447486 cri.go:89] found id: ""
	I1030 19:47:56.141739  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.141751  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:56.141763  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:56.141831  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:56.179973  447486 cri.go:89] found id: ""
	I1030 19:47:56.180003  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.180016  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:56.180023  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:56.180099  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:56.220456  447486 cri.go:89] found id: ""
	I1030 19:47:56.220486  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.220496  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:56.220503  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:56.220578  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:56.259699  447486 cri.go:89] found id: ""
	I1030 19:47:56.259727  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.259736  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:56.259741  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:56.259791  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:56.302726  447486 cri.go:89] found id: ""
	I1030 19:47:56.302762  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.302775  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:56.302783  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:56.302850  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:56.339791  447486 cri.go:89] found id: ""
	I1030 19:47:56.339819  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.339828  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:56.339834  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:56.339889  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:56.381291  447486 cri.go:89] found id: ""
	I1030 19:47:56.381325  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.381337  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:56.381345  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:56.381401  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:56.417150  447486 cri.go:89] found id: ""
	I1030 19:47:56.417182  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.417194  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:56.417207  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:56.417227  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:56.466963  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:56.467005  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:56.481528  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:56.481557  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:56.554843  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:56.554872  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:56.554887  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:56.635798  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:56.635846  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
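
The timestamps show the loop retrying roughly every three seconds with identical results. Rather than reading each repeated block, one could wait on the condition directly; this is a small illustrative sketch built from the same crictl command the log uses, not part of the test itself:

    # poll until crictl reports any kube-apiserver container (running or exited)
    while [ -z "$(sudo crictl ps -a --quiet --name=kube-apiserver)" ]; do sleep 3; done
    echo "kube-apiserver container found"

In this run the condition never becomes true, which is why the same gathering cycle keeps repeating for the remainder of this excerpt.
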
	I1030 19:47:59.179829  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:59.193083  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:59.193160  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:59.231253  447486 cri.go:89] found id: ""
	I1030 19:47:59.231288  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.231302  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:59.231311  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:59.231382  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:59.265982  447486 cri.go:89] found id: ""
	I1030 19:47:59.266013  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.266022  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:59.266028  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:59.266090  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:59.303724  447486 cri.go:89] found id: ""
	I1030 19:47:59.303761  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.303773  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:59.303781  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:59.303848  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:59.342137  447486 cri.go:89] found id: ""
	I1030 19:47:59.342163  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.342172  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:59.342180  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:59.342246  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:59.382652  447486 cri.go:89] found id: ""
	I1030 19:47:59.382684  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.382693  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:59.382700  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:59.382761  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:59.422428  447486 cri.go:89] found id: ""
	I1030 19:47:59.422454  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.422463  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:59.422469  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:59.422539  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:59.464047  447486 cri.go:89] found id: ""
	I1030 19:47:59.464079  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.464089  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:59.464095  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:59.464146  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:59.500658  447486 cri.go:89] found id: ""
	I1030 19:47:59.500693  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.500705  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:59.500716  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:59.500732  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:59.554634  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:59.554679  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:59.567956  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:59.567986  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:59.646305  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:59.646332  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:59.646349  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:59.730008  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:59.730052  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:02.274141  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:02.287246  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:02.287320  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:02.322166  447486 cri.go:89] found id: ""
	I1030 19:48:02.322320  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.322336  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:02.322346  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:02.322421  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:02.358101  447486 cri.go:89] found id: ""
	I1030 19:48:02.358131  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.358140  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:02.358146  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:02.358209  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:02.394812  447486 cri.go:89] found id: ""
	I1030 19:48:02.394898  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.394915  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:02.394924  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:02.394990  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:02.429128  447486 cri.go:89] found id: ""
	I1030 19:48:02.429165  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.429177  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:02.429186  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:02.429358  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:02.465878  447486 cri.go:89] found id: ""
	I1030 19:48:02.465907  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.465915  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:02.465921  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:02.465973  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:02.502758  447486 cri.go:89] found id: ""
	I1030 19:48:02.502794  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.502805  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:02.502813  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:02.502879  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:02.540111  447486 cri.go:89] found id: ""
	I1030 19:48:02.540142  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.540152  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:02.540158  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:02.540222  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:02.574728  447486 cri.go:89] found id: ""
	I1030 19:48:02.574762  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.574774  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:02.574787  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:02.574804  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:02.613333  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:02.613374  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:02.664970  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:02.665013  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:02.679594  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:02.679626  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:02.744184  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:02.744208  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:02.744222  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:05.326826  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:05.340166  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:05.340232  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:05.376742  447486 cri.go:89] found id: ""
	I1030 19:48:05.376774  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.376789  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:05.376795  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:05.376865  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:05.413981  447486 cri.go:89] found id: ""
	I1030 19:48:05.414026  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.414039  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:05.414047  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:05.414121  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:05.449811  447486 cri.go:89] found id: ""
	I1030 19:48:05.449842  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.449854  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:05.449862  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:05.449925  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:05.502576  447486 cri.go:89] found id: ""
	I1030 19:48:05.502610  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.502622  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:05.502630  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:05.502721  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:05.536747  447486 cri.go:89] found id: ""
	I1030 19:48:05.536778  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.536787  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:05.536793  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:05.536857  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:05.570308  447486 cri.go:89] found id: ""
	I1030 19:48:05.570335  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.570344  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:05.570353  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:05.570420  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:05.605006  447486 cri.go:89] found id: ""
	I1030 19:48:05.605037  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.605048  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:05.605054  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:05.605109  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:05.638651  447486 cri.go:89] found id: ""
	I1030 19:48:05.638681  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.638693  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:05.638705  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:05.638720  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:05.690734  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:05.690769  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:05.704561  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:05.704588  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:05.779426  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:05.779448  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:05.779471  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:05.866320  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:05.866355  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:08.409454  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:08.423687  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:08.423767  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:08.463554  447486 cri.go:89] found id: ""
	I1030 19:48:08.463581  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.463591  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:08.463597  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:08.463654  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:08.500159  447486 cri.go:89] found id: ""
	I1030 19:48:08.500186  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.500195  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:08.500200  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:08.500253  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:08.535670  447486 cri.go:89] found id: ""
	I1030 19:48:08.535701  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.535710  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:08.535717  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:08.535785  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:08.572921  447486 cri.go:89] found id: ""
	I1030 19:48:08.572958  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.572968  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:08.572975  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:08.573052  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:08.610873  447486 cri.go:89] found id: ""
	I1030 19:48:08.610908  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.610918  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:08.610924  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:08.610978  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:08.645430  447486 cri.go:89] found id: ""
	I1030 19:48:08.645458  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.645466  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:08.645475  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:08.645528  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:08.681212  447486 cri.go:89] found id: ""
	I1030 19:48:08.681246  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.681258  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:08.681266  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:08.681332  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:08.716619  447486 cri.go:89] found id: ""
	I1030 19:48:08.716651  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.716661  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:08.716671  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:08.716682  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:08.794090  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:08.794134  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:08.833209  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:08.833251  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:08.884781  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:08.884817  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:08.898556  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:08.898586  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:08.967713  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:11.468230  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:11.482593  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:11.482660  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:11.518191  447486 cri.go:89] found id: ""
	I1030 19:48:11.518225  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.518235  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:11.518242  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:11.518295  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:11.557199  447486 cri.go:89] found id: ""
	I1030 19:48:11.557229  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.557237  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:11.557252  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:11.557323  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:11.595605  447486 cri.go:89] found id: ""
	I1030 19:48:11.595638  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.595650  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:11.595664  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:11.595732  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:11.634253  447486 cri.go:89] found id: ""
	I1030 19:48:11.634281  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.634295  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:11.634301  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:11.634358  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:11.671138  447486 cri.go:89] found id: ""
	I1030 19:48:11.671167  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.671176  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:11.671183  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:11.671238  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:11.707202  447486 cri.go:89] found id: ""
	I1030 19:48:11.707228  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.707237  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:11.707243  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:11.707302  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:11.745514  447486 cri.go:89] found id: ""
	I1030 19:48:11.745549  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.745561  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:11.745570  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:11.745640  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:11.781403  447486 cri.go:89] found id: ""
	I1030 19:48:11.781438  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.781449  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:11.781458  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:11.781471  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:11.832934  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:11.832972  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:11.853498  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:11.853545  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:11.949365  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:11.949389  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:11.949405  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:12.033776  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:12.033823  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:14.579536  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:14.593497  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:14.593579  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:14.627853  447486 cri.go:89] found id: ""
	I1030 19:48:14.627886  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.627895  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:14.627902  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:14.627953  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:14.662356  447486 cri.go:89] found id: ""
	I1030 19:48:14.662386  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.662398  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:14.662406  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:14.662481  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:14.699334  447486 cri.go:89] found id: ""
	I1030 19:48:14.699370  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.699382  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:14.699390  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:14.699457  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:14.733884  447486 cri.go:89] found id: ""
	I1030 19:48:14.733924  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.733937  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:14.733946  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:14.734025  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:14.775208  447486 cri.go:89] found id: ""
	I1030 19:48:14.775240  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.775249  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:14.775256  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:14.775315  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:14.809663  447486 cri.go:89] found id: ""
	I1030 19:48:14.809695  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.809704  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:14.809711  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:14.809778  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:14.844963  447486 cri.go:89] found id: ""
	I1030 19:48:14.844996  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.845006  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:14.845014  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:14.845084  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:14.881236  447486 cri.go:89] found id: ""
	I1030 19:48:14.881273  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.881283  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:14.881293  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:14.881305  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:14.933792  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:14.933830  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:14.948038  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:14.948065  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:15.023497  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:15.023519  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:15.023532  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:15.105682  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:15.105741  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:17.646238  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:17.665366  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:17.665455  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:17.707729  447486 cri.go:89] found id: ""
	I1030 19:48:17.707783  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.707796  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:17.707805  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:17.707883  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:17.759922  447486 cri.go:89] found id: ""
	I1030 19:48:17.759959  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.759972  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:17.759980  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:17.760049  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:17.807635  447486 cri.go:89] found id: ""
	I1030 19:48:17.807671  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.807683  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:17.807695  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:17.807770  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:17.844205  447486 cri.go:89] found id: ""
	I1030 19:48:17.844236  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.844247  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:17.844255  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:17.844326  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:17.879079  447486 cri.go:89] found id: ""
	I1030 19:48:17.879113  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.879125  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:17.879134  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:17.879202  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:17.916548  447486 cri.go:89] found id: ""
	I1030 19:48:17.916584  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.916594  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:17.916601  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:17.916654  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:17.950597  447486 cri.go:89] found id: ""
	I1030 19:48:17.950626  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.950635  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:17.950640  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:17.950695  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:17.985924  447486 cri.go:89] found id: ""
	I1030 19:48:17.985957  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.985968  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:17.985980  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:17.985996  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:18.066211  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:18.066250  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:18.107228  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:18.107279  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:18.157508  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:18.157543  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:18.172208  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:18.172243  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:18.248100  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:20.748681  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:20.763369  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:20.763445  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:20.804288  447486 cri.go:89] found id: ""
	I1030 19:48:20.804323  447486 logs.go:282] 0 containers: []
	W1030 19:48:20.804336  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:20.804343  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:20.804410  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:20.838925  447486 cri.go:89] found id: ""
	I1030 19:48:20.838964  447486 logs.go:282] 0 containers: []
	W1030 19:48:20.838973  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:20.838979  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:20.839030  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:20.873560  447486 cri.go:89] found id: ""
	I1030 19:48:20.873596  447486 logs.go:282] 0 containers: []
	W1030 19:48:20.873608  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:20.873617  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:20.873681  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:20.908670  447486 cri.go:89] found id: ""
	I1030 19:48:20.908705  447486 logs.go:282] 0 containers: []
	W1030 19:48:20.908716  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:20.908723  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:20.908791  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:20.945901  447486 cri.go:89] found id: ""
	I1030 19:48:20.945929  447486 logs.go:282] 0 containers: []
	W1030 19:48:20.945937  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:20.945943  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:20.945991  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:20.980184  447486 cri.go:89] found id: ""
	I1030 19:48:20.980216  447486 logs.go:282] 0 containers: []
	W1030 19:48:20.980227  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:20.980236  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:20.980299  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:21.024243  447486 cri.go:89] found id: ""
	I1030 19:48:21.024272  447486 logs.go:282] 0 containers: []
	W1030 19:48:21.024284  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:21.024293  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:21.024366  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:21.063315  447486 cri.go:89] found id: ""
	I1030 19:48:21.063348  447486 logs.go:282] 0 containers: []
	W1030 19:48:21.063358  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:21.063370  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:21.063387  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:21.130434  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:21.130463  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:21.130480  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:21.209067  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:21.209107  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:21.251005  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:21.251035  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:21.303365  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:21.303402  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:23.817700  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:23.831060  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:23.831133  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:23.864299  447486 cri.go:89] found id: ""
	I1030 19:48:23.864334  447486 logs.go:282] 0 containers: []
	W1030 19:48:23.864346  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:23.864354  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:23.864420  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:23.900815  447486 cri.go:89] found id: ""
	I1030 19:48:23.900844  447486 logs.go:282] 0 containers: []
	W1030 19:48:23.900854  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:23.900869  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:23.900929  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:23.939888  447486 cri.go:89] found id: ""
	I1030 19:48:23.939917  447486 logs.go:282] 0 containers: []
	W1030 19:48:23.939928  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:23.939936  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:23.939999  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:23.975359  447486 cri.go:89] found id: ""
	I1030 19:48:23.975387  447486 logs.go:282] 0 containers: []
	W1030 19:48:23.975395  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:23.975401  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:23.975452  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:24.012779  447486 cri.go:89] found id: ""
	I1030 19:48:24.012819  447486 logs.go:282] 0 containers: []
	W1030 19:48:24.012832  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:24.012840  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:24.012908  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:24.048853  447486 cri.go:89] found id: ""
	I1030 19:48:24.048890  447486 logs.go:282] 0 containers: []
	W1030 19:48:24.048903  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:24.048912  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:24.048979  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:24.084744  447486 cri.go:89] found id: ""
	I1030 19:48:24.084784  447486 logs.go:282] 0 containers: []
	W1030 19:48:24.084797  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:24.084806  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:24.084860  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:24.121719  447486 cri.go:89] found id: ""
	I1030 19:48:24.121757  447486 logs.go:282] 0 containers: []
	W1030 19:48:24.121767  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:24.121777  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:24.121791  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:24.178691  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:24.178733  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:24.192885  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:24.192916  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:24.268771  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:24.268815  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:24.268832  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:24.349663  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:24.349699  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:26.887325  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:26.900480  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:26.900558  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:26.936157  447486 cri.go:89] found id: ""
	I1030 19:48:26.936188  447486 logs.go:282] 0 containers: []
	W1030 19:48:26.936200  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:26.936207  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:26.936278  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:26.975580  447486 cri.go:89] found id: ""
	I1030 19:48:26.975615  447486 logs.go:282] 0 containers: []
	W1030 19:48:26.975626  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:26.975633  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:26.975705  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:27.010549  447486 cri.go:89] found id: ""
	I1030 19:48:27.010579  447486 logs.go:282] 0 containers: []
	W1030 19:48:27.010592  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:27.010600  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:27.010659  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:27.047505  447486 cri.go:89] found id: ""
	I1030 19:48:27.047541  447486 logs.go:282] 0 containers: []
	W1030 19:48:27.047553  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:27.047561  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:27.047628  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:27.083379  447486 cri.go:89] found id: ""
	I1030 19:48:27.083409  447486 logs.go:282] 0 containers: []
	W1030 19:48:27.083420  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:27.083429  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:27.083492  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:27.117912  447486 cri.go:89] found id: ""
	I1030 19:48:27.117954  447486 logs.go:282] 0 containers: []
	W1030 19:48:27.117967  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:27.117976  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:27.118049  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:27.151721  447486 cri.go:89] found id: ""
	I1030 19:48:27.151749  447486 logs.go:282] 0 containers: []
	W1030 19:48:27.151758  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:27.151765  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:27.151817  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:27.188940  447486 cri.go:89] found id: ""
	I1030 19:48:27.188981  447486 logs.go:282] 0 containers: []
	W1030 19:48:27.188989  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:27.188999  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:27.189011  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:27.243926  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:27.243960  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:27.258702  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:27.258731  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:27.326983  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:27.327023  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:27.327041  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:27.410761  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:27.410808  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:29.953219  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:29.967972  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:29.968078  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:30.003975  447486 cri.go:89] found id: ""
	I1030 19:48:30.004004  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.004014  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:30.004023  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:30.004097  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:30.041732  447486 cri.go:89] found id: ""
	I1030 19:48:30.041768  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.041780  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:30.041787  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:30.041863  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:30.078262  447486 cri.go:89] found id: ""
	I1030 19:48:30.078297  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.078308  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:30.078315  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:30.078379  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:30.116100  447486 cri.go:89] found id: ""
	I1030 19:48:30.116137  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.116149  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:30.116157  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:30.116229  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:30.150925  447486 cri.go:89] found id: ""
	I1030 19:48:30.150953  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.150964  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:30.150972  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:30.151041  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:30.192188  447486 cri.go:89] found id: ""
	I1030 19:48:30.192219  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.192230  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:30.192237  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:30.192314  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:30.231144  447486 cri.go:89] found id: ""
	I1030 19:48:30.231180  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.231192  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:30.231200  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:30.231277  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:30.271198  447486 cri.go:89] found id: ""
	I1030 19:48:30.271228  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.271242  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:30.271265  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:30.271277  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:30.322750  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:30.322792  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:30.337745  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:30.337774  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:30.417198  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:30.417224  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:30.417240  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:30.503327  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:30.503364  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:33.047719  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:33.062330  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:33.062395  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:33.101049  447486 cri.go:89] found id: ""
	I1030 19:48:33.101088  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.101101  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:33.101108  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:33.101175  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:33.135236  447486 cri.go:89] found id: ""
	I1030 19:48:33.135268  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.135279  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:33.135286  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:33.135357  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:33.169279  447486 cri.go:89] found id: ""
	I1030 19:48:33.169314  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.169325  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:33.169333  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:33.169401  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:33.203336  447486 cri.go:89] found id: ""
	I1030 19:48:33.203380  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.203392  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:33.203401  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:33.203470  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:33.238223  447486 cri.go:89] found id: ""
	I1030 19:48:33.238258  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.238270  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:33.238279  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:33.238345  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:33.272891  447486 cri.go:89] found id: ""
	I1030 19:48:33.272925  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.272937  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:33.272946  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:33.273014  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:33.312452  447486 cri.go:89] found id: ""
	I1030 19:48:33.312480  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.312489  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:33.312496  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:33.312547  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:33.349041  447486 cri.go:89] found id: ""
	I1030 19:48:33.349076  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.349091  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:33.349104  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:33.349130  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:33.430888  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:33.430940  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:33.469414  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:33.469444  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:33.518989  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:33.519022  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:33.532656  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:33.532690  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:33.605896  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:36.106207  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:36.120564  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:36.120646  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:36.156854  447486 cri.go:89] found id: ""
	I1030 19:48:36.156887  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.156900  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:36.156909  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:36.156988  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:36.195027  447486 cri.go:89] found id: ""
	I1030 19:48:36.195059  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.195072  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:36.195080  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:36.195150  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:36.235639  447486 cri.go:89] found id: ""
	I1030 19:48:36.235672  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.235683  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:36.235692  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:36.235758  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:36.281659  447486 cri.go:89] found id: ""
	I1030 19:48:36.281693  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.281702  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:36.281709  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:36.281762  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:36.315427  447486 cri.go:89] found id: ""
	I1030 19:48:36.315454  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.315463  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:36.315469  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:36.315531  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:36.353084  447486 cri.go:89] found id: ""
	I1030 19:48:36.353110  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.353120  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:36.353126  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:36.353197  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:36.388497  447486 cri.go:89] found id: ""
	I1030 19:48:36.388533  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.388545  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:36.388553  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:36.388616  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:36.423625  447486 cri.go:89] found id: ""
	I1030 19:48:36.423658  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.423667  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:36.423676  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:36.423691  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:36.476722  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:36.476757  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:36.490669  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:36.490700  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:36.558587  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:36.558621  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:36.558639  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:36.635606  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:36.635654  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
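	# For reference, the fallback log bundle gathered on each retry above amounts to the
	# following commands, copied from the Run: lines in this log (the kubectl path is the
	# v1.20.0 binary path logged for this node):
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig  # fails with "connection refused" while the apiserver is down
	sudo journalctl -u crio -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a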
	I1030 19:48:39.174007  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:39.187709  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:39.187786  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:39.226131  447486 cri.go:89] found id: ""
	I1030 19:48:39.226165  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.226177  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:39.226185  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:39.226265  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:39.265963  447486 cri.go:89] found id: ""
	I1030 19:48:39.266003  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.266016  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:39.266024  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:39.266092  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:39.302586  447486 cri.go:89] found id: ""
	I1030 19:48:39.302624  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.302637  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:39.302645  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:39.302710  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:39.347869  447486 cri.go:89] found id: ""
	I1030 19:48:39.347903  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.347916  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:39.347924  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:39.347994  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:39.384252  447486 cri.go:89] found id: ""
	I1030 19:48:39.384280  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.384288  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:39.384294  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:39.384347  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:39.418847  447486 cri.go:89] found id: ""
	I1030 19:48:39.418876  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.418885  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:39.418891  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:39.418950  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:39.458408  447486 cri.go:89] found id: ""
	I1030 19:48:39.458454  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.458467  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:39.458480  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:39.458567  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:39.493889  447486 cri.go:89] found id: ""
	I1030 19:48:39.493923  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.493934  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:39.493946  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:39.493959  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:39.548692  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:39.548746  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:39.562083  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:39.562110  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:39.633822  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:39.633845  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:39.633857  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:39.711765  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:39.711814  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:42.254337  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:42.268137  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:42.268202  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:42.303383  447486 cri.go:89] found id: ""
	I1030 19:48:42.303418  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.303428  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:42.303434  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:42.303501  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:42.349405  447486 cri.go:89] found id: ""
	I1030 19:48:42.349437  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.349447  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:42.349453  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:42.349504  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:42.384317  447486 cri.go:89] found id: ""
	I1030 19:48:42.384353  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.384363  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:42.384369  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:42.384424  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:42.418712  447486 cri.go:89] found id: ""
	I1030 19:48:42.418759  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.418768  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:42.418775  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:42.418833  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:42.454234  447486 cri.go:89] found id: ""
	I1030 19:48:42.454270  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.454280  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:42.454288  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:42.454362  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:42.488813  447486 cri.go:89] found id: ""
	I1030 19:48:42.488845  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.488855  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:42.488863  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:42.488929  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:42.525883  447486 cri.go:89] found id: ""
	I1030 19:48:42.525917  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.525929  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:42.525938  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:42.526006  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:42.561197  447486 cri.go:89] found id: ""
	I1030 19:48:42.561233  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.561246  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:42.561259  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:42.561275  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:42.599818  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:42.599854  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:42.654341  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:42.654382  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:42.668163  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:42.668188  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:42.739630  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:42.739659  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:42.739671  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:45.316154  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:45.330372  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:45.330454  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:45.369093  447486 cri.go:89] found id: ""
	I1030 19:48:45.369125  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.369135  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:45.369141  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:45.369192  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:45.407681  447486 cri.go:89] found id: ""
	I1030 19:48:45.407715  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.407726  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:45.407732  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:45.407787  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:45.444445  447486 cri.go:89] found id: ""
	I1030 19:48:45.444474  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.444482  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:45.444488  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:45.444539  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:45.481538  447486 cri.go:89] found id: ""
	I1030 19:48:45.481570  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.481583  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:45.481591  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:45.481654  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:45.515088  447486 cri.go:89] found id: ""
	I1030 19:48:45.515123  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.515132  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:45.515139  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:45.515195  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:45.550085  447486 cri.go:89] found id: ""
	I1030 19:48:45.550133  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.550145  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:45.550152  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:45.550214  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:45.583950  447486 cri.go:89] found id: ""
	I1030 19:48:45.583985  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.583999  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:45.584008  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:45.584082  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:45.617320  447486 cri.go:89] found id: ""
	I1030 19:48:45.617349  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.617358  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:45.617369  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:45.617389  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:45.668792  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:45.668833  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:45.683144  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:45.683178  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:45.758707  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:45.758732  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:45.758744  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:45.833807  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:45.833837  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:48.374096  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:48.387812  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:48.387903  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:48.426958  447486 cri.go:89] found id: ""
	I1030 19:48:48.426987  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.426996  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:48.427002  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:48.427051  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:48.462216  447486 cri.go:89] found id: ""
	I1030 19:48:48.462249  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.462260  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:48.462268  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:48.462336  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:48.495666  447486 cri.go:89] found id: ""
	I1030 19:48:48.495699  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.495709  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:48.495716  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:48.495798  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:48.530653  447486 cri.go:89] found id: ""
	I1030 19:48:48.530686  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.530698  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:48.530709  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:48.530777  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:48.564788  447486 cri.go:89] found id: ""
	I1030 19:48:48.564826  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.564838  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:48.564846  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:48.564921  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:48.600735  447486 cri.go:89] found id: ""
	I1030 19:48:48.600772  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.600784  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:48.600793  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:48.600863  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:48.637063  447486 cri.go:89] found id: ""
	I1030 19:48:48.637095  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.637107  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:48.637115  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:48.637182  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:48.673279  447486 cri.go:89] found id: ""
	I1030 19:48:48.673314  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.673334  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:48.673347  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:48.673362  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:48.724239  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:48.724280  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:48.738390  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:48.738425  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:48.812130  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:48.812155  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:48.812171  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:48.896253  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:48.896298  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:51.441155  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:51.454675  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:51.454751  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:51.490464  447486 cri.go:89] found id: ""
	I1030 19:48:51.490511  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.490523  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:51.490532  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:51.490600  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:51.525364  447486 cri.go:89] found id: ""
	I1030 19:48:51.525399  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.525411  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:51.525419  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:51.525485  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:51.559028  447486 cri.go:89] found id: ""
	I1030 19:48:51.559062  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.559071  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:51.559078  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:51.559139  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:51.595188  447486 cri.go:89] found id: ""
	I1030 19:48:51.595217  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.595225  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:51.595231  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:51.595300  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:51.628987  447486 cri.go:89] found id: ""
	I1030 19:48:51.629023  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.629039  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:51.629047  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:51.629119  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:51.663257  447486 cri.go:89] found id: ""
	I1030 19:48:51.663286  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.663295  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:51.663303  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:51.663368  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:51.712562  447486 cri.go:89] found id: ""
	I1030 19:48:51.712600  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.712613  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:51.712622  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:51.712684  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:51.761730  447486 cri.go:89] found id: ""
	I1030 19:48:51.761760  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.761769  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:51.761779  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:51.761794  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:51.775595  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:51.775624  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:51.849120  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:51.849144  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:51.849157  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:51.931364  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:51.931403  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:51.971195  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:51.971229  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:54.525136  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:54.539137  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:54.539227  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:54.574281  447486 cri.go:89] found id: ""
	I1030 19:48:54.574316  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.574339  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:54.574348  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:54.574420  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:54.611109  447486 cri.go:89] found id: ""
	I1030 19:48:54.611149  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.611161  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:54.611170  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:54.611230  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:54.648396  447486 cri.go:89] found id: ""
	I1030 19:48:54.648428  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.648439  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:54.648447  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:54.648510  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:54.683834  447486 cri.go:89] found id: ""
	I1030 19:48:54.683871  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.683884  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:54.683892  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:54.683954  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:54.717391  447486 cri.go:89] found id: ""
	I1030 19:48:54.717421  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.717430  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:54.717436  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:54.717495  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:54.753783  447486 cri.go:89] found id: ""
	I1030 19:48:54.753812  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.753821  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:54.753827  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:54.753878  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:54.788231  447486 cri.go:89] found id: ""
	I1030 19:48:54.788270  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.788282  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:54.788291  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:54.788359  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:54.823949  447486 cri.go:89] found id: ""
	I1030 19:48:54.823989  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.824001  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:54.824014  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:54.824052  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:54.838936  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:54.838967  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:54.911785  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:54.911812  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:54.911825  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:54.993268  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:54.993302  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:55.032557  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:55.032588  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:57.588726  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:57.603010  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:57.603085  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:57.636499  447486 cri.go:89] found id: ""
	I1030 19:48:57.636531  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.636542  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:57.636551  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:57.636624  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:57.671698  447486 cri.go:89] found id: ""
	I1030 19:48:57.671728  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.671739  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:57.671748  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:57.671815  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:57.707387  447486 cri.go:89] found id: ""
	I1030 19:48:57.707414  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.707422  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:57.707431  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:57.707482  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:57.745404  447486 cri.go:89] found id: ""
	I1030 19:48:57.745432  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.745440  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:57.745447  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:57.745507  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:57.784874  447486 cri.go:89] found id: ""
	I1030 19:48:57.784903  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.784912  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:57.784919  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:57.784984  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:57.824663  447486 cri.go:89] found id: ""
	I1030 19:48:57.824697  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.824707  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:57.824713  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:57.824773  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:57.862542  447486 cri.go:89] found id: ""
	I1030 19:48:57.862581  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.862593  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:57.862601  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:57.862669  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:57.897901  447486 cri.go:89] found id: ""
	I1030 19:48:57.897935  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.897947  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:57.897959  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:57.897974  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:57.951898  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:57.951936  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:57.966282  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:57.966327  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:58.035515  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:58.035546  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:58.035562  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:58.114825  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:58.114876  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
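	# Every "describe nodes" attempt above fails with "connection refused" on localhost:8443,
	# i.e. nothing is serving the apiserver port yet. A quick way to confirm that directly on
	# the node (a suggested check, not taken from the log above):
	sudo ss -ltnp | grep ':8443' || echo "nothing listening on 8443"
	curl -sk https://localhost:8443/healthz || echo "apiserver not responding"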
	I1030 19:49:00.705537  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:00.719589  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:00.719672  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:00.762299  447486 cri.go:89] found id: ""
	I1030 19:49:00.762330  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.762338  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:00.762356  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:00.762438  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:00.802228  447486 cri.go:89] found id: ""
	I1030 19:49:00.802259  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.802268  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:00.802275  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:00.802345  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:00.836531  447486 cri.go:89] found id: ""
	I1030 19:49:00.836557  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.836565  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:00.836572  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:00.836630  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:00.869332  447486 cri.go:89] found id: ""
	I1030 19:49:00.869360  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.869369  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:00.869375  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:00.869437  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:00.904643  447486 cri.go:89] found id: ""
	I1030 19:49:00.904675  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.904684  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:00.904691  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:00.904768  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:00.939020  447486 cri.go:89] found id: ""
	I1030 19:49:00.939050  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.939061  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:00.939068  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:00.939142  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:00.974586  447486 cri.go:89] found id: ""
	I1030 19:49:00.974625  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.974638  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:00.974646  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:00.974707  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:01.009337  447486 cri.go:89] found id: ""
	I1030 19:49:01.009375  447486 logs.go:282] 0 containers: []
	W1030 19:49:01.009386  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:01.009399  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:01.009416  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:01.067087  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:01.067125  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:01.081681  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:01.081713  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:01.153057  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:01.153082  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:01.153096  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:01.236113  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:01.236153  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:03.774056  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:03.788395  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:03.788482  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:03.823847  447486 cri.go:89] found id: ""
	I1030 19:49:03.823880  447486 logs.go:282] 0 containers: []
	W1030 19:49:03.823892  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:03.823900  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:03.823973  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:03.864776  447486 cri.go:89] found id: ""
	I1030 19:49:03.864807  447486 logs.go:282] 0 containers: []
	W1030 19:49:03.864819  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:03.864827  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:03.864890  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:03.912516  447486 cri.go:89] found id: ""
	I1030 19:49:03.912572  447486 logs.go:282] 0 containers: []
	W1030 19:49:03.912585  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:03.912593  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:03.912660  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:03.962459  447486 cri.go:89] found id: ""
	I1030 19:49:03.962509  447486 logs.go:282] 0 containers: []
	W1030 19:49:03.962521  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:03.962530  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:03.962602  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:04.019107  447486 cri.go:89] found id: ""
	I1030 19:49:04.019143  447486 logs.go:282] 0 containers: []
	W1030 19:49:04.019152  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:04.019159  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:04.019217  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:04.054016  447486 cri.go:89] found id: ""
	I1030 19:49:04.054047  447486 logs.go:282] 0 containers: []
	W1030 19:49:04.054056  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:04.054063  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:04.054140  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:04.089907  447486 cri.go:89] found id: ""
	I1030 19:49:04.089938  447486 logs.go:282] 0 containers: []
	W1030 19:49:04.089948  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:04.089955  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:04.090007  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:04.128081  447486 cri.go:89] found id: ""
	I1030 19:49:04.128110  447486 logs.go:282] 0 containers: []
	W1030 19:49:04.128118  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:04.128128  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:04.128142  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:04.182419  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:04.182462  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:04.196909  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:04.196941  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:04.267267  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:04.267298  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:04.267317  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:04.346826  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:04.346876  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:06.887266  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:06.902462  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:06.902554  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:06.938850  447486 cri.go:89] found id: ""
	I1030 19:49:06.938880  447486 logs.go:282] 0 containers: []
	W1030 19:49:06.938891  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:06.938899  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:06.938961  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:06.983284  447486 cri.go:89] found id: ""
	I1030 19:49:06.983315  447486 logs.go:282] 0 containers: []
	W1030 19:49:06.983330  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:06.983339  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:06.983406  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:07.016332  447486 cri.go:89] found id: ""
	I1030 19:49:07.016359  447486 logs.go:282] 0 containers: []
	W1030 19:49:07.016369  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:07.016375  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:07.016428  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:07.051425  447486 cri.go:89] found id: ""
	I1030 19:49:07.051459  447486 logs.go:282] 0 containers: []
	W1030 19:49:07.051471  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:07.051480  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:07.051550  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:07.083396  447486 cri.go:89] found id: ""
	I1030 19:49:07.083429  447486 logs.go:282] 0 containers: []
	W1030 19:49:07.083437  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:07.083444  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:07.083507  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:07.116616  447486 cri.go:89] found id: ""
	I1030 19:49:07.116646  447486 logs.go:282] 0 containers: []
	W1030 19:49:07.116654  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:07.116661  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:07.116728  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:07.149219  447486 cri.go:89] found id: ""
	I1030 19:49:07.149251  447486 logs.go:282] 0 containers: []
	W1030 19:49:07.149259  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:07.149265  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:07.149318  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:07.188404  447486 cri.go:89] found id: ""
	I1030 19:49:07.188435  447486 logs.go:282] 0 containers: []
	W1030 19:49:07.188444  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:07.188454  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:07.188468  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:07.247600  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:07.247640  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:07.262196  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:07.262231  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:07.332998  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:07.333031  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:07.333048  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:07.415322  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:07.415367  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
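	(The recurring "connection to the server localhost:8443 was refused" in the describe-nodes step simply means the kube-apiserver is not serving on the node yet. A quick way to confirm that from the node and then re-run the exact call from the log is sketched below; the `ss` tool is an assumption on my part and does not appear in the log.)
	    # Sketch, assuming iproute2's `ss` is available on the node:
	    sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
	    # Re-run the exact describe-nodes command from the log:
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig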
	I1030 19:49:09.958278  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:09.972983  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:09.973068  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:10.016768  447486 cri.go:89] found id: ""
	I1030 19:49:10.016801  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.016810  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:10.016818  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:10.016885  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:10.052958  447486 cri.go:89] found id: ""
	I1030 19:49:10.052992  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.053002  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:10.053009  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:10.053063  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:10.089062  447486 cri.go:89] found id: ""
	I1030 19:49:10.089094  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.089105  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:10.089120  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:10.089196  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:10.126084  447486 cri.go:89] found id: ""
	I1030 19:49:10.126114  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.126123  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:10.126130  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:10.126182  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:10.171670  447486 cri.go:89] found id: ""
	I1030 19:49:10.171702  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.171712  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:10.171720  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:10.171785  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:10.210243  447486 cri.go:89] found id: ""
	I1030 19:49:10.210285  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.210293  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:10.210300  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:10.210366  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:10.253012  447486 cri.go:89] found id: ""
	I1030 19:49:10.253056  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.253069  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:10.253078  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:10.253155  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:10.287948  447486 cri.go:89] found id: ""
	I1030 19:49:10.287999  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.288009  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:10.288021  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:10.288036  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:10.341362  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:10.341405  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:10.355769  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:10.355798  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:10.429469  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:10.429500  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:10.429518  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:10.509812  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:10.509851  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:13.053064  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:13.069063  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:13.069136  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:13.108457  447486 cri.go:89] found id: ""
	I1030 19:49:13.108492  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.108505  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:13.108513  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:13.108582  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:13.146481  447486 cri.go:89] found id: ""
	I1030 19:49:13.146523  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.146534  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:13.146542  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:13.146595  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:13.187088  447486 cri.go:89] found id: ""
	I1030 19:49:13.187118  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.187129  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:13.187137  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:13.187200  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:13.226913  447486 cri.go:89] found id: ""
	I1030 19:49:13.226948  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.226960  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:13.226968  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:13.227038  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:13.262632  447486 cri.go:89] found id: ""
	I1030 19:49:13.262661  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.262669  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:13.262676  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:13.262726  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:13.296877  447486 cri.go:89] found id: ""
	I1030 19:49:13.296906  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.296915  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:13.296922  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:13.296983  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:13.334907  447486 cri.go:89] found id: ""
	I1030 19:49:13.334939  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.334949  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:13.334956  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:13.335021  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:13.369386  447486 cri.go:89] found id: ""
	I1030 19:49:13.369430  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.369443  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:13.369456  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:13.369472  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:13.423095  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:13.423130  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:13.437039  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:13.437067  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:13.512619  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:13.512648  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:13.512663  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:13.596982  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:13.597023  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:16.135623  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:16.150407  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:16.150502  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:16.188771  447486 cri.go:89] found id: ""
	I1030 19:49:16.188811  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.188823  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:16.188832  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:16.188907  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:16.221554  447486 cri.go:89] found id: ""
	I1030 19:49:16.221589  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.221598  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:16.221604  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:16.221655  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:16.255567  447486 cri.go:89] found id: ""
	I1030 19:49:16.255595  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.255609  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:16.255616  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:16.255667  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:16.289820  447486 cri.go:89] found id: ""
	I1030 19:49:16.289855  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.289866  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:16.289874  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:16.289935  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:16.324415  447486 cri.go:89] found id: ""
	I1030 19:49:16.324449  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.324464  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:16.324471  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:16.324533  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:16.360789  447486 cri.go:89] found id: ""
	I1030 19:49:16.360825  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.360848  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:16.360856  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:16.360922  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:16.395066  447486 cri.go:89] found id: ""
	I1030 19:49:16.395093  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.395101  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:16.395107  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:16.395158  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:16.429220  447486 cri.go:89] found id: ""
	I1030 19:49:16.429261  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.429273  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:16.429286  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:16.429305  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:16.481209  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:16.481250  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:16.495353  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:16.495383  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:16.563979  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:16.564006  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:16.564022  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:16.645166  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:16.645205  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:19.185478  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:19.199270  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:19.199337  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:19.242426  447486 cri.go:89] found id: ""
	I1030 19:49:19.242455  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.242464  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:19.242474  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:19.242556  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:19.284061  447486 cri.go:89] found id: ""
	I1030 19:49:19.284092  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.284102  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:19.284108  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:19.284178  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:19.317373  447486 cri.go:89] found id: ""
	I1030 19:49:19.317407  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.317420  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:19.317428  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:19.317491  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:19.354222  447486 cri.go:89] found id: ""
	I1030 19:49:19.354250  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.354259  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:19.354267  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:19.354329  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:19.392948  447486 cri.go:89] found id: ""
	I1030 19:49:19.392980  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.392989  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:19.392996  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:19.393053  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:19.438023  447486 cri.go:89] found id: ""
	I1030 19:49:19.438055  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.438066  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:19.438074  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:19.438144  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:19.472179  447486 cri.go:89] found id: ""
	I1030 19:49:19.472208  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.472218  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:19.472226  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:19.472283  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:19.507164  447486 cri.go:89] found id: ""
	I1030 19:49:19.507195  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.507203  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:19.507213  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:19.507226  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:19.520898  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:19.520935  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:19.592204  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:19.592234  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:19.592263  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:19.668994  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:19.669045  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:19.707208  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:19.707240  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:22.263035  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:22.276999  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:22.277089  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:22.310969  447486 cri.go:89] found id: ""
	I1030 19:49:22.311006  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.311017  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:22.311026  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:22.311097  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:22.346282  447486 cri.go:89] found id: ""
	I1030 19:49:22.346311  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.346324  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:22.346332  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:22.346401  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:22.384324  447486 cri.go:89] found id: ""
	I1030 19:49:22.384354  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.384372  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:22.384381  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:22.384441  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:22.419465  447486 cri.go:89] found id: ""
	I1030 19:49:22.419498  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.419509  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:22.419518  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:22.419582  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:22.456161  447486 cri.go:89] found id: ""
	I1030 19:49:22.456196  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.456204  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:22.456211  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:22.456280  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:22.489075  447486 cri.go:89] found id: ""
	I1030 19:49:22.489102  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.489110  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:22.489119  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:22.489181  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:22.521752  447486 cri.go:89] found id: ""
	I1030 19:49:22.521780  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.521789  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:22.521796  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:22.521847  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:22.554946  447486 cri.go:89] found id: ""
	I1030 19:49:22.554985  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.554997  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:22.555010  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:22.555025  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:22.567877  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:22.567909  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:22.640062  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:22.640094  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:22.640110  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:22.714946  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:22.714985  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:22.755560  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:22.755595  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:25.306379  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:25.320883  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:25.320963  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:25.356737  447486 cri.go:89] found id: ""
	I1030 19:49:25.356771  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.356782  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:25.356791  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:25.356856  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:25.393371  447486 cri.go:89] found id: ""
	I1030 19:49:25.393409  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.393420  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:25.393429  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:25.393500  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:25.428379  447486 cri.go:89] found id: ""
	I1030 19:49:25.428411  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.428425  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:25.428433  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:25.428505  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:25.473516  447486 cri.go:89] found id: ""
	I1030 19:49:25.473551  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.473562  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:25.473572  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:25.473649  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:25.512508  447486 cri.go:89] found id: ""
	I1030 19:49:25.512535  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.512544  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:25.512550  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:25.512611  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:25.547646  447486 cri.go:89] found id: ""
	I1030 19:49:25.547691  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.547705  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:25.547713  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:25.547782  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:25.582314  447486 cri.go:89] found id: ""
	I1030 19:49:25.582347  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.582356  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:25.582364  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:25.582415  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:25.617305  447486 cri.go:89] found id: ""
	I1030 19:49:25.617343  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.617354  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:25.617367  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:25.617383  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:25.658245  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:25.658283  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:25.710559  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:25.710598  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:25.724961  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:25.724995  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:25.796252  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:25.796283  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:25.796300  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:28.374633  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:28.389468  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:28.389549  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:28.425747  447486 cri.go:89] found id: ""
	I1030 19:49:28.425780  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.425792  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:28.425800  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:28.425956  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:28.465221  447486 cri.go:89] found id: ""
	I1030 19:49:28.465258  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.465291  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:28.465303  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:28.465371  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:28.504184  447486 cri.go:89] found id: ""
	I1030 19:49:28.504217  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.504230  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:28.504240  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:28.504295  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:28.536198  447486 cri.go:89] found id: ""
	I1030 19:49:28.536234  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.536247  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:28.536255  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:28.536340  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:28.572194  447486 cri.go:89] found id: ""
	I1030 19:49:28.572228  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.572240  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:28.572248  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:28.572312  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:28.608794  447486 cri.go:89] found id: ""
	I1030 19:49:28.608826  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.608838  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:28.608846  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:28.608914  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:28.641664  447486 cri.go:89] found id: ""
	I1030 19:49:28.641698  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.641706  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:28.641714  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:28.641768  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:28.675756  447486 cri.go:89] found id: ""
	I1030 19:49:28.675790  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.675800  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:28.675812  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:28.675829  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:28.690203  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:28.690237  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:28.755647  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:28.755674  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:28.755690  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:28.837116  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:28.837149  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:28.877195  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:28.877232  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:31.428091  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:31.442537  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:31.442619  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:31.479911  447486 cri.go:89] found id: ""
	I1030 19:49:31.479942  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.479953  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:31.479961  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:31.480029  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:31.517015  447486 cri.go:89] found id: ""
	I1030 19:49:31.517042  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.517050  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:31.517056  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:31.517107  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:31.549858  447486 cri.go:89] found id: ""
	I1030 19:49:31.549891  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.549900  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:31.549907  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:31.549971  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:31.583490  447486 cri.go:89] found id: ""
	I1030 19:49:31.583524  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.583536  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:31.583551  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:31.583618  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:31.618270  447486 cri.go:89] found id: ""
	I1030 19:49:31.618308  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.618320  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:31.618328  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:31.618397  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:31.655416  447486 cri.go:89] found id: ""
	I1030 19:49:31.655448  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.655460  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:31.655468  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:31.655530  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:31.689708  447486 cri.go:89] found id: ""
	I1030 19:49:31.689740  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.689751  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:31.689759  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:31.689823  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:31.724179  447486 cri.go:89] found id: ""
	I1030 19:49:31.724208  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.724219  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:31.724233  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:31.724249  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:31.774900  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:31.774939  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:31.788606  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:31.788635  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:31.861360  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:31.861385  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:31.861398  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:31.935856  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:31.935896  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:34.477313  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:34.491530  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:34.491597  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:34.525105  447486 cri.go:89] found id: ""
	I1030 19:49:34.525136  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.525145  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:34.525153  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:34.525215  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:34.560449  447486 cri.go:89] found id: ""
	I1030 19:49:34.560483  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.560495  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:34.560503  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:34.560558  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:34.595278  447486 cri.go:89] found id: ""
	I1030 19:49:34.595325  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.595335  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:34.595342  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:34.595395  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:34.628486  447486 cri.go:89] found id: ""
	I1030 19:49:34.628521  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.628533  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:34.628542  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:34.628614  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:34.663410  447486 cri.go:89] found id: ""
	I1030 19:49:34.663438  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.663448  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:34.663456  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:34.663520  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:34.697053  447486 cri.go:89] found id: ""
	I1030 19:49:34.697086  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.697099  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:34.697107  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:34.697178  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:34.730910  447486 cri.go:89] found id: ""
	I1030 19:49:34.730943  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.730955  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:34.730963  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:34.731034  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:34.765725  447486 cri.go:89] found id: ""
	I1030 19:49:34.765762  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.765774  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:34.765786  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:34.765807  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:34.802750  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:34.802786  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:34.853576  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:34.853614  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:34.868102  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:34.868139  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:34.939985  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:34.940015  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:34.940027  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:37.516479  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:37.529386  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:37.529453  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:37.565889  447486 cri.go:89] found id: ""
	I1030 19:49:37.565923  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.565936  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:37.565945  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:37.566007  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:37.598771  447486 cri.go:89] found id: ""
	I1030 19:49:37.598801  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.598811  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:37.598817  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:37.598869  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:37.632678  447486 cri.go:89] found id: ""
	I1030 19:49:37.632705  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.632714  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:37.632735  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:37.632795  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:37.666642  447486 cri.go:89] found id: ""
	I1030 19:49:37.666673  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.666682  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:37.666688  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:37.666748  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:37.701203  447486 cri.go:89] found id: ""
	I1030 19:49:37.701233  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.701242  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:37.701249  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:37.701324  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:37.735614  447486 cri.go:89] found id: ""
	I1030 19:49:37.735649  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.735661  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:37.735669  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:37.735738  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:37.771381  447486 cri.go:89] found id: ""
	I1030 19:49:37.771418  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.771430  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:37.771439  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:37.771501  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:37.807870  447486 cri.go:89] found id: ""
	I1030 19:49:37.807908  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.807922  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:37.807935  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:37.807952  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:37.860334  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:37.860367  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:37.874340  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:37.874371  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:37.952874  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:37.952903  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:37.952916  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:38.045318  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:38.045356  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:40.591278  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:40.604970  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:40.605050  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:40.639839  447486 cri.go:89] found id: ""
	I1030 19:49:40.639869  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.639880  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:40.639889  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:40.639952  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:40.674046  447486 cri.go:89] found id: ""
	I1030 19:49:40.674077  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.674087  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:40.674093  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:40.674164  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:40.710759  447486 cri.go:89] found id: ""
	I1030 19:49:40.710794  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.710806  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:40.710815  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:40.710880  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:40.752439  447486 cri.go:89] found id: ""
	I1030 19:49:40.752471  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.752484  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:40.752493  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:40.752548  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:40.787985  447486 cri.go:89] found id: ""
	I1030 19:49:40.788021  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.788034  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:40.788042  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:40.788102  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:40.829282  447486 cri.go:89] found id: ""
	I1030 19:49:40.829320  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.829333  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:40.829341  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:40.829409  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:40.863911  447486 cri.go:89] found id: ""
	I1030 19:49:40.863944  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.863953  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:40.863959  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:40.864026  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:40.901239  447486 cri.go:89] found id: ""
	I1030 19:49:40.901275  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.901287  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:40.901300  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:40.901321  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:40.955283  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:40.955323  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:40.968733  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:40.968766  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:41.040213  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:41.040242  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:41.040256  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:41.125992  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:41.126035  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:43.667949  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:43.681633  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:43.681705  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:43.725038  447486 cri.go:89] found id: ""
	I1030 19:49:43.725076  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.725085  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:43.725091  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:43.725149  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:43.761438  447486 cri.go:89] found id: ""
	I1030 19:49:43.761473  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.761486  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:43.761494  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:43.761566  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:43.795299  447486 cri.go:89] found id: ""
	I1030 19:49:43.795335  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.795347  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:43.795355  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:43.795431  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:43.830545  447486 cri.go:89] found id: ""
	I1030 19:49:43.830582  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.830594  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:43.830601  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:43.830670  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:43.867632  447486 cri.go:89] found id: ""
	I1030 19:49:43.867664  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.867676  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:43.867684  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:43.867753  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:43.901315  447486 cri.go:89] found id: ""
	I1030 19:49:43.901346  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.901355  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:43.901361  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:43.901412  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:43.934928  447486 cri.go:89] found id: ""
	I1030 19:49:43.934963  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.934975  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:43.934983  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:43.935048  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:43.975407  447486 cri.go:89] found id: ""
	I1030 19:49:43.975441  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.975451  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:43.975472  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:43.975497  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:44.019281  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:44.019310  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:44.072363  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:44.072402  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:44.085508  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:44.085538  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:44.159634  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:44.159666  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:44.159682  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:46.739662  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:46.753190  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:46.753252  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:46.790167  447486 cri.go:89] found id: ""
	I1030 19:49:46.790202  447486 logs.go:282] 0 containers: []
	W1030 19:49:46.790211  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:46.790217  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:46.790272  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:46.828187  447486 cri.go:89] found id: ""
	I1030 19:49:46.828221  447486 logs.go:282] 0 containers: []
	W1030 19:49:46.828230  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:46.828237  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:46.828305  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:46.865499  447486 cri.go:89] found id: ""
	I1030 19:49:46.865539  447486 logs.go:282] 0 containers: []
	W1030 19:49:46.865551  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:46.865559  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:46.865612  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:46.899591  447486 cri.go:89] found id: ""
	I1030 19:49:46.899616  447486 logs.go:282] 0 containers: []
	W1030 19:49:46.899625  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:46.899632  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:46.899681  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:46.934818  447486 cri.go:89] found id: ""
	I1030 19:49:46.934850  447486 logs.go:282] 0 containers: []
	W1030 19:49:46.934860  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:46.934868  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:46.934933  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:46.971298  447486 cri.go:89] found id: ""
	I1030 19:49:46.971328  447486 logs.go:282] 0 containers: []
	W1030 19:49:46.971340  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:46.971349  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:46.971418  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:47.010783  447486 cri.go:89] found id: ""
	I1030 19:49:47.010814  447486 logs.go:282] 0 containers: []
	W1030 19:49:47.010825  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:47.010832  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:47.010896  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:47.044343  447486 cri.go:89] found id: ""
	I1030 19:49:47.044380  447486 logs.go:282] 0 containers: []
	W1030 19:49:47.044392  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:47.044405  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:47.044421  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:47.094425  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:47.094459  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:47.110339  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:47.110368  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:47.183262  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:47.183290  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:47.183305  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:47.262611  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:47.262651  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:49.808195  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:49.821889  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:49.821963  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:49.857296  447486 cri.go:89] found id: ""
	I1030 19:49:49.857339  447486 logs.go:282] 0 containers: []
	W1030 19:49:49.857351  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:49.857359  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:49.857413  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:49.892614  447486 cri.go:89] found id: ""
	I1030 19:49:49.892648  447486 logs.go:282] 0 containers: []
	W1030 19:49:49.892660  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:49.892668  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:49.892732  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:49.929835  447486 cri.go:89] found id: ""
	I1030 19:49:49.929862  447486 logs.go:282] 0 containers: []
	W1030 19:49:49.929871  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:49.929878  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:49.929940  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:49.965341  447486 cri.go:89] found id: ""
	I1030 19:49:49.965371  447486 logs.go:282] 0 containers: []
	W1030 19:49:49.965379  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:49.965392  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:49.965449  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:50.000134  447486 cri.go:89] found id: ""
	I1030 19:49:50.000165  447486 logs.go:282] 0 containers: []
	W1030 19:49:50.000177  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:50.000188  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:50.000259  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:50.033848  447486 cri.go:89] found id: ""
	I1030 19:49:50.033876  447486 logs.go:282] 0 containers: []
	W1030 19:49:50.033885  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:50.033891  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:50.033943  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:50.073315  447486 cri.go:89] found id: ""
	I1030 19:49:50.073344  447486 logs.go:282] 0 containers: []
	W1030 19:49:50.073354  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:50.073360  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:50.073421  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:50.114232  447486 cri.go:89] found id: ""
	I1030 19:49:50.114266  447486 logs.go:282] 0 containers: []
	W1030 19:49:50.114277  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:50.114290  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:50.114311  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:50.185407  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:50.185434  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:50.185448  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:50.270447  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:50.270494  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:50.308825  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:50.308855  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:50.363376  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:50.363417  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:52.878475  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:52.892013  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:52.892088  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:52.928085  447486 cri.go:89] found id: ""
	I1030 19:49:52.928117  447486 logs.go:282] 0 containers: []
	W1030 19:49:52.928126  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:52.928132  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:52.928185  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:52.963377  447486 cri.go:89] found id: ""
	I1030 19:49:52.963413  447486 logs.go:282] 0 containers: []
	W1030 19:49:52.963426  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:52.963434  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:52.963493  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:53.000799  447486 cri.go:89] found id: ""
	I1030 19:49:53.000825  447486 logs.go:282] 0 containers: []
	W1030 19:49:53.000834  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:53.000840  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:53.000912  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:53.037429  447486 cri.go:89] found id: ""
	I1030 19:49:53.037463  447486 logs.go:282] 0 containers: []
	W1030 19:49:53.037472  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:53.037478  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:53.037534  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:53.072392  447486 cri.go:89] found id: ""
	I1030 19:49:53.072425  447486 logs.go:282] 0 containers: []
	W1030 19:49:53.072433  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:53.072446  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:53.072520  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:53.108925  447486 cri.go:89] found id: ""
	I1030 19:49:53.108957  447486 logs.go:282] 0 containers: []
	W1030 19:49:53.108970  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:53.108978  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:53.109050  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:53.145409  447486 cri.go:89] found id: ""
	I1030 19:49:53.145445  447486 logs.go:282] 0 containers: []
	W1030 19:49:53.145457  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:53.145466  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:53.145536  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:53.180756  447486 cri.go:89] found id: ""
	I1030 19:49:53.180784  447486 logs.go:282] 0 containers: []
	W1030 19:49:53.180793  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:53.180803  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:53.180817  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:53.234960  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:53.235010  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:53.249224  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:53.249255  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:53.313223  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:53.313245  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:53.313264  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:53.399715  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:53.399758  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:55.944332  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:55.961546  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:55.961616  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:56.020603  447486 cri.go:89] found id: ""
	I1030 19:49:56.020634  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.020647  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:56.020654  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:56.020725  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:56.065134  447486 cri.go:89] found id: ""
	I1030 19:49:56.065162  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.065170  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:56.065176  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:56.065239  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:56.101358  447486 cri.go:89] found id: ""
	I1030 19:49:56.101386  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.101396  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:56.101405  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:56.101473  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:56.135762  447486 cri.go:89] found id: ""
	I1030 19:49:56.135795  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.135805  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:56.135811  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:56.135863  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:56.171336  447486 cri.go:89] found id: ""
	I1030 19:49:56.171371  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.171383  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:56.171391  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:56.171461  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:56.205643  447486 cri.go:89] found id: ""
	I1030 19:49:56.205674  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.205685  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:56.205693  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:56.205759  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:56.240853  447486 cri.go:89] found id: ""
	I1030 19:49:56.240885  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.240894  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:56.240901  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:56.240973  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:56.276577  447486 cri.go:89] found id: ""
	I1030 19:49:56.276612  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.276623  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:56.276636  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:56.276651  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:56.328180  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:56.328220  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:56.341895  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:56.341923  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:56.414492  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:56.414523  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:56.414540  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:56.498439  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:56.498498  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:59.039071  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:59.053648  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:59.053722  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:59.097620  447486 cri.go:89] found id: ""
	I1030 19:49:59.097650  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.097661  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:59.097669  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:59.097738  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:59.139136  447486 cri.go:89] found id: ""
	I1030 19:49:59.139176  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.139188  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:59.139199  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:59.139270  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:59.180322  447486 cri.go:89] found id: ""
	I1030 19:49:59.180361  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.180371  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:59.180384  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:59.180453  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:59.217374  447486 cri.go:89] found id: ""
	I1030 19:49:59.217422  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.217434  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:59.217443  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:59.217498  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:59.257857  447486 cri.go:89] found id: ""
	I1030 19:49:59.257884  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.257894  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:59.257901  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:59.257968  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:59.297679  447486 cri.go:89] found id: ""
	I1030 19:49:59.297713  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.297724  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:59.297733  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:59.297795  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:59.341469  447486 cri.go:89] found id: ""
	I1030 19:49:59.341499  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.341509  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:59.341517  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:59.341587  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:59.381677  447486 cri.go:89] found id: ""
	I1030 19:49:59.381704  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.381713  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:59.381723  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:59.381735  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:59.441396  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:59.441428  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:59.457105  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:59.457139  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:59.532023  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:59.532051  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:59.532064  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:59.621685  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:59.621720  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:50:02.170623  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:50:02.184885  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:50:02.184975  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:50:02.223811  447486 cri.go:89] found id: ""
	I1030 19:50:02.223841  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.223849  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:50:02.223856  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:50:02.223908  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:50:02.260454  447486 cri.go:89] found id: ""
	I1030 19:50:02.260481  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.260491  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:50:02.260497  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:50:02.260554  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:50:02.296542  447486 cri.go:89] found id: ""
	I1030 19:50:02.296569  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.296577  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:50:02.296583  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:50:02.296631  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:50:02.332168  447486 cri.go:89] found id: ""
	I1030 19:50:02.332199  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.332211  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:50:02.332219  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:50:02.332287  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:50:02.366539  447486 cri.go:89] found id: ""
	I1030 19:50:02.366575  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.366586  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:50:02.366595  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:50:02.366659  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:50:02.401859  447486 cri.go:89] found id: ""
	I1030 19:50:02.401894  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.401915  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:50:02.401923  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:50:02.401991  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:50:02.446061  447486 cri.go:89] found id: ""
	I1030 19:50:02.446097  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.446108  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:50:02.446116  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:50:02.446181  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:50:02.488233  447486 cri.go:89] found id: ""
	I1030 19:50:02.488257  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.488265  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:50:02.488274  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:50:02.488294  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:50:02.544517  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:50:02.544554  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:50:02.558143  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:02.558179  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:50:02.628679  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:50:02.628706  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:50:02.628723  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:02.710246  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:50:02.710293  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:50:05.254846  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:50:05.269536  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:50:05.269599  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:50:05.303724  447486 cri.go:89] found id: ""
	I1030 19:50:05.303753  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.303761  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:50:05.303767  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:50:05.303819  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:50:05.339268  447486 cri.go:89] found id: ""
	I1030 19:50:05.339301  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.339322  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:50:05.339330  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:50:05.339405  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:50:05.375892  447486 cri.go:89] found id: ""
	I1030 19:50:05.375923  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.375930  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:50:05.375936  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:50:05.375988  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:50:05.413197  447486 cri.go:89] found id: ""
	I1030 19:50:05.413232  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.413243  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:50:05.413252  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:50:05.413329  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:50:05.452095  447486 cri.go:89] found id: ""
	I1030 19:50:05.452122  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.452130  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:50:05.452137  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:50:05.452193  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:50:05.490694  447486 cri.go:89] found id: ""
	I1030 19:50:05.490731  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.490744  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:50:05.490753  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:50:05.490808  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:50:05.523961  447486 cri.go:89] found id: ""
	I1030 19:50:05.523992  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.524001  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:50:05.524008  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:50:05.524060  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:50:05.558631  447486 cri.go:89] found id: ""
	I1030 19:50:05.558664  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.558673  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:50:05.558684  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:50:05.558699  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:50:05.596929  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:50:05.596958  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:50:05.647294  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:50:05.647332  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:50:05.661349  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:05.661377  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:50:05.730268  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:50:05.730299  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:50:05.730323  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:08.312167  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:50:08.327121  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:50:08.327206  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:50:08.364871  447486 cri.go:89] found id: ""
	I1030 19:50:08.364905  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.364916  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:50:08.364924  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:50:08.364982  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:50:08.399179  447486 cri.go:89] found id: ""
	I1030 19:50:08.399215  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.399225  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:50:08.399231  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:50:08.399286  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:50:08.434308  447486 cri.go:89] found id: ""
	I1030 19:50:08.434340  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.434350  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:50:08.434356  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:50:08.434409  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:50:08.477152  447486 cri.go:89] found id: ""
	I1030 19:50:08.477184  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.477193  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:50:08.477204  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:50:08.477274  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:50:08.513678  447486 cri.go:89] found id: ""
	I1030 19:50:08.513706  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.513716  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:50:08.513725  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:50:08.513789  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:50:08.551427  447486 cri.go:89] found id: ""
	I1030 19:50:08.551459  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.551478  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:50:08.551485  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:50:08.551550  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:50:08.584224  447486 cri.go:89] found id: ""
	I1030 19:50:08.584260  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.584272  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:50:08.584282  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:50:08.584351  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:50:08.617603  447486 cri.go:89] found id: ""
	I1030 19:50:08.617638  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.617649  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:50:08.617660  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:08.617674  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:50:08.694201  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:50:08.694229  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:50:08.694247  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:08.775457  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:50:08.775500  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:50:08.816452  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:50:08.816496  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:50:08.868077  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:50:08.868114  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:50:11.383130  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:50:11.397672  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:50:11.397758  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:50:11.431923  447486 cri.go:89] found id: ""
	I1030 19:50:11.431959  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.431971  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:50:11.431980  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:50:11.432050  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:50:11.466959  447486 cri.go:89] found id: ""
	I1030 19:50:11.466996  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.467009  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:50:11.467018  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:50:11.467093  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:50:11.506399  447486 cri.go:89] found id: ""
	I1030 19:50:11.506425  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.506437  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:50:11.506444  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:50:11.506529  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:50:11.538606  447486 cri.go:89] found id: ""
	I1030 19:50:11.538635  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.538643  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:50:11.538649  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:50:11.538700  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:50:11.573265  447486 cri.go:89] found id: ""
	I1030 19:50:11.573296  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.573304  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:50:11.573310  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:50:11.573364  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:50:11.608522  447486 cri.go:89] found id: ""
	I1030 19:50:11.608549  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.608558  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:50:11.608569  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:50:11.608629  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:50:11.639758  447486 cri.go:89] found id: ""
	I1030 19:50:11.639784  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.639792  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:50:11.639797  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:50:11.639846  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:50:11.673381  447486 cri.go:89] found id: ""
	I1030 19:50:11.673414  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.673426  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:50:11.673439  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:50:11.673454  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:50:11.727368  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:50:11.727414  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:50:11.741267  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:11.741301  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:50:11.808126  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:50:11.808158  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:50:11.808174  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:11.888676  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:50:11.888713  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:50:14.431637  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:50:14.445315  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:50:14.445392  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:50:14.482059  447486 cri.go:89] found id: ""
	I1030 19:50:14.482097  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.482110  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:50:14.482118  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:50:14.482186  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:50:14.520802  447486 cri.go:89] found id: ""
	I1030 19:50:14.520834  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.520843  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:50:14.520849  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:50:14.520900  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:50:14.559965  447486 cri.go:89] found id: ""
	I1030 19:50:14.559996  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.560006  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:50:14.560012  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:50:14.560062  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:50:14.601831  447486 cri.go:89] found id: ""
	I1030 19:50:14.601865  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.601875  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:50:14.601881  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:50:14.601932  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:50:14.635307  447486 cri.go:89] found id: ""
	I1030 19:50:14.635339  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.635348  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:50:14.635355  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:50:14.635418  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:50:14.668618  447486 cri.go:89] found id: ""
	I1030 19:50:14.668648  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.668657  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:50:14.668664  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:50:14.668726  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:50:14.702597  447486 cri.go:89] found id: ""
	I1030 19:50:14.702633  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.702644  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:50:14.702653  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:50:14.702715  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:50:14.736860  447486 cri.go:89] found id: ""
	I1030 19:50:14.736899  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.736911  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:50:14.736925  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:50:14.736942  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:14.822015  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:50:14.822060  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:50:14.860153  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:50:14.860195  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:50:14.912230  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:50:14.912269  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:50:14.927032  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:14.927067  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:50:14.994401  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:50:17.494865  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:50:17.509934  447486 kubeadm.go:597] duration metric: took 4m3.074434895s to restartPrimaryControlPlane
	W1030 19:50:17.510016  447486 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1030 19:50:17.510051  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1030 19:50:18.496415  447486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 19:50:18.512328  447486 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 19:50:18.522293  447486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:50:18.532752  447486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:50:18.532772  447486 kubeadm.go:157] found existing configuration files:
	
	I1030 19:50:18.532823  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 19:50:18.542501  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:50:18.542560  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:50:18.552660  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 19:50:18.562585  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:50:18.562649  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:50:18.572321  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 19:50:18.581633  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:50:18.581689  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:50:18.592770  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 19:50:18.602414  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:50:18.602477  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1030 19:50:18.612334  447486 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1030 19:50:18.844753  447486 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1030 19:52:15.582907  447486 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1030 19:52:15.583009  447486 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1030 19:52:15.584345  447486 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1030 19:52:15.584419  447486 kubeadm.go:310] [preflight] Running pre-flight checks
	I1030 19:52:15.584522  447486 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1030 19:52:15.584659  447486 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1030 19:52:15.584763  447486 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1030 19:52:15.584827  447486 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1030 19:52:15.586931  447486 out.go:235]   - Generating certificates and keys ...
	I1030 19:52:15.587016  447486 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1030 19:52:15.587074  447486 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1030 19:52:15.587145  447486 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1030 19:52:15.587198  447486 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1030 19:52:15.587271  447486 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1030 19:52:15.587339  447486 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1030 19:52:15.587402  447486 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1030 19:52:15.587455  447486 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1030 19:52:15.587517  447486 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1030 19:52:15.587577  447486 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1030 19:52:15.587608  447486 kubeadm.go:310] [certs] Using the existing "sa" key
	I1030 19:52:15.587682  447486 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1030 19:52:15.587759  447486 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1030 19:52:15.587846  447486 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1030 19:52:15.587924  447486 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1030 19:52:15.587988  447486 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1030 19:52:15.588076  447486 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1030 19:52:15.588148  447486 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1030 19:52:15.588180  447486 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1030 19:52:15.588267  447486 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1030 19:52:15.589722  447486 out.go:235]   - Booting up control plane ...
	I1030 19:52:15.589834  447486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1030 19:52:15.589932  447486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1030 19:52:15.590014  447486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1030 19:52:15.590128  447486 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1030 19:52:15.590285  447486 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1030 19:52:15.590336  447486 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1030 19:52:15.590388  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:52:15.590560  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:52:15.590642  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:52:15.590842  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:52:15.590946  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:52:15.591155  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:52:15.591253  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:52:15.591513  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:52:15.591609  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:52:15.591841  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:52:15.591855  447486 kubeadm.go:310] 
	I1030 19:52:15.591900  447486 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1030 19:52:15.591956  447486 kubeadm.go:310] 		timed out waiting for the condition
	I1030 19:52:15.591966  447486 kubeadm.go:310] 
	I1030 19:52:15.592008  447486 kubeadm.go:310] 	This error is likely caused by:
	I1030 19:52:15.592051  447486 kubeadm.go:310] 		- The kubelet is not running
	I1030 19:52:15.592192  447486 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1030 19:52:15.592204  447486 kubeadm.go:310] 
	I1030 19:52:15.592318  447486 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1030 19:52:15.592360  447486 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1030 19:52:15.592391  447486 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1030 19:52:15.592397  447486 kubeadm.go:310] 
	I1030 19:52:15.592511  447486 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1030 19:52:15.592592  447486 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1030 19:52:15.592600  447486 kubeadm.go:310] 
	I1030 19:52:15.592733  447486 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1030 19:52:15.592850  447486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1030 19:52:15.592959  447486 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1030 19:52:15.593059  447486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1030 19:52:15.593138  447486 kubeadm.go:310] 
	W1030 19:52:15.593236  447486 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1030 19:52:15.593289  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1030 19:52:16.049810  447486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 19:52:16.065820  447486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:52:16.076166  447486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:52:16.076192  447486 kubeadm.go:157] found existing configuration files:
	
	I1030 19:52:16.076241  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 19:52:16.085309  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:52:16.085380  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:52:16.094868  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 19:52:16.104343  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:52:16.104395  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:52:16.113939  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 19:52:16.122836  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:52:16.122885  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:52:16.132083  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 19:52:16.141441  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:52:16.141487  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1030 19:52:16.150710  447486 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1030 19:52:16.222070  447486 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1030 19:52:16.222183  447486 kubeadm.go:310] [preflight] Running pre-flight checks
	I1030 19:52:16.366061  447486 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1030 19:52:16.366194  447486 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1030 19:52:16.366352  447486 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1030 19:52:16.541086  447486 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1030 19:52:16.543200  447486 out.go:235]   - Generating certificates and keys ...
	I1030 19:52:16.543303  447486 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1030 19:52:16.543398  447486 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1030 19:52:16.543523  447486 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1030 19:52:16.543625  447486 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1030 19:52:16.543749  447486 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1030 19:52:16.543848  447486 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1030 19:52:16.543942  447486 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1030 19:52:16.544020  447486 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1030 19:52:16.544096  447486 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1030 19:52:16.544193  447486 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1030 19:52:16.544252  447486 kubeadm.go:310] [certs] Using the existing "sa" key
	I1030 19:52:16.544343  447486 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1030 19:52:16.637454  447486 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1030 19:52:16.829430  447486 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1030 19:52:16.985259  447486 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1030 19:52:17.072312  447486 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1030 19:52:17.092511  447486 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1030 19:52:17.093595  447486 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1030 19:52:17.093654  447486 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1030 19:52:17.228039  447486 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1030 19:52:17.229647  447486 out.go:235]   - Booting up control plane ...
	I1030 19:52:17.229766  447486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1030 19:52:17.237333  447486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1030 19:52:17.239644  447486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1030 19:52:17.239774  447486 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1030 19:52:17.241037  447486 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1030 19:52:57.243167  447486 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1030 19:52:57.243769  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:52:57.244072  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:53:02.244240  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:53:02.244563  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:53:12.244991  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:53:12.245293  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:53:32.246428  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:53:32.246697  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:54:12.247834  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:54:12.248150  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:54:12.248173  447486 kubeadm.go:310] 
	I1030 19:54:12.248226  447486 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1030 19:54:12.248308  447486 kubeadm.go:310] 		timed out waiting for the condition
	I1030 19:54:12.248336  447486 kubeadm.go:310] 
	I1030 19:54:12.248386  447486 kubeadm.go:310] 	This error is likely caused by:
	I1030 19:54:12.248449  447486 kubeadm.go:310] 		- The kubelet is not running
	I1030 19:54:12.248598  447486 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1030 19:54:12.248609  447486 kubeadm.go:310] 
	I1030 19:54:12.248747  447486 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1030 19:54:12.248811  447486 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1030 19:54:12.248867  447486 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1030 19:54:12.248876  447486 kubeadm.go:310] 
	I1030 19:54:12.249013  447486 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1030 19:54:12.249111  447486 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1030 19:54:12.249129  447486 kubeadm.go:310] 
	I1030 19:54:12.249280  447486 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1030 19:54:12.249447  447486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1030 19:54:12.249564  447486 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1030 19:54:12.249662  447486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1030 19:54:12.249708  447486 kubeadm.go:310] 
	I1030 19:54:12.249878  447486 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1030 19:54:12.250015  447486 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1030 19:54:12.250208  447486 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1030 19:54:12.250221  447486 kubeadm.go:394] duration metric: took 7m57.874179721s to StartCluster
	I1030 19:54:12.250311  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:54:12.250399  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:54:12.292692  447486 cri.go:89] found id: ""
	I1030 19:54:12.292749  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.292760  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:54:12.292770  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:54:12.292840  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:54:12.329792  447486 cri.go:89] found id: ""
	I1030 19:54:12.329825  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.329835  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:54:12.329843  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:54:12.329905  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:54:12.364661  447486 cri.go:89] found id: ""
	I1030 19:54:12.364693  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.364702  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:54:12.364709  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:54:12.364764  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:54:12.400842  447486 cri.go:89] found id: ""
	I1030 19:54:12.400870  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.400878  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:54:12.400885  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:54:12.400943  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:54:12.440135  447486 cri.go:89] found id: ""
	I1030 19:54:12.440164  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.440172  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:54:12.440178  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:54:12.440228  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:54:12.476365  447486 cri.go:89] found id: ""
	I1030 19:54:12.476403  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.476416  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:54:12.476425  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:54:12.476503  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:54:12.519669  447486 cri.go:89] found id: ""
	I1030 19:54:12.519702  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.519715  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:54:12.519724  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:54:12.519791  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:54:12.554180  447486 cri.go:89] found id: ""
	I1030 19:54:12.554218  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.554230  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:54:12.554244  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:54:12.554261  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:54:12.669617  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:54:12.669660  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:54:12.708361  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:54:12.708392  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:54:12.763103  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:54:12.763145  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:54:12.778676  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:54:12.778712  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:54:12.865694  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1030 19:54:12.865732  447486 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1030 19:54:12.865797  447486 out.go:270] * 
	* 
	W1030 19:54:12.865908  447486 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1030 19:54:12.865929  447486 out.go:270] * 
	W1030 19:54:12.867124  447486 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 19:54:12.871111  447486 out.go:201] 
	W1030 19:54:12.872534  447486 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1030 19:54:12.872591  447486 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1030 19:54:12.872616  447486 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1030 19:54:12.874145  447486 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-516975 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
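For reference only: the failure above is the K8S_KUBELET_NOT_RUNNING case, and the minikube output itself suggests checking 'journalctl -xeu kubelet' and passing --extra-config=kubelet.cgroup-driver=systemd to minikube start. An untested sketch of that retry, reusing the profile name and flags from the failing invocation above (the extra-config flag is the only addition), would look like:

	# Inspect why the kubelet never became healthy (profile name taken from the failing run above).
	out/minikube-linux-amd64 -p old-k8s-version-516975 ssh -- sudo journalctl -xeu kubelet

	# Retry the start with the cgroup-driver hint suggested in the log.
	out/minikube-linux-amd64 start -p old-k8s-version-516975 --memory=2200 --alsologtostderr --wait=true \
		--kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false \
		--driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
		--extra-config=kubelet.cgroup-driver=systemd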
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-516975 -n old-k8s-version-516975
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-516975 -n old-k8s-version-516975: exit status 2 (241.749014ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-516975 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-516975 logs -n 25: (1.582389666s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-534248 sudo cat                              | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-534248 sudo                                  | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-534248 sudo                                  | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-534248 sudo                                  | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-534248 sudo find                             | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-534248 sudo crio                             | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-534248                                       | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	| delete  | -p                                                     | disable-driver-mounts-113740 | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | disable-driver-mounts-113740                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-768989 | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:37 UTC |
	|         | default-k8s-diff-port-768989                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-960512             | no-preload-960512            | jenkins | v1.34.0 | 30 Oct 24 19:37 UTC | 30 Oct 24 19:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-960512                                   | no-preload-960512            | jenkins | v1.34.0 | 30 Oct 24 19:37 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-768989  | default-k8s-diff-port-768989 | jenkins | v1.34.0 | 30 Oct 24 19:38 UTC | 30 Oct 24 19:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-768989 | jenkins | v1.34.0 | 30 Oct 24 19:38 UTC |                     |
	|         | default-k8s-diff-port-768989                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-042402            | embed-certs-042402           | jenkins | v1.34.0 | 30 Oct 24 19:38 UTC | 30 Oct 24 19:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-042402                                  | embed-certs-042402           | jenkins | v1.34.0 | 30 Oct 24 19:38 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-516975        | old-k8s-version-516975       | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-960512                  | no-preload-960512            | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-768989       | default-k8s-diff-port-768989 | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-960512                                   | no-preload-960512            | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC | 30 Oct 24 19:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-042402                 | embed-certs-042402           | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-768989 | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC | 30 Oct 24 19:50 UTC |
	|         | default-k8s-diff-port-768989                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-042402                                  | embed-certs-042402           | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC | 30 Oct 24 19:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-516975                              | old-k8s-version-516975       | jenkins | v1.34.0 | 30 Oct 24 19:42 UTC | 30 Oct 24 19:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-516975             | old-k8s-version-516975       | jenkins | v1.34.0 | 30 Oct 24 19:42 UTC | 30 Oct 24 19:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-516975                              | old-k8s-version-516975       | jenkins | v1.34.0 | 30 Oct 24 19:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/30 19:42:11
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1030 19:42:11.799298  447486 out.go:345] Setting OutFile to fd 1 ...
	I1030 19:42:11.799434  447486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 19:42:11.799444  447486 out.go:358] Setting ErrFile to fd 2...
	I1030 19:42:11.799448  447486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 19:42:11.799628  447486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19883-381834/.minikube/bin
	I1030 19:42:11.800193  447486 out.go:352] Setting JSON to false
	I1030 19:42:11.801205  447486 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":12275,"bootTime":1730305057,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1030 19:42:11.801318  447486 start.go:139] virtualization: kvm guest
	I1030 19:42:11.803677  447486 out.go:177] * [old-k8s-version-516975] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1030 19:42:11.805274  447486 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 19:42:11.805300  447486 notify.go:220] Checking for updates...
	I1030 19:42:11.808043  447486 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 19:42:11.809440  447486 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 19:42:11.810604  447486 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 19:42:11.811774  447486 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1030 19:42:11.812958  447486 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 19:42:11.814552  447486 config.go:182] Loaded profile config "old-k8s-version-516975": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1030 19:42:11.814994  447486 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:42:11.815077  447486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:42:11.830315  447486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42249
	I1030 19:42:11.830795  447486 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:42:11.831345  447486 main.go:141] libmachine: Using API Version  1
	I1030 19:42:11.831365  447486 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:42:11.831692  447486 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:42:11.831869  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:42:11.833718  447486 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1030 19:42:11.835019  447486 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 19:42:11.835371  447486 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:42:11.835416  447486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:42:11.850097  447486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33199
	I1030 19:42:11.850532  447486 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:42:11.850964  447486 main.go:141] libmachine: Using API Version  1
	I1030 19:42:11.850978  447486 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:42:11.851321  447486 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:42:11.851541  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:42:11.886920  447486 out.go:177] * Using the kvm2 driver based on existing profile
	I1030 19:42:11.888376  447486 start.go:297] selected driver: kvm2
	I1030 19:42:11.888392  447486 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-516975 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-516975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:42:11.888538  447486 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 19:42:11.889472  447486 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 19:42:11.889560  447486 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19883-381834/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1030 19:42:11.904007  447486 install.go:137] /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1030 19:42:11.904405  447486 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 19:42:11.904443  447486 cni.go:84] Creating CNI manager for ""
	I1030 19:42:11.904494  447486 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:42:11.904549  447486 start.go:340] cluster config:
	{Name:old-k8s-version-516975 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-516975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:42:11.904661  447486 iso.go:125] acquiring lock: {Name:mk4a5115b605a7a5e9069193daa664d721189792 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 19:42:11.907302  447486 out.go:177] * Starting "old-k8s-version-516975" primary control-plane node in "old-k8s-version-516975" cluster
	I1030 19:42:10.622770  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:11.908430  447486 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1030 19:42:11.908474  447486 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1030 19:42:11.908485  447486 cache.go:56] Caching tarball of preloaded images
	I1030 19:42:11.908564  447486 preload.go:172] Found /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1030 19:42:11.908575  447486 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1030 19:42:11.908666  447486 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/config.json ...
	I1030 19:42:11.908832  447486 start.go:360] acquireMachinesLock for old-k8s-version-516975: {Name:mk35e25f53fa8cfadb39ca0ecdccfc2b3fbe845b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 19:42:16.702732  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:19.774825  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:25.854777  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:28.926846  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:35.006934  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:38.078752  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:44.158848  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:47.230843  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:53.310763  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:56.382772  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:02.462818  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:05.534754  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:11.614801  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:14.686762  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:20.766767  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:23.838853  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:29.918782  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:32.990752  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:39.070771  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:42.142716  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:48.222814  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:51.294775  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:57.374780  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:00.446825  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:06.526810  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:09.598813  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:15.678770  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:18.750751  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:24.830814  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:27.902810  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:33.982759  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:37.054791  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:43.134706  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:46.206802  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:52.286830  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:55.358809  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:45:01.438753  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:45:04.510854  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:45:07.515699  446887 start.go:364] duration metric: took 4m29.000646378s to acquireMachinesLock for "default-k8s-diff-port-768989"
	I1030 19:45:07.515764  446887 start.go:96] Skipping create...Using existing machine configuration
	I1030 19:45:07.515773  446887 fix.go:54] fixHost starting: 
	I1030 19:45:07.516191  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:07.516238  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:07.532374  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35177
	I1030 19:45:07.532907  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:07.533433  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:07.533459  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:07.533790  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:07.534016  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:07.534220  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetState
	I1030 19:45:07.535802  446887 fix.go:112] recreateIfNeeded on default-k8s-diff-port-768989: state=Stopped err=<nil>
	I1030 19:45:07.535842  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	W1030 19:45:07.536016  446887 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 19:45:07.537809  446887 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-768989" ...
	I1030 19:45:07.539184  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Start
	I1030 19:45:07.539361  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Ensuring networks are active...
	I1030 19:45:07.540025  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Ensuring network default is active
	I1030 19:45:07.540408  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Ensuring network mk-default-k8s-diff-port-768989 is active
	I1030 19:45:07.540867  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Getting domain xml...
	I1030 19:45:07.541489  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Creating domain...
	I1030 19:45:07.512810  446736 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 19:45:07.512848  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetMachineName
	I1030 19:45:07.513191  446736 buildroot.go:166] provisioning hostname "no-preload-960512"
	I1030 19:45:07.513223  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetMachineName
	I1030 19:45:07.513458  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:45:07.515538  446736 machine.go:96] duration metric: took 4m37.420773403s to provisionDockerMachine
	I1030 19:45:07.515594  446736 fix.go:56] duration metric: took 4m37.443968478s for fixHost
	I1030 19:45:07.515600  446736 start.go:83] releasing machines lock for "no-preload-960512", held for 4m37.443992524s
	W1030 19:45:07.515625  446736 start.go:714] error starting host: provision: host is not running
	W1030 19:45:07.515753  446736 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1030 19:45:07.515763  446736 start.go:729] Will try again in 5 seconds ...
	I1030 19:45:08.756310  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting to get IP...
	I1030 19:45:08.757242  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:08.757624  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:08.757747  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:08.757629  448092 retry.go:31] will retry after 202.103853ms: waiting for machine to come up
	I1030 19:45:08.961147  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:08.961660  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:08.961685  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:08.961606  448092 retry.go:31] will retry after 243.456761ms: waiting for machine to come up
	I1030 19:45:09.207134  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:09.207539  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:09.207582  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:09.207493  448092 retry.go:31] will retry after 375.017051ms: waiting for machine to come up
	I1030 19:45:09.584058  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:09.584428  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:09.584462  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:09.584373  448092 retry.go:31] will retry after 552.476692ms: waiting for machine to come up
	I1030 19:45:10.137989  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:10.138421  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:10.138449  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:10.138358  448092 retry.go:31] will retry after 560.865483ms: waiting for machine to come up
	I1030 19:45:10.700603  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:10.700968  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:10.700996  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:10.700920  448092 retry.go:31] will retry after 680.400693ms: waiting for machine to come up
	I1030 19:45:11.382861  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:11.383336  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:11.383362  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:11.383274  448092 retry.go:31] will retry after 787.136113ms: waiting for machine to come up
	I1030 19:45:12.171550  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:12.171910  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:12.171938  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:12.171853  448092 retry.go:31] will retry after 1.176474969s: waiting for machine to come up
	I1030 19:45:13.349617  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:13.350080  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:13.350114  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:13.350042  448092 retry.go:31] will retry after 1.211573437s: waiting for machine to come up
	I1030 19:45:12.517265  446736 start.go:360] acquireMachinesLock for no-preload-960512: {Name:mk35e25f53fa8cfadb39ca0ecdccfc2b3fbe845b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
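
The acquireMachinesLock entry above records the lock parameters minikube logs before touching machine state (Delay:500ms, Timeout:13m0s); a later line shows another profile waiting several minutes on the same lock. A minimal sketch of that delay/timeout polling pattern, assuming a plain lock file rather than minikube's actual mutex package:

    // Sketch only, not minikube's mutex package: poll for a lock file every
    // delay, give up after timeout, matching the Delay:500ms / Timeout:13m0s
    // fields in the acquireMachinesLock log line above.
    package machinelock

    import (
        "fmt"
        "os"
        "time"
    )

    func Acquire(path string, delay, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            // O_EXCL makes the create fail while another process holds the lock file.
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                return f.Close()
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
            }
            time.Sleep(delay)
        }
    }
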
	I1030 19:45:14.563397  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:14.563782  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:14.563805  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:14.563749  448092 retry.go:31] will retry after 1.625938777s: waiting for machine to come up
	I1030 19:45:16.191798  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:16.192226  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:16.192255  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:16.192188  448092 retry.go:31] will retry after 2.442949682s: waiting for machine to come up
	I1030 19:45:18.636342  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:18.636768  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:18.636812  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:18.636748  448092 retry.go:31] will retry after 2.48415211s: waiting for machine to come up
	I1030 19:45:21.124407  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:21.124892  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:21.124919  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:21.124843  448092 retry.go:31] will retry after 3.392637796s: waiting for machine to come up
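
The retry.go lines above poll libvirt for the domain's DHCP lease with a growing, jittered interval (202ms, 243ms, 375ms, ... 3.39s). A rough sketch of that wait-for-IP loop, where lookupIP is a hypothetical stand-in for the libvirt lease query and the exact growth factor is an assumption:

    // Sketch of the wait-for-IP loop behind the retry.go lines above. lookupIP is
    // a hypothetical stand-in for querying libvirt for the domain's DHCP lease;
    // intervals grow and are jittered so retries spread out, as in the log.
    package machine

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func lookupIP(domain string) (string, error) {
        return "", errors.New("unable to find current IP address")
    }

    func waitForIP(domain string, deadline time.Duration) (string, error) {
        backoff := 200 * time.Millisecond
        end := time.Now().Add(deadline)
        for time.Now().Before(end) {
            if ip, err := lookupIP(domain); err == nil {
                return ip, nil
            }
            // Sleep for the current interval plus up to 50% jitter, then grow it.
            time.Sleep(backoff + time.Duration(rand.Int63n(int64(backoff/2))))
            backoff = backoff * 3 / 2
        }
        return "", fmt.Errorf("timed out waiting for %s to come up", domain)
    }
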
	I1030 19:45:25.815539  446965 start.go:364] duration metric: took 4m42.694254153s to acquireMachinesLock for "embed-certs-042402"
	I1030 19:45:25.815623  446965 start.go:96] Skipping create...Using existing machine configuration
	I1030 19:45:25.815635  446965 fix.go:54] fixHost starting: 
	I1030 19:45:25.816068  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:25.816232  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:25.833218  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37241
	I1030 19:45:25.833610  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:25.834159  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:45:25.834191  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:25.834567  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:25.834777  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:45:25.834920  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetState
	I1030 19:45:25.836507  446965 fix.go:112] recreateIfNeeded on embed-certs-042402: state=Stopped err=<nil>
	I1030 19:45:25.836532  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	W1030 19:45:25.836711  446965 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 19:45:25.839078  446965 out.go:177] * Restarting existing kvm2 VM for "embed-certs-042402" ...
	I1030 19:45:24.519725  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.520072  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Found IP for machine: 192.168.39.92
	I1030 19:45:24.520091  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Reserving static IP address...
	I1030 19:45:24.520113  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has current primary IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.520507  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-768989", mac: "52:54:00:98:b1:55", ip: "192.168.39.92"} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:24.520521  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Reserved static IP address: 192.168.39.92
	I1030 19:45:24.520535  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | skip adding static IP to network mk-default-k8s-diff-port-768989 - found existing host DHCP lease matching {name: "default-k8s-diff-port-768989", mac: "52:54:00:98:b1:55", ip: "192.168.39.92"}
	I1030 19:45:24.520545  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for SSH to be available...
	I1030 19:45:24.520560  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | Getting to WaitForSSH function...
	I1030 19:45:24.522776  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.523095  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:24.523127  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.523209  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | Using SSH client type: external
	I1030 19:45:24.523229  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa (-rw-------)
	I1030 19:45:24.523262  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.92 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 19:45:24.523283  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | About to run SSH command:
	I1030 19:45:24.523298  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | exit 0
	I1030 19:45:24.646297  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | SSH cmd err, output: <nil>: 
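
The WaitForSSH step above shells out to the system ssh client with host-key checking disabled and runs "exit 0" until the guest answers. A simplified sketch of that reachability probe (flag set trimmed from the log; this is not minikube's sshutil code):

    // Sketch of the WaitForSSH probe: run the system ssh client with the key and
    // flags shown in the log and treat exit status 0 as "the guest is reachable".
    package sshprobe

    import "os/exec"

    func Reachable(ip, keyPath string) bool {
        cmd := exec.Command("ssh",
            "-F", "/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@"+ip,
            "exit 0")
        return cmd.Run() == nil
    }
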
	I1030 19:45:24.646826  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetConfigRaw
	I1030 19:45:24.647589  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetIP
	I1030 19:45:24.650093  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.650532  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:24.650564  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.650790  446887 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/config.json ...
	I1030 19:45:24.650984  446887 machine.go:93] provisionDockerMachine start ...
	I1030 19:45:24.651005  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:24.651232  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:24.653396  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.653751  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:24.653781  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.653889  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:24.654084  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:24.654263  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:24.654449  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:24.654677  446887 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:24.654922  446887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1030 19:45:24.654935  446887 main.go:141] libmachine: About to run SSH command:
	hostname
	I1030 19:45:24.762586  446887 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1030 19:45:24.762621  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetMachineName
	I1030 19:45:24.762898  446887 buildroot.go:166] provisioning hostname "default-k8s-diff-port-768989"
	I1030 19:45:24.762936  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetMachineName
	I1030 19:45:24.763250  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:24.765937  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.766265  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:24.766289  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.766398  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:24.766599  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:24.766762  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:24.766920  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:24.767087  446887 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:24.767257  446887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1030 19:45:24.767269  446887 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-768989 && echo "default-k8s-diff-port-768989" | sudo tee /etc/hostname
	I1030 19:45:24.888742  446887 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-768989
	
	I1030 19:45:24.888771  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:24.891326  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.891638  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:24.891691  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.891804  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:24.892018  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:24.892154  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:24.892281  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:24.892498  446887 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:24.892692  446887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1030 19:45:24.892716  446887 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-768989' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-768989/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-768989' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 19:45:25.012173  446887 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 19:45:25.012214  446887 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 19:45:25.012240  446887 buildroot.go:174] setting up certificates
	I1030 19:45:25.012250  446887 provision.go:84] configureAuth start
	I1030 19:45:25.012280  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetMachineName
	I1030 19:45:25.012598  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetIP
	I1030 19:45:25.015106  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.015430  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.015458  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.015629  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:25.017810  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.018099  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.018136  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.018230  446887 provision.go:143] copyHostCerts
	I1030 19:45:25.018322  446887 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem, removing ...
	I1030 19:45:25.018334  446887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 19:45:25.018401  446887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 19:45:25.018553  446887 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem, removing ...
	I1030 19:45:25.018566  446887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 19:45:25.018634  446887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 19:45:25.018716  446887 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem, removing ...
	I1030 19:45:25.018724  446887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 19:45:25.018748  446887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 19:45:25.018798  446887 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-768989 san=[127.0.0.1 192.168.39.92 default-k8s-diff-port-768989 localhost minikube]
	I1030 19:45:25.188186  446887 provision.go:177] copyRemoteCerts
	I1030 19:45:25.188246  446887 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 19:45:25.188285  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:25.190995  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.191320  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.191344  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.191525  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:25.191718  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.191875  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:25.191991  446887 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa Username:docker}
	I1030 19:45:25.277273  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1030 19:45:25.300302  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1030 19:45:25.322919  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 19:45:25.347214  446887 provision.go:87] duration metric: took 334.947897ms to configureAuth
	I1030 19:45:25.347246  446887 buildroot.go:189] setting minikube options for container-runtime
	I1030 19:45:25.347432  446887 config.go:182] Loaded profile config "default-k8s-diff-port-768989": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:45:25.347510  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:25.349988  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.350294  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.350324  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.350500  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:25.350704  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.350836  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.351015  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:25.351210  446887 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:25.351421  446887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1030 19:45:25.351436  446887 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 19:45:25.576481  446887 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 19:45:25.576509  446887 machine.go:96] duration metric: took 925.509257ms to provisionDockerMachine
	I1030 19:45:25.576525  446887 start.go:293] postStartSetup for "default-k8s-diff-port-768989" (driver="kvm2")
	I1030 19:45:25.576562  446887 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 19:45:25.576589  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:25.576923  446887 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 19:45:25.576951  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:25.579498  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.579825  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.579841  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.579980  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:25.580151  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.580320  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:25.580453  446887 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa Username:docker}
	I1030 19:45:25.665032  446887 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 19:45:25.669402  446887 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 19:45:25.669430  446887 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 19:45:25.669500  446887 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 19:45:25.669573  446887 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 19:45:25.669665  446887 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 19:45:25.679070  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:45:25.703131  446887 start.go:296] duration metric: took 126.586543ms for postStartSetup
	I1030 19:45:25.703194  446887 fix.go:56] duration metric: took 18.187420989s for fixHost
	I1030 19:45:25.703217  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:25.705911  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.706365  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.706396  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.706609  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:25.706800  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.706944  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.707052  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:25.707188  446887 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:25.707428  446887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1030 19:45:25.707443  446887 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 19:45:25.815370  446887 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730317525.786848764
	
	I1030 19:45:25.815406  446887 fix.go:216] guest clock: 1730317525.786848764
	I1030 19:45:25.815414  446887 fix.go:229] Guest: 2024-10-30 19:45:25.786848764 +0000 UTC Remote: 2024-10-30 19:45:25.703198163 +0000 UTC m=+287.327380555 (delta=83.650601ms)
	I1030 19:45:25.815439  446887 fix.go:200] guest clock delta is within tolerance: 83.650601ms
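
The fix.go lines above read the guest clock over SSH ("date +%s.%N"), compare it with the host clock, and only resynchronize when the drift exceeds a tolerance; here the 83.65ms delta is accepted. A small sketch of that comparison, with the one-second tolerance being an assumption rather than a value taken from this log:

    // Sketch of the guest-clock check: parse the guest's "date +%s.%N" output and
    // compare it with the host clock. The one-second tolerance is an assumption.
    package clock

    import (
        "strconv"
        "strings"
        "time"
    )

    const Tolerance = time.Second

    // Delta returns guest minus host time, where guestOutput is the raw
    // "date +%s.%N" string captured over SSH.
    func Delta(guestOutput string, host time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return guest.Sub(host), nil
    }

    // WithinTolerance reports whether the drift is small enough to skip a resync.
    func WithinTolerance(d time.Duration) bool {
        if d < 0 {
            d = -d
        }
        return d <= Tolerance
    }
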
	I1030 19:45:25.815445  446887 start.go:83] releasing machines lock for "default-k8s-diff-port-768989", held for 18.299702226s
	I1030 19:45:25.815467  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:25.815737  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetIP
	I1030 19:45:25.818508  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.818851  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.818889  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.818987  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:25.819477  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:25.819671  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:25.819808  446887 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 19:45:25.819862  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:25.819900  446887 ssh_runner.go:195] Run: cat /version.json
	I1030 19:45:25.819930  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:25.822372  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.822725  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.822754  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.822774  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.822887  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:25.823109  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.823145  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.823168  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.823330  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:25.823429  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:25.823506  446887 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa Username:docker}
	I1030 19:45:25.823605  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.823758  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:25.823880  446887 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa Username:docker}
	I1030 19:45:25.903488  446887 ssh_runner.go:195] Run: systemctl --version
	I1030 19:45:25.931046  446887 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 19:45:26.077178  446887 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 19:45:26.084282  446887 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 19:45:26.084358  446887 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 19:45:26.100869  446887 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 19:45:26.100893  446887 start.go:495] detecting cgroup driver to use...
	I1030 19:45:26.100984  446887 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 19:45:26.117006  446887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 19:45:26.130102  446887 docker.go:217] disabling cri-docker service (if available) ...
	I1030 19:45:26.130184  446887 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 19:45:26.148540  446887 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 19:45:26.163003  446887 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 19:45:26.286433  446887 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 19:45:26.444862  446887 docker.go:233] disabling docker service ...
	I1030 19:45:26.444931  446887 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 19:45:26.460606  446887 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 19:45:26.477159  446887 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 19:45:26.600212  446887 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 19:45:26.725587  446887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 19:45:26.741934  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 19:45:26.761815  446887 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1030 19:45:26.761872  446887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:26.772368  446887 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 19:45:26.772422  446887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:26.784279  446887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:26.795403  446887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:26.806323  446887 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 19:45:26.821929  446887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:26.836574  446887 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:26.857305  446887 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:26.868135  446887 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 19:45:26.878058  446887 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 19:45:26.878138  446887 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 19:45:26.891979  446887 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
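
The status-255 sysctl above is expected on a freshly booted guest: /proc/sys/net/bridge/* only appears once the br_netfilter module is loaded, so the runner falls back to modprobe and then enables IPv4 forwarding for pod traffic. A sketch of that fallback (helper name assumed, not minikube's crio.go):

    // Sketch: if the bridge netfilter sysctl cannot be read, load br_netfilter
    // first, then turn on IPv4 forwarding, mirroring the commands in the log.
    package netprep

    import (
        "fmt"
        "os/exec"
    )

    func EnsureBridgeNetfilter() error {
        if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
            // /proc/sys/net/bridge/* only exists once the br_netfilter module is loaded.
            if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
                return fmt.Errorf("loading br_netfilter: %w", err)
            }
        }
        return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
    }
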
	I1030 19:45:26.902181  446887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:45:27.021858  446887 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1030 19:45:27.118890  446887 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 19:45:27.118985  446887 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 19:45:27.125407  446887 start.go:563] Will wait 60s for crictl version
	I1030 19:45:27.125472  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:45:27.129507  446887 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 19:45:27.176630  446887 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1030 19:45:27.176739  446887 ssh_runner.go:195] Run: crio --version
	I1030 19:45:27.205818  446887 ssh_runner.go:195] Run: crio --version
	I1030 19:45:27.236431  446887 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1030 19:45:25.840689  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Start
	I1030 19:45:25.840860  446965 main.go:141] libmachine: (embed-certs-042402) Ensuring networks are active...
	I1030 19:45:25.841604  446965 main.go:141] libmachine: (embed-certs-042402) Ensuring network default is active
	I1030 19:45:25.841928  446965 main.go:141] libmachine: (embed-certs-042402) Ensuring network mk-embed-certs-042402 is active
	I1030 19:45:25.842443  446965 main.go:141] libmachine: (embed-certs-042402) Getting domain xml...
	I1030 19:45:25.843267  446965 main.go:141] libmachine: (embed-certs-042402) Creating domain...
	I1030 19:45:27.094878  446965 main.go:141] libmachine: (embed-certs-042402) Waiting to get IP...
	I1030 19:45:27.095705  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:27.096101  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:27.096166  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:27.096079  448226 retry.go:31] will retry after 190.217394ms: waiting for machine to come up
	I1030 19:45:27.287473  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:27.287940  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:27.287966  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:27.287899  448226 retry.go:31] will retry after 365.943545ms: waiting for machine to come up
	I1030 19:45:27.655952  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:27.656374  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:27.656425  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:27.656343  448226 retry.go:31] will retry after 345.369581ms: waiting for machine to come up
	I1030 19:45:28.003856  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:28.004367  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:28.004398  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:28.004319  448226 retry.go:31] will retry after 609.6218ms: waiting for machine to come up
	I1030 19:45:27.237629  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetIP
	I1030 19:45:27.240387  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:27.240733  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:27.240779  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:27.240995  446887 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1030 19:45:27.245263  446887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:45:27.261305  446887 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-768989 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-768989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1030 19:45:27.261440  446887 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 19:45:27.261489  446887 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:45:27.301593  446887 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1030 19:45:27.301650  446887 ssh_runner.go:195] Run: which lz4
	I1030 19:45:27.305829  446887 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1030 19:45:27.310384  446887 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1030 19:45:27.310413  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1030 19:45:28.615219  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:28.615769  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:28.615795  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:28.615716  448226 retry.go:31] will retry after 672.090411ms: waiting for machine to come up
	I1030 19:45:29.289646  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:29.290179  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:29.290216  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:29.290105  448226 retry.go:31] will retry after 865.239242ms: waiting for machine to come up
	I1030 19:45:30.157223  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:30.157650  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:30.157679  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:30.157616  448226 retry.go:31] will retry after 833.557181ms: waiting for machine to come up
	I1030 19:45:30.993139  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:30.993663  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:30.993720  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:30.993625  448226 retry.go:31] will retry after 989.333841ms: waiting for machine to come up
	I1030 19:45:31.983978  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:31.984498  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:31.984546  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:31.984443  448226 retry.go:31] will retry after 1.534311856s: waiting for machine to come up
	I1030 19:45:28.730765  446887 crio.go:462] duration metric: took 1.424975563s to copy over tarball
	I1030 19:45:28.730868  446887 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1030 19:45:30.907494  446887 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.1765829s)
	I1030 19:45:30.907536  446887 crio.go:469] duration metric: took 2.176738354s to extract the tarball
	I1030 19:45:30.907546  446887 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1030 19:45:30.944242  446887 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:45:30.986812  446887 crio.go:514] all images are preloaded for cri-o runtime.
	I1030 19:45:30.986839  446887 cache_images.go:84] Images are preloaded, skipping loading
	I1030 19:45:30.986872  446887 kubeadm.go:934] updating node { 192.168.39.92 8444 v1.31.2 crio true true} ...
	I1030 19:45:30.987042  446887 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-768989 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.92
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-768989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1030 19:45:30.987145  446887 ssh_runner.go:195] Run: crio config
	I1030 19:45:31.037466  446887 cni.go:84] Creating CNI manager for ""
	I1030 19:45:31.037496  446887 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:45:31.037511  446887 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1030 19:45:31.037544  446887 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.92 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-768989 NodeName:default-k8s-diff-port-768989 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.92"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.92 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1030 19:45:31.037735  446887 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.92
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-768989"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.92"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.92"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1030 19:45:31.037815  446887 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1030 19:45:31.047808  446887 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 19:45:31.047885  446887 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1030 19:45:31.057074  446887 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1030 19:45:31.073022  446887 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 19:45:31.088919  446887 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
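The 2305-byte kubeadm.yaml.new staged here is the multi-document config printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A hedged sketch of sanity-checking it on the node, assuming the bundled kubeadm supports the `config validate` subcommand (present in recent releases):

    sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new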
	I1030 19:45:31.105357  446887 ssh_runner.go:195] Run: grep 192.168.39.92	control-plane.minikube.internal$ /etc/hosts
	I1030 19:45:31.109207  446887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.92	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:45:31.121329  446887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:45:31.234078  446887 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:45:31.251028  446887 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989 for IP: 192.168.39.92
	I1030 19:45:31.251057  446887 certs.go:194] generating shared ca certs ...
	I1030 19:45:31.251080  446887 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:45:31.251287  446887 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 19:45:31.251342  446887 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 19:45:31.251354  446887 certs.go:256] generating profile certs ...
	I1030 19:45:31.251480  446887 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/client.key
	I1030 19:45:31.251567  446887 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/apiserver.key.eeeafde8
	I1030 19:45:31.251620  446887 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/proxy-client.key
	I1030 19:45:31.251788  446887 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem (1338 bytes)
	W1030 19:45:31.251834  446887 certs.go:480] ignoring /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144_empty.pem, impossibly tiny 0 bytes
	I1030 19:45:31.251848  446887 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 19:45:31.251888  446887 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 19:45:31.251931  446887 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 19:45:31.251963  446887 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 19:45:31.252024  446887 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:45:31.253127  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 19:45:31.293822  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 19:45:31.334804  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 19:45:31.366955  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 19:45:31.396042  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1030 19:45:31.428748  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I1030 19:45:31.452866  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 19:45:31.476407  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1030 19:45:31.500375  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem --> /usr/share/ca-certificates/389144.pem (1338 bytes)
	I1030 19:45:31.523909  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /usr/share/ca-certificates/3891442.pem (1708 bytes)
	I1030 19:45:31.547532  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 19:45:31.571163  446887 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1030 19:45:31.587969  446887 ssh_runner.go:195] Run: openssl version
	I1030 19:45:31.593866  446887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 19:45:31.604538  446887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:45:31.609348  446887 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:45:31.609419  446887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:45:31.615446  446887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 19:45:31.626640  446887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/389144.pem && ln -fs /usr/share/ca-certificates/389144.pem /etc/ssl/certs/389144.pem"
	I1030 19:45:31.640948  446887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/389144.pem
	I1030 19:45:31.646702  446887 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 19:45:31.646751  446887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/389144.pem
	I1030 19:45:31.654365  446887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/389144.pem /etc/ssl/certs/51391683.0"
	I1030 19:45:31.668538  446887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3891442.pem && ln -fs /usr/share/ca-certificates/3891442.pem /etc/ssl/certs/3891442.pem"
	I1030 19:45:31.679201  446887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3891442.pem
	I1030 19:45:31.683631  446887 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 19:45:31.683693  446887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3891442.pem
	I1030 19:45:31.689362  446887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3891442.pem /etc/ssl/certs/3ec20f2e.0"
	I1030 19:45:31.699804  446887 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 19:45:31.704445  446887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1030 19:45:31.710558  446887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1030 19:45:31.718563  446887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1030 19:45:31.724745  446887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1030 19:45:31.731125  446887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1030 19:45:31.736828  446887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
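The openssl runs above use `-checkend 86400`, which exits non-zero if the certificate is missing or expires within the next 86400 seconds (24 hours); a zero exit lets the restart path reuse the existing certs. An equivalent manual check (illustrative only):

    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
      echo "cert valid for at least another 24h"
    else
      echo "cert missing or expiring within 24h - would be regenerated"
    fi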
	I1030 19:45:31.742434  446887 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-768989 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-768989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:45:31.742604  446887 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1030 19:45:31.742654  446887 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:45:31.779319  446887 cri.go:89] found id: ""
	I1030 19:45:31.779416  446887 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1030 19:45:31.789556  446887 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1030 19:45:31.789576  446887 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1030 19:45:31.789622  446887 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1030 19:45:31.799817  446887 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1030 19:45:31.800824  446887 kubeconfig.go:125] found "default-k8s-diff-port-768989" server: "https://192.168.39.92:8444"
	I1030 19:45:31.803207  446887 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1030 19:45:31.812876  446887 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.92
	I1030 19:45:31.812909  446887 kubeadm.go:1160] stopping kube-system containers ...
	I1030 19:45:31.812924  446887 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1030 19:45:31.812984  446887 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:45:31.858070  446887 cri.go:89] found id: ""
	I1030 19:45:31.858174  446887 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1030 19:45:31.874923  446887 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:45:31.885243  446887 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:45:31.885275  446887 kubeadm.go:157] found existing configuration files:
	
	I1030 19:45:31.885321  446887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1030 19:45:31.894394  446887 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:45:31.894453  446887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:45:31.903760  446887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1030 19:45:31.912344  446887 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:45:31.912410  446887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:45:31.921458  446887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1030 19:45:31.930426  446887 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:45:31.930499  446887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:45:31.940008  446887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1030 19:45:31.949578  446887 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:45:31.949645  446887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
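None of the four kubeconfigs under /etc/kubernetes exist yet, so each grep exits with status 2 and the rm -f calls are no-ops; the kubeadm init phases that follow regenerate admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf from scratch. A compact way to run the same staleness check by hand (a sketch, not the command minikube runs):

    sudo grep -l "https://control-plane.minikube.internal:8444" \
      /etc/kubernetes/{admin,kubelet,controller-manager,scheduler}.conf 2>/dev/null \
      || echo "no kubeconfig points at the expected endpoint - files will be recreated"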
	I1030 19:45:31.959022  446887 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 19:45:31.968457  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:32.069017  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:32.985574  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:33.191887  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:33.273266  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:33.400584  446887 api_server.go:52] waiting for apiserver process to appear ...
	I1030 19:45:33.400686  446887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:33.520596  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:33.521020  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:33.521041  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:33.520992  448226 retry.go:31] will retry after 1.787777673s: waiting for machine to come up
	I1030 19:45:35.310399  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:35.310878  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:35.310906  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:35.310833  448226 retry.go:31] will retry after 2.264310439s: waiting for machine to come up
	I1030 19:45:37.577787  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:37.578276  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:37.578310  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:37.578214  448226 retry.go:31] will retry after 2.384410161s: waiting for machine to come up
	I1030 19:45:33.901397  446887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:34.400978  446887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:34.901476  446887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:35.401772  446887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:35.420824  446887 api_server.go:72] duration metric: took 2.020238714s to wait for apiserver process to appear ...
	I1030 19:45:35.420862  446887 api_server.go:88] waiting for apiserver healthz status ...
	I1030 19:45:35.420889  446887 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8444/healthz ...
	I1030 19:45:37.795897  446887 api_server.go:279] https://192.168.39.92:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1030 19:45:37.795931  446887 api_server.go:103] status: https://192.168.39.92:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1030 19:45:37.795948  446887 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8444/healthz ...
	I1030 19:45:37.848032  446887 api_server.go:279] https://192.168.39.92:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1030 19:45:37.848069  446887 api_server.go:103] status: https://192.168.39.92:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1030 19:45:37.921286  446887 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8444/healthz ...
	I1030 19:45:37.930778  446887 api_server.go:279] https://192.168.39.92:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:45:37.930822  446887 api_server.go:103] status: https://192.168.39.92:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:45:38.421866  446887 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8444/healthz ...
	I1030 19:45:38.429247  446887 api_server.go:279] https://192.168.39.92:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:45:38.429291  446887 api_server.go:103] status: https://192.168.39.92:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:45:38.921655  446887 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8444/healthz ...
	I1030 19:45:38.928650  446887 api_server.go:279] https://192.168.39.92:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:45:38.928680  446887 api_server.go:103] status: https://192.168.39.92:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:45:39.421195  446887 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8444/healthz ...
	I1030 19:45:39.425565  446887 api_server.go:279] https://192.168.39.92:8444/healthz returned 200:
	ok
	I1030 19:45:39.433509  446887 api_server.go:141] control plane version: v1.31.2
	I1030 19:45:39.433543  446887 api_server.go:131] duration metric: took 4.01267362s to wait for apiserver health ...
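The healthz probe progresses as expected for a control-plane restart: the first responses are 403 because the unauthenticated probe reaches the API server before the rbac/bootstrap-roles post-start hook has installed the roles that allow anonymous access to /healthz, then 500 while the remaining post-start hooks finish, and finally 200 roughly four seconds in. The same endpoints can be probed by hand (illustrative; -k is needed because the API server certificate is signed by the cluster-local minikubeCA):

    curl -k https://192.168.39.92:8444/healthz
    curl -k "https://192.168.39.92:8444/readyz?verbose"   # per-check breakdown like the log output above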
	I1030 19:45:39.433555  446887 cni.go:84] Creating CNI manager for ""
	I1030 19:45:39.433564  446887 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:45:39.435645  446887 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1030 19:45:39.437042  446887 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1030 19:45:39.456091  446887 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1030 19:45:39.477617  446887 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 19:45:39.485998  446887 system_pods.go:59] 8 kube-system pods found
	I1030 19:45:39.486041  446887 system_pods.go:61] "coredns-7c65d6cfc9-9w8m8" [d285a845-e17e-4b87-837b-167dbd29f090] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1030 19:45:39.486051  446887 system_pods.go:61] "etcd-default-k8s-diff-port-768989" [400eedbb-ea1d-47a7-b342-ad29c8851552] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1030 19:45:39.486061  446887 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-768989" [40389878-99e8-4ae1-899b-a61fc3b7100b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1030 19:45:39.486071  446887 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-768989" [4d4ca5b4-d36d-4f69-9712-b7544c4c9a43] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1030 19:45:39.486082  446887 system_pods.go:61] "kube-proxy-tsr5q" [60ad5830-1638-4209-a2b3-ef78b1df3d34] Running
	I1030 19:45:39.486087  446887 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-768989" [af4c921c-9ea2-4bf3-84c2-8748c3c210bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1030 19:45:39.486092  446887 system_pods.go:61] "metrics-server-6867b74b74-t85rd" [8e162c99-2a94-4340-abe9-f1b312980444] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:45:39.486095  446887 system_pods.go:61] "storage-provisioner" [76805df9-1fbf-468d-a909-3b7c4f889e11] Running
	I1030 19:45:39.486101  446887 system_pods.go:74] duration metric: took 8.467537ms to wait for pod list to return data ...
	I1030 19:45:39.486110  446887 node_conditions.go:102] verifying NodePressure condition ...
	I1030 19:45:39.490771  446887 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 19:45:39.490793  446887 node_conditions.go:123] node cpu capacity is 2
	I1030 19:45:39.490805  446887 node_conditions.go:105] duration metric: took 4.690594ms to run NodePressure ...
	I1030 19:45:39.490821  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:39.752369  446887 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1030 19:45:39.757080  446887 kubeadm.go:739] kubelet initialised
	I1030 19:45:39.757105  446887 kubeadm.go:740] duration metric: took 4.707251ms waiting for restarted kubelet to initialise ...
	I1030 19:45:39.757114  446887 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:45:39.762374  446887 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-9w8m8" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:39.766904  446887 pod_ready.go:98] node "default-k8s-diff-port-768989" hosting pod "coredns-7c65d6cfc9-9w8m8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.766934  446887 pod_ready.go:82] duration metric: took 4.529466ms for pod "coredns-7c65d6cfc9-9w8m8" in "kube-system" namespace to be "Ready" ...
	E1030 19:45:39.766948  446887 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-768989" hosting pod "coredns-7c65d6cfc9-9w8m8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.766958  446887 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:39.771681  446887 pod_ready.go:98] node "default-k8s-diff-port-768989" hosting pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.771705  446887 pod_ready.go:82] duration metric: took 4.73772ms for pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	E1030 19:45:39.771715  446887 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-768989" hosting pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.771722  446887 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:39.776170  446887 pod_ready.go:98] node "default-k8s-diff-port-768989" hosting pod "kube-apiserver-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.776199  446887 pod_ready.go:82] duration metric: took 4.470353ms for pod "kube-apiserver-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	E1030 19:45:39.776211  446887 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-768989" hosting pod "kube-apiserver-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.776220  446887 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:39.881949  446887 pod_ready.go:98] node "default-k8s-diff-port-768989" hosting pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.881988  446887 pod_ready.go:82] duration metric: took 105.756203ms for pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	E1030 19:45:39.882027  446887 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-768989" hosting pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.882042  446887 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-tsr5q" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:40.281665  446887 pod_ready.go:98] node "default-k8s-diff-port-768989" hosting pod "kube-proxy-tsr5q" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:40.281703  446887 pod_ready.go:82] duration metric: took 399.651747ms for pod "kube-proxy-tsr5q" in "kube-system" namespace to be "Ready" ...
	E1030 19:45:40.281716  446887 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-768989" hosting pod "kube-proxy-tsr5q" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:40.281725  446887 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:40.680827  446887 pod_ready.go:98] node "default-k8s-diff-port-768989" hosting pod "kube-scheduler-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:40.680861  446887 pod_ready.go:82] duration metric: took 399.128654ms for pod "kube-scheduler-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	E1030 19:45:40.680873  446887 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-768989" hosting pod "kube-scheduler-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:40.680883  446887 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:41.086176  446887 pod_ready.go:98] node "default-k8s-diff-port-768989" hosting pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:41.086203  446887 pod_ready.go:82] duration metric: took 405.311117ms for pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace to be "Ready" ...
	E1030 19:45:41.086216  446887 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-768989" hosting pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:41.086225  446887 pod_ready.go:39] duration metric: took 1.32910228s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:45:41.086246  446887 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1030 19:45:41.100836  446887 ops.go:34] apiserver oom_adj: -16
	I1030 19:45:41.100871  446887 kubeadm.go:597] duration metric: took 9.31128777s to restartPrimaryControlPlane
	I1030 19:45:41.100887  446887 kubeadm.go:394] duration metric: took 9.358460424s to StartCluster
	I1030 19:45:41.100915  446887 settings.go:142] acquiring lock: {Name:mk3778f95b8c1775512fd8244c173f81fde2f8a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:45:41.101046  446887 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 19:45:41.103578  446887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/kubeconfig: {Name:mkad9ac9beb9742fc1a420e6d4412db8b65962e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:45:41.103910  446887 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.92 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 19:45:41.103995  446887 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1030 19:45:41.104111  446887 config.go:182] Loaded profile config "default-k8s-diff-port-768989": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:45:41.104131  446887 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-768989"
	I1030 19:45:41.104151  446887 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-768989"
	W1030 19:45:41.104159  446887 addons.go:243] addon storage-provisioner should already be in state true
	I1030 19:45:41.104175  446887 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-768989"
	I1030 19:45:41.104198  446887 host.go:66] Checking if "default-k8s-diff-port-768989" exists ...
	I1030 19:45:41.104207  446887 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-768989"
	W1030 19:45:41.104218  446887 addons.go:243] addon metrics-server should already be in state true
	I1030 19:45:41.104153  446887 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-768989"
	I1030 19:45:41.104255  446887 host.go:66] Checking if "default-k8s-diff-port-768989" exists ...
	I1030 19:45:41.104258  446887 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-768989"
	I1030 19:45:41.104672  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:41.104683  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:41.104694  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:41.104718  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:41.104728  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:41.104730  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:41.105606  446887 out.go:177] * Verifying Kubernetes components...
	I1030 19:45:41.107136  446887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:45:41.121415  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45903
	I1030 19:45:41.122053  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:41.122694  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:41.122721  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:41.123073  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:41.123682  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:41.123733  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:41.125497  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36933
	I1030 19:45:41.125546  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43659
	I1030 19:45:41.125878  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:41.125962  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:41.126425  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:41.126445  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:41.126465  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:41.126507  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:41.126840  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:41.126897  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:41.127362  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:41.127392  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:41.127590  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetState
	I1030 19:45:41.131397  446887 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-768989"
	W1030 19:45:41.131424  446887 addons.go:243] addon default-storageclass should already be in state true
	I1030 19:45:41.131457  446887 host.go:66] Checking if "default-k8s-diff-port-768989" exists ...
	I1030 19:45:41.131834  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:41.131877  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:41.143183  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34953
	I1030 19:45:41.143221  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35029
	I1030 19:45:41.143628  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:41.143765  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:41.144231  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:41.144249  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:41.144369  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:41.144392  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:41.144657  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:41.144766  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:41.144879  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetState
	I1030 19:45:41.144926  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetState
	I1030 19:45:41.146739  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:41.146913  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:41.148740  446887 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1030 19:45:41.148794  446887 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:45:41.149853  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33425
	I1030 19:45:41.150250  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:41.150397  446887 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1030 19:45:41.150435  446887 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1030 19:45:41.150462  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:41.150525  446887 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 19:45:41.150545  446887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1030 19:45:41.150562  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:41.150763  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:41.150781  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:41.151168  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:41.152135  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:41.152184  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:41.154133  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:41.154425  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:41.154625  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:41.154654  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:41.154811  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:41.154996  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:41.155033  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:41.155059  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:41.155145  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:41.155214  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:41.155310  446887 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa Username:docker}
	I1030 19:45:41.155345  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:41.155464  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:41.155548  446887 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa Username:docker}
	I1030 19:45:41.168971  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43017
	I1030 19:45:41.169445  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:41.169946  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:41.169969  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:41.170335  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:41.170508  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetState
	I1030 19:45:41.172162  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:41.172378  446887 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1030 19:45:41.172394  446887 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1030 19:45:41.172410  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:41.175214  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:41.175617  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:41.175643  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:41.175795  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:41.175978  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:41.176133  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:41.176301  446887 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa Username:docker}
	I1030 19:45:41.324093  446887 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:45:41.381986  446887 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-768989" to be "Ready" ...
	I1030 19:45:41.439497  446887 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1030 19:45:41.439522  446887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1030 19:45:41.448751  446887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1030 19:45:41.486707  446887 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1030 19:45:41.486736  446887 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1030 19:45:41.514478  446887 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1030 19:45:41.514513  446887 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1030 19:45:41.546821  446887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1030 19:45:41.590509  446887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
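[Editor's note] The metrics-server manifests and the storage-provisioner manifest above are each applied with a single `kubectl apply` invocation against the node-local kubeconfig. A minimal Go sketch of that pattern follows; `applyManifests` is an illustrative helper name, kubectl is assumed to be on PATH, and the sketch runs locally rather than over SSH as minikube's ssh_runner does.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyManifests mirrors the pattern in the log: one `kubectl apply` call
// with several -f flags, run against an explicit kubeconfig.
// Illustrative only; minikube executes this on the guest via ssh_runner.
func applyManifests(kubeconfig string, manifests []string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	if err := applyManifests("/var/lib/minikube/kubeconfig", manifests); err != nil {
		fmt.Println(err)
	}
}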
	I1030 19:45:41.879189  446887 main.go:141] libmachine: Making call to close driver server
	I1030 19:45:41.879224  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Close
	I1030 19:45:41.879548  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | Closing plugin on server side
	I1030 19:45:41.879597  446887 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:45:41.879608  446887 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:45:41.879622  446887 main.go:141] libmachine: Making call to close driver server
	I1030 19:45:41.879632  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Close
	I1030 19:45:41.879868  446887 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:45:41.879886  446887 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:45:41.889008  446887 main.go:141] libmachine: Making call to close driver server
	I1030 19:45:41.889024  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Close
	I1030 19:45:41.889273  446887 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:45:41.889290  446887 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:45:42.499223  446887 main.go:141] libmachine: Making call to close driver server
	I1030 19:45:42.499250  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Close
	I1030 19:45:42.499597  446887 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:45:42.499621  446887 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:45:42.499632  446887 main.go:141] libmachine: Making call to close driver server
	I1030 19:45:42.499689  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Close
	I1030 19:45:42.499969  446887 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:45:42.499984  446887 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:45:42.499996  446887 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-768989"
	I1030 19:45:42.598713  446887 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.008157275s)
	I1030 19:45:42.598770  446887 main.go:141] libmachine: Making call to close driver server
	I1030 19:45:42.598782  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Close
	I1030 19:45:42.599088  446887 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:45:42.599109  446887 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:45:42.599117  446887 main.go:141] libmachine: Making call to close driver server
	I1030 19:45:42.599143  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | Closing plugin on server side
	I1030 19:45:42.599201  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Close
	I1030 19:45:42.599447  446887 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:45:42.599461  446887 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:45:42.601840  446887 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I1030 19:45:39.963885  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:39.964308  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:39.964346  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:39.964250  448226 retry.go:31] will retry after 4.32150593s: waiting for machine to come up
	I1030 19:45:42.603197  446887 addons.go:510] duration metric: took 1.499214294s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I1030 19:45:43.386074  446887 node_ready.go:53] node "default-k8s-diff-port-768989" has status "Ready":"False"
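[Editor's note] The node_ready lines above poll the node object until its Ready condition reports True, with a 6m budget. Below is a client-go sketch of the same wait; the kubeconfig path and node name are taken from the log, while the 2-second poll interval and the helper name `waitNodeReady` are assumptions for the example.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node's Ready condition until it is True or the
// timeout elapses, roughly what the node_ready.go lines in the log report.
func waitNodeReady(clientset *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := clientset.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second) // assumed poll interval
	}
	return fmt.Errorf("node %q not Ready within %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(clientset, "default-k8s-diff-port-768989", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}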
	I1030 19:45:45.631177  447486 start.go:364] duration metric: took 3m33.722307877s to acquireMachinesLock for "old-k8s-version-516975"
	I1030 19:45:45.631272  447486 start.go:96] Skipping create...Using existing machine configuration
	I1030 19:45:45.631284  447486 fix.go:54] fixHost starting: 
	I1030 19:45:45.631708  447486 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:45.631767  447486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:45.648654  447486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40707
	I1030 19:45:45.649098  447486 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:45.649552  447486 main.go:141] libmachine: Using API Version  1
	I1030 19:45:45.649574  447486 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:45.649848  447486 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:45.650005  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:45:45.650153  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetState
	I1030 19:45:45.651624  447486 fix.go:112] recreateIfNeeded on old-k8s-version-516975: state=Stopped err=<nil>
	I1030 19:45:45.651661  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	W1030 19:45:45.651805  447486 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 19:45:45.654065  447486 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-516975" ...
	I1030 19:45:45.655382  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .Start
	I1030 19:45:45.655554  447486 main.go:141] libmachine: (old-k8s-version-516975) Ensuring networks are active...
	I1030 19:45:45.656134  447486 main.go:141] libmachine: (old-k8s-version-516975) Ensuring network default is active
	I1030 19:45:45.656518  447486 main.go:141] libmachine: (old-k8s-version-516975) Ensuring network mk-old-k8s-version-516975 is active
	I1030 19:45:45.656885  447486 main.go:141] libmachine: (old-k8s-version-516975) Getting domain xml...
	I1030 19:45:45.657501  447486 main.go:141] libmachine: (old-k8s-version-516975) Creating domain...
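[Editor's note] Restarting the stopped machine means ensuring the libvirt networks are active and then starting the existing domain. minikube performs this through the kvm2 driver's libvirt bindings; the sketch below only approximates the same sequence with the virsh CLI, and ignoring the net-start errors (already-active networks) is a simplification of this sketch.

package main

import (
	"fmt"
	"os/exec"
)

// run executes one external command and surfaces its combined output on error.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %v\n%s", name, args, err, out)
	}
	return nil
}

func main() {
	domain := "old-k8s-version-516975"
	// net-start fails if the network is already active; ignored here on purpose.
	_ = run("virsh", "net-start", "default")
	_ = run("virsh", "net-start", "mk-"+domain)
	if err := run("virsh", "start", domain); err != nil {
		fmt.Println(err)
	}
}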
	I1030 19:45:44.289530  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.289944  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has current primary IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.289965  446965 main.go:141] libmachine: (embed-certs-042402) Found IP for machine: 192.168.61.235
	I1030 19:45:44.289978  446965 main.go:141] libmachine: (embed-certs-042402) Reserving static IP address...
	I1030 19:45:44.290419  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "embed-certs-042402", mac: "52:54:00:61:aa:58", ip: "192.168.61.235"} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.290450  446965 main.go:141] libmachine: (embed-certs-042402) Reserved static IP address: 192.168.61.235
	I1030 19:45:44.290469  446965 main.go:141] libmachine: (embed-certs-042402) DBG | skip adding static IP to network mk-embed-certs-042402 - found existing host DHCP lease matching {name: "embed-certs-042402", mac: "52:54:00:61:aa:58", ip: "192.168.61.235"}
	I1030 19:45:44.290502  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Getting to WaitForSSH function...
	I1030 19:45:44.290519  446965 main.go:141] libmachine: (embed-certs-042402) Waiting for SSH to be available...
	I1030 19:45:44.292418  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.292684  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.292727  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.292750  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Using SSH client type: external
	I1030 19:45:44.292785  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa (-rw-------)
	I1030 19:45:44.292839  446965 main.go:141] libmachine: (embed-certs-042402) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.235 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 19:45:44.292856  446965 main.go:141] libmachine: (embed-certs-042402) DBG | About to run SSH command:
	I1030 19:45:44.292873  446965 main.go:141] libmachine: (embed-certs-042402) DBG | exit 0
	I1030 19:45:44.414810  446965 main.go:141] libmachine: (embed-certs-042402) DBG | SSH cmd err, output: <nil>: 
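[Editor's note] The WaitForSSH step keeps attempting an SSH connection and a trivial `exit 0` until the guest answers. Here is a sketch using golang.org/x/crypto/ssh instead of the external ssh binary the log shows; the helper name `waitForSSH`, the 3-second retry interval and the 2-minute budget are assumptions, while the address, user and key path come from the log.

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// waitForSSH dials the guest repeatedly and runs `exit 0` until it succeeds
// or the timeout elapses.
func waitForSSH(addr, user, keyPath string, timeout time.Duration) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no in the log
		Timeout:         10 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		client, derr := ssh.Dial("tcp", addr, cfg)
		if derr == nil {
			sess, serr := client.NewSession()
			if serr == nil {
				rerr := sess.Run("exit 0")
				sess.Close()
				client.Close()
				if rerr == nil {
					return nil
				}
			} else {
				client.Close()
			}
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("ssh to %s not available within %s", addr, timeout)
}

func main() {
	err := waitForSSH("192.168.61.235:22", "docker",
		"/home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa",
		2*time.Minute)
	fmt.Println(err)
}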
	I1030 19:45:44.415211  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetConfigRaw
	I1030 19:45:44.416039  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetIP
	I1030 19:45:44.418830  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.419269  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.419303  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.419529  446965 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/config.json ...
	I1030 19:45:44.419832  446965 machine.go:93] provisionDockerMachine start ...
	I1030 19:45:44.419859  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:45:44.420102  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:44.422359  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.422704  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.422729  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.422878  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:44.423072  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:44.423217  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:44.423355  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:44.423493  446965 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:44.423677  446965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.235 22 <nil> <nil>}
	I1030 19:45:44.423685  446965 main.go:141] libmachine: About to run SSH command:
	hostname
	I1030 19:45:44.527214  446965 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1030 19:45:44.527248  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetMachineName
	I1030 19:45:44.527526  446965 buildroot.go:166] provisioning hostname "embed-certs-042402"
	I1030 19:45:44.527562  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetMachineName
	I1030 19:45:44.527793  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:44.530474  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.530830  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.530856  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.531041  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:44.531243  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:44.531432  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:44.531563  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:44.531736  446965 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:44.531958  446965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.235 22 <nil> <nil>}
	I1030 19:45:44.531979  446965 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-042402 && echo "embed-certs-042402" | sudo tee /etc/hostname
	I1030 19:45:44.656963  446965 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-042402
	
	I1030 19:45:44.656996  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:44.659958  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.660361  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.660397  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.660643  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:44.660842  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:44.661025  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:44.661122  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:44.661295  446965 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:44.661469  446965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.235 22 <nil> <nil>}
	I1030 19:45:44.661484  446965 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-042402' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-042402/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-042402' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 19:45:44.771688  446965 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 19:45:44.771728  446965 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 19:45:44.771755  446965 buildroot.go:174] setting up certificates
	I1030 19:45:44.771766  446965 provision.go:84] configureAuth start
	I1030 19:45:44.771780  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetMachineName
	I1030 19:45:44.772120  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetIP
	I1030 19:45:44.774838  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.775271  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.775298  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.775424  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:44.777432  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.777765  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.777793  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.777910  446965 provision.go:143] copyHostCerts
	I1030 19:45:44.777990  446965 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem, removing ...
	I1030 19:45:44.778006  446965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 19:45:44.778057  446965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 19:45:44.778147  446965 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem, removing ...
	I1030 19:45:44.778155  446965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 19:45:44.778174  446965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 19:45:44.778229  446965 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem, removing ...
	I1030 19:45:44.778237  446965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 19:45:44.778253  446965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 19:45:44.778360  446965 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.embed-certs-042402 san=[127.0.0.1 192.168.61.235 embed-certs-042402 localhost minikube]
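[Editor's note] Generating the server cert amounts to building an x509 template whose SANs cover the node IP, hostname, localhost and minikube, then signing it with the CA from certs/ca.pem. The crypto/x509 sketch below shows the template construction only; it self-signs so it stays runnable without the CA files, whereas the real flow signs with the existing CA key.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-042402"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs as listed in the log's "generating server cert" line.
		DNSNames:    []string{"embed-certs-042402", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.235")},
	}
	// The real flow signs with ca.pem / ca-key.pem; self-sign here to keep the
	// sketch self-contained.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}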
	I1030 19:45:45.019172  446965 provision.go:177] copyRemoteCerts
	I1030 19:45:45.019234  446965 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 19:45:45.019265  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:45.022052  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.022402  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:45.022435  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.022590  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:45.022788  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.022969  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:45.023123  446965 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa Username:docker}
	I1030 19:45:45.104733  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 19:45:45.128256  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1030 19:45:45.150758  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1030 19:45:45.173233  446965 provision.go:87] duration metric: took 401.450922ms to configureAuth
	I1030 19:45:45.173268  446965 buildroot.go:189] setting minikube options for container-runtime
	I1030 19:45:45.173465  446965 config.go:182] Loaded profile config "embed-certs-042402": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:45:45.173562  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:45.176259  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.176663  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:45.176698  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.176826  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:45.177025  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.177190  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.177364  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:45.177554  446965 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:45.177724  446965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.235 22 <nil> <nil>}
	I1030 19:45:45.177737  446965 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 19:45:45.396562  446965 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 19:45:45.396593  446965 machine.go:96] duration metric: took 976.740759ms to provisionDockerMachine
	I1030 19:45:45.396606  446965 start.go:293] postStartSetup for "embed-certs-042402" (driver="kvm2")
	I1030 19:45:45.396616  446965 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 19:45:45.396644  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:45:45.397007  446965 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 19:45:45.397048  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:45.399581  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.399930  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:45.399955  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.400045  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:45.400219  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.400373  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:45.400483  446965 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa Username:docker}
	I1030 19:45:45.481722  446965 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 19:45:45.487207  446965 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 19:45:45.487231  446965 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 19:45:45.487304  446965 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 19:45:45.487398  446965 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 19:45:45.487516  446965 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 19:45:45.500340  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:45:45.524930  446965 start.go:296] duration metric: took 128.310254ms for postStartSetup
	I1030 19:45:45.524972  446965 fix.go:56] duration metric: took 19.709339085s for fixHost
	I1030 19:45:45.524993  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:45.527426  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.527751  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:45.527775  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.527931  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:45.528145  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.528326  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.528450  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:45.528591  446965 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:45.528804  446965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.235 22 <nil> <nil>}
	I1030 19:45:45.528815  446965 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 19:45:45.630961  446965 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730317545.604586107
	
	I1030 19:45:45.630997  446965 fix.go:216] guest clock: 1730317545.604586107
	I1030 19:45:45.631020  446965 fix.go:229] Guest: 2024-10-30 19:45:45.604586107 +0000 UTC Remote: 2024-10-30 19:45:45.524975841 +0000 UTC m=+302.540999350 (delta=79.610266ms)
	I1030 19:45:45.631054  446965 fix.go:200] guest clock delta is within tolerance: 79.610266ms
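[Editor's note] The guest clock check compares the guest's `date +%s.%N` output against the host clock and accepts the machine if the absolute delta is small. A short sketch of that comparison; the 2-second tolerance is an assumed value for illustration, not minikube's actual constant.

package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns the absolute
// difference from the supplied host time.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	d := guest.Sub(host)
	if d < 0 {
		d = -d
	}
	return d, nil
}

func main() {
	d, err := clockDelta("1730317545.604586107", time.Now())
	if err != nil {
		panic(err)
	}
	tolerance := 2 * time.Second // assumed tolerance for the sketch
	fmt.Printf("delta=%s within tolerance=%v\n", d, d <= tolerance)
}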
	I1030 19:45:45.631062  446965 start.go:83] releasing machines lock for "embed-certs-042402", held for 19.81546348s
	I1030 19:45:45.631109  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:45:45.631396  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetIP
	I1030 19:45:45.634114  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.634524  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:45.634558  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.634739  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:45:45.635353  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:45:45.635532  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:45:45.635646  446965 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 19:45:45.635692  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:45.635746  446965 ssh_runner.go:195] Run: cat /version.json
	I1030 19:45:45.635775  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:45.638260  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.638639  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:45.638694  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.638718  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.638931  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:45.639108  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:45.639128  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.639160  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.639260  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:45.639371  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:45.639440  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.639509  446965 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa Username:docker}
	I1030 19:45:45.639581  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:45.639723  446965 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa Username:docker}
	I1030 19:45:45.747515  446965 ssh_runner.go:195] Run: systemctl --version
	I1030 19:45:45.754851  446965 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 19:45:45.904471  446965 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 19:45:45.911348  446965 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 19:45:45.911428  446965 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 19:45:45.928273  446965 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 19:45:45.928299  446965 start.go:495] detecting cgroup driver to use...
	I1030 19:45:45.928381  446965 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 19:45:45.949100  446965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 19:45:45.963284  446965 docker.go:217] disabling cri-docker service (if available) ...
	I1030 19:45:45.963362  446965 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 19:45:45.976952  446965 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 19:45:45.991367  446965 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 19:45:46.104670  446965 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 19:45:46.254049  446965 docker.go:233] disabling docker service ...
	I1030 19:45:46.254130  446965 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 19:45:46.273226  446965 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 19:45:46.290211  446965 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 19:45:46.491658  446965 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 19:45:46.637447  446965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 19:45:46.654517  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 19:45:46.679786  446965 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1030 19:45:46.679879  446965 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:46.695487  446965 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 19:45:46.695570  446965 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:46.708974  446965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:46.724847  446965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:46.736912  446965 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 19:45:46.749015  446965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:46.761190  446965 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:46.780198  446965 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:46.790865  446965 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 19:45:46.800950  446965 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 19:45:46.801029  446965 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 19:45:46.814792  446965 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 19:45:46.825490  446965 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:45:46.952367  446965 ssh_runner.go:195] Run: sudo systemctl restart crio
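[Editor's note] The CRI-O setup above is done entirely with shell one-liners: point pause_image at registry.k8s.io/pause:3.10, switch cgroup_manager to cgroupfs, add conmon_cgroup = "pod", load br_netfilter, enable ip_forward, then daemon-reload and restart crio. The sketch below replays those commands through a tiny `sh -c` runner; on the real node they run over SSH, and the `sh` helper name is ours.

package main

import (
	"fmt"
	"os/exec"
)

// sh runs one shell command and returns an error carrying its combined output.
func sh(cmd string) error {
	out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%q: %v\n%s", cmd, err, out)
	}
	return nil
}

func main() {
	steps := []string{
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo modprobe br_netfilter`,
		`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart crio`,
	}
	for _, s := range steps {
		if err := sh(s); err != nil {
			fmt.Println(err)
			return
		}
	}
}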
	I1030 19:45:47.054874  446965 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 19:45:47.054962  446965 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 19:45:47.061036  446965 start.go:563] Will wait 60s for crictl version
	I1030 19:45:47.061105  446965 ssh_runner.go:195] Run: which crictl
	I1030 19:45:47.064917  446965 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 19:45:47.101690  446965 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1030 19:45:47.101796  446965 ssh_runner.go:195] Run: crio --version
	I1030 19:45:47.131286  446965 ssh_runner.go:195] Run: crio --version
	I1030 19:45:47.166314  446965 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1030 19:45:47.167861  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetIP
	I1030 19:45:47.171097  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:47.171438  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:47.171466  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:47.171737  446965 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1030 19:45:47.177796  446965 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:45:47.191930  446965 kubeadm.go:883] updating cluster {Name:embed-certs-042402 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:embed-certs-042402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.235 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1030 19:45:47.192090  446965 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 19:45:47.192149  446965 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:45:47.231586  446965 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1030 19:45:47.231672  446965 ssh_runner.go:195] Run: which lz4
	I1030 19:45:47.236190  446965 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1030 19:45:47.240803  446965 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1030 19:45:47.240888  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1030 19:45:45.386683  446887 node_ready.go:53] node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:47.386771  446887 node_ready.go:53] node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:48.387313  446887 node_ready.go:49] node "default-k8s-diff-port-768989" has status "Ready":"True"
	I1030 19:45:48.387344  446887 node_ready.go:38] duration metric: took 7.005318984s for node "default-k8s-diff-port-768989" to be "Ready" ...
	I1030 19:45:48.387359  446887 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:45:48.395198  446887 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9w8m8" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:48.401276  446887 pod_ready.go:93] pod "coredns-7c65d6cfc9-9w8m8" in "kube-system" namespace has status "Ready":"True"
	I1030 19:45:48.401306  446887 pod_ready.go:82] duration metric: took 6.071305ms for pod "coredns-7c65d6cfc9-9w8m8" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:48.401321  446887 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:47.003397  447486 main.go:141] libmachine: (old-k8s-version-516975) Waiting to get IP...
	I1030 19:45:47.004281  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:47.004710  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:47.004787  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:47.004695  448432 retry.go:31] will retry after 234.659459ms: waiting for machine to come up
	I1030 19:45:47.241308  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:47.241838  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:47.241863  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:47.241802  448432 retry.go:31] will retry after 350.804975ms: waiting for machine to come up
	I1030 19:45:47.594533  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:47.595106  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:47.595139  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:47.595044  448432 retry.go:31] will retry after 448.637889ms: waiting for machine to come up
	I1030 19:45:48.045858  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:48.046358  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:48.046386  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:48.046315  448432 retry.go:31] will retry after 543.947609ms: waiting for machine to come up
	I1030 19:45:48.592474  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:48.592908  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:48.592937  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:48.592875  448432 retry.go:31] will retry after 744.106735ms: waiting for machine to come up
	I1030 19:45:49.338345  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:49.338833  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:49.338857  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:49.338795  448432 retry.go:31] will retry after 927.743369ms: waiting for machine to come up
	I1030 19:45:50.267844  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:50.268359  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:50.268390  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:50.268324  448432 retry.go:31] will retry after 829.540351ms: waiting for machine to come up
	I1030 19:45:51.099379  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:51.099863  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:51.099893  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:51.099820  448432 retry.go:31] will retry after 898.768304ms: waiting for machine to come up
	I1030 19:45:48.672337  446965 crio.go:462] duration metric: took 1.436158626s to copy over tarball
	I1030 19:45:48.672439  446965 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1030 19:45:50.859055  446965 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.186572123s)
	I1030 19:45:50.859101  446965 crio.go:469] duration metric: took 2.186725028s to extract the tarball
	I1030 19:45:50.859113  446965 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1030 19:45:50.896570  446965 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:45:50.946526  446965 crio.go:514] all images are preloaded for cri-o runtime.
	I1030 19:45:50.946558  446965 cache_images.go:84] Images are preloaded, skipping loading
	I1030 19:45:50.946567  446965 kubeadm.go:934] updating node { 192.168.61.235 8443 v1.31.2 crio true true} ...
	I1030 19:45:50.946668  446965 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-042402 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.235
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-042402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1030 19:45:50.946748  446965 ssh_runner.go:195] Run: crio config
	I1030 19:45:50.992305  446965 cni.go:84] Creating CNI manager for ""
	I1030 19:45:50.992337  446965 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:45:50.992348  446965 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1030 19:45:50.992374  446965 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.235 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-042402 NodeName:embed-certs-042402 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.235"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.235 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1030 19:45:50.992530  446965 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.235
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-042402"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.235"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.235"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
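	(Editorial aside: the kubeadm YAML printed above is produced by filling a template with the values discovered for this profile, chiefly the node IP 192.168.61.235, the cluster name embed-certs-042402, and the CRI-O socket, before it is copied to /var/tmp/minikube/kubeadm.yaml.new. The following is a minimal, hypothetical Go sketch of that kind of template rendering; it is not minikube's actual code, and the struct and template names are invented for illustration.)

	package main

	import (
		"os"
		"text/template"
	)

	// kubeadmParams holds the handful of values substituted into the config.
	// The concrete values below are taken from the log; everything else is
	// an assumption made for this sketch.
	type kubeadmParams struct {
		AdvertiseAddress  string
		BindPort          int
		NodeName          string
		CRISocket         string
		PodSubnet         string
		ServiceSubnet     string
		KubernetesVersion string
	}

	const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`

	func main() {
		p := kubeadmParams{
			AdvertiseAddress:  "192.168.61.235",
			BindPort:          8443,
			NodeName:          "embed-certs-042402",
			CRISocket:         "unix:///var/run/crio/crio.sock",
			PodSubnet:         "10.244.0.0/16",
			ServiceSubnet:     "10.96.0.0/12",
			KubernetesVersion: "v1.31.2",
		}
		// Render to stdout; the real flow writes the rendered file to the
		// guest over SSH instead of printing it.
		tmpl := template.Must(template.New("kubeadm").Parse(initTmpl))
		if err := tmpl.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}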
	I1030 19:45:50.992616  446965 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1030 19:45:51.002586  446965 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 19:45:51.002668  446965 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1030 19:45:51.012058  446965 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1030 19:45:51.028645  446965 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 19:45:51.044912  446965 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
	I1030 19:45:51.060991  446965 ssh_runner.go:195] Run: grep 192.168.61.235	control-plane.minikube.internal$ /etc/hosts
	I1030 19:45:51.064808  446965 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.235	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
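	(Editorial aside: the two commands above first grep /etc/hosts for an existing control-plane.minikube.internal entry and, if needed, rewrite the file with the mapping appended, so the update is idempotent. Below is a small illustrative Go sketch of the same idea; ensureHostsEntry is a hypothetical helper, not part of the tool.)

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry rewrites hostsPath so that exactly one line maps host
	// to ip, keeping all unrelated lines. Illustrative only.
	func ensureHostsEntry(hostsPath, ip, host string) error {
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			trimmed := strings.TrimSpace(line)
			if strings.HasSuffix(trimmed, "\t"+host) || strings.HasSuffix(trimmed, " "+host) {
				continue // drop any stale mapping for this host
			}
			kept = append(kept, line)
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
		return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "192.168.61.235", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}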
	I1030 19:45:51.076790  446965 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:45:51.205861  446965 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:45:51.224763  446965 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402 for IP: 192.168.61.235
	I1030 19:45:51.224791  446965 certs.go:194] generating shared ca certs ...
	I1030 19:45:51.224812  446965 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:45:51.224986  446965 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 19:45:51.225046  446965 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 19:45:51.225059  446965 certs.go:256] generating profile certs ...
	I1030 19:45:51.225175  446965 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/client.key
	I1030 19:45:51.225256  446965 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/apiserver.key.f6f7691e
	I1030 19:45:51.225314  446965 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/proxy-client.key
	I1030 19:45:51.225469  446965 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem (1338 bytes)
	W1030 19:45:51.225518  446965 certs.go:480] ignoring /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144_empty.pem, impossibly tiny 0 bytes
	I1030 19:45:51.225540  446965 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 19:45:51.225574  446965 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 19:45:51.225612  446965 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 19:45:51.225651  446965 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 19:45:51.225714  446965 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:45:51.226718  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 19:45:51.278345  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 19:45:51.308707  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 19:45:51.349986  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 19:45:51.382176  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1030 19:45:51.426538  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1030 19:45:51.457131  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 19:45:51.481165  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1030 19:45:51.505285  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /usr/share/ca-certificates/3891442.pem (1708 bytes)
	I1030 19:45:51.533986  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 19:45:51.562660  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem --> /usr/share/ca-certificates/389144.pem (1338 bytes)
	I1030 19:45:51.586002  446965 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1030 19:45:51.602544  446965 ssh_runner.go:195] Run: openssl version
	I1030 19:45:51.608479  446965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 19:45:51.620650  446965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:45:51.625243  446965 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:45:51.625294  446965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:45:51.631138  446965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 19:45:51.643167  446965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/389144.pem && ln -fs /usr/share/ca-certificates/389144.pem /etc/ssl/certs/389144.pem"
	I1030 19:45:51.655128  446965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/389144.pem
	I1030 19:45:51.659528  446965 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 19:45:51.659600  446965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/389144.pem
	I1030 19:45:51.665370  446965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/389144.pem /etc/ssl/certs/51391683.0"
	I1030 19:45:51.676314  446965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3891442.pem && ln -fs /usr/share/ca-certificates/3891442.pem /etc/ssl/certs/3891442.pem"
	I1030 19:45:51.687386  446965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3891442.pem
	I1030 19:45:51.692170  446965 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 19:45:51.692228  446965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3891442.pem
	I1030 19:45:51.697897  446965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3891442.pem /etc/ssl/certs/3ec20f2e.0"
	I1030 19:45:51.709561  446965 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 19:45:51.715357  446965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1030 19:45:51.723291  446965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1030 19:45:51.731362  446965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1030 19:45:51.739724  446965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1030 19:45:51.747383  446965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1030 19:45:51.753472  446965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
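	(Editorial aside: each `openssl x509 -checkend 86400` invocation above exits non-zero if the certificate would expire within the next 24 hours, which is what triggers regeneration on restart. A hedged Go equivalent using crypto/x509 is sketched below; the paths are the ones from the log and the helper name is invented.)

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in the PEM file
	// expires within d, mirroring `openssl x509 -checkend <seconds>`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		for _, p := range []string{
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
		} {
			soon, err := expiresWithin(p, 24*time.Hour)
			if err != nil {
				fmt.Fprintln(os.Stderr, err)
				continue
			}
			fmt.Printf("%s expires within 24h: %v\n", p, soon)
		}
	}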
	I1030 19:45:51.759462  446965 kubeadm.go:392] StartCluster: {Name:embed-certs-042402 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-042402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.235 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:45:51.759605  446965 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1030 19:45:51.759702  446965 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:45:51.806863  446965 cri.go:89] found id: ""
	I1030 19:45:51.806956  446965 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1030 19:45:51.818195  446965 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1030 19:45:51.818218  446965 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1030 19:45:51.818274  446965 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1030 19:45:51.828762  446965 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1030 19:45:51.830149  446965 kubeconfig.go:125] found "embed-certs-042402" server: "https://192.168.61.235:8443"
	I1030 19:45:51.832269  446965 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1030 19:45:51.842769  446965 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.235
	I1030 19:45:51.842808  446965 kubeadm.go:1160] stopping kube-system containers ...
	I1030 19:45:51.842823  446965 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1030 19:45:51.842889  446965 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:45:51.887128  446965 cri.go:89] found id: ""
	I1030 19:45:51.887209  446965 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1030 19:45:51.911918  446965 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:45:51.922685  446965 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:45:51.922714  446965 kubeadm.go:157] found existing configuration files:
	
	I1030 19:45:51.922770  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 19:45:51.935548  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:45:51.935620  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:45:51.948635  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 19:45:51.961647  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:45:51.961745  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:45:51.975880  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 19:45:51.986852  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:45:51.986922  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:45:52.001290  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 19:45:52.015249  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:45:52.015333  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1030 19:45:52.026657  446965 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 19:45:52.038560  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:52.167697  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:50.408274  446887 pod_ready.go:103] pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace has status "Ready":"False"
	I1030 19:45:51.407818  446887 pod_ready.go:93] pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace has status "Ready":"True"
	I1030 19:45:51.407850  446887 pod_ready.go:82] duration metric: took 3.006520689s for pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:51.407865  446887 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:51.413452  446887 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-768989" in "kube-system" namespace has status "Ready":"True"
	I1030 19:45:51.413481  446887 pod_ready.go:82] duration metric: took 5.607077ms for pod "kube-apiserver-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:51.413495  446887 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:52.000678  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:52.001196  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:52.001235  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:52.001148  448432 retry.go:31] will retry after 1.750749509s: waiting for machine to come up
	I1030 19:45:53.753607  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:53.754013  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:53.754038  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:53.753950  448432 retry.go:31] will retry after 1.537350682s: waiting for machine to come up
	I1030 19:45:55.293910  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:55.294396  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:55.294427  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:55.294336  448432 retry.go:31] will retry after 2.151521323s: waiting for machine to come up
	I1030 19:45:53.477258  446965 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.309509141s)
	I1030 19:45:53.477309  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:53.696850  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:53.768419  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:53.863913  446965 api_server.go:52] waiting for apiserver process to appear ...
	I1030 19:45:53.864018  446965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:54.364235  446965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:54.864820  446965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:54.887333  446965 api_server.go:72] duration metric: took 1.023419155s to wait for apiserver process to appear ...
	I1030 19:45:54.887363  446965 api_server.go:88] waiting for apiserver healthz status ...
	I1030 19:45:54.887399  446965 api_server.go:253] Checking apiserver healthz at https://192.168.61.235:8443/healthz ...
	I1030 19:45:54.887929  446965 api_server.go:269] stopped: https://192.168.61.235:8443/healthz: Get "https://192.168.61.235:8443/healthz": dial tcp 192.168.61.235:8443: connect: connection refused
	I1030 19:45:55.388396  446965 api_server.go:253] Checking apiserver healthz at https://192.168.61.235:8443/healthz ...
	I1030 19:45:57.610916  446965 api_server.go:279] https://192.168.61.235:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1030 19:45:57.610951  446965 api_server.go:103] status: https://192.168.61.235:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1030 19:45:57.610972  446965 api_server.go:253] Checking apiserver healthz at https://192.168.61.235:8443/healthz ...
	I1030 19:45:57.745722  446965 api_server.go:279] https://192.168.61.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:45:57.745782  446965 api_server.go:103] status: https://192.168.61.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:45:57.887887  446965 api_server.go:253] Checking apiserver healthz at https://192.168.61.235:8443/healthz ...
	I1030 19:45:57.895296  446965 api_server.go:279] https://192.168.61.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:45:57.895352  446965 api_server.go:103] status: https://192.168.61.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:45:54.167893  446887 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace has status "Ready":"False"
	I1030 19:45:54.920921  446887 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace has status "Ready":"True"
	I1030 19:45:54.920954  446887 pod_ready.go:82] duration metric: took 3.507449937s for pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:54.920974  446887 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tsr5q" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:54.927123  446887 pod_ready.go:93] pod "kube-proxy-tsr5q" in "kube-system" namespace has status "Ready":"True"
	I1030 19:45:54.927150  446887 pod_ready.go:82] duration metric: took 6.167749ms for pod "kube-proxy-tsr5q" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:54.927164  446887 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:54.932513  446887 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-768989" in "kube-system" namespace has status "Ready":"True"
	I1030 19:45:54.932540  446887 pod_ready.go:82] duration metric: took 5.367579ms for pod "kube-scheduler-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:54.932557  446887 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:56.939174  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:45:58.388076  446965 api_server.go:253] Checking apiserver healthz at https://192.168.61.235:8443/healthz ...
	I1030 19:45:58.393192  446965 api_server.go:279] https://192.168.61.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:45:58.393235  446965 api_server.go:103] status: https://192.168.61.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:45:58.887710  446965 api_server.go:253] Checking apiserver healthz at https://192.168.61.235:8443/healthz ...
	I1030 19:45:58.891923  446965 api_server.go:279] https://192.168.61.235:8443/healthz returned 200:
	ok
	I1030 19:45:58.897783  446965 api_server.go:141] control plane version: v1.31.2
	I1030 19:45:58.897816  446965 api_server.go:131] duration metric: took 4.010443495s to wait for apiserver health ...
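	(Editorial aside: the lines above poll https://192.168.61.235:8443/healthz roughly every 500ms, treating the 403 and 500 responses as "not ready yet" until the endpoint returns 200 ok. The Go snippet below is a minimal sketch of such a poll, assuming a self-signed apiserver certificate and therefore skipping TLS verification purely for illustration; production code would pin the cluster CA instead.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or timeout elapses.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				// Not ready yet: 403 before RBAC bootstrap, 500 while
				// poststart hooks are still failing.
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.235:8443/healthz", 4*time.Minute); err != nil {
			panic(err)
		}
	}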
	I1030 19:45:58.897836  446965 cni.go:84] Creating CNI manager for ""
	I1030 19:45:58.897844  446965 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:45:58.899669  446965 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1030 19:45:57.447894  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:57.448365  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:57.448392  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:57.448320  448432 retry.go:31] will retry after 2.439938206s: waiting for machine to come up
	I1030 19:45:59.889685  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:59.890166  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:59.890205  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:59.890113  448432 retry.go:31] will retry after 3.836080386s: waiting for machine to come up
	I1030 19:45:58.901122  446965 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1030 19:45:58.924765  446965 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1030 19:45:58.946342  446965 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 19:45:58.956378  446965 system_pods.go:59] 8 kube-system pods found
	I1030 19:45:58.956412  446965 system_pods.go:61] "coredns-7c65d6cfc9-tv6kc" [d752975e-e126-4d22-9b35-b9f57d1170b1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1030 19:45:58.956419  446965 system_pods.go:61] "etcd-embed-certs-042402" [fa9b90f6-82b2-448a-ad86-9cbba45a4c2d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1030 19:45:58.956427  446965 system_pods.go:61] "kube-apiserver-embed-certs-042402" [48af3136-74d9-4062-bb9a-e48dafd311a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1030 19:45:58.956436  446965 system_pods.go:61] "kube-controller-manager-embed-certs-042402" [0ae60724-6634-464a-af2f-e08148fb3eaf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1030 19:45:58.956445  446965 system_pods.go:61] "kube-proxy-qwjr9" [309ee447-8d52-49e7-a805-2b7c0af2a3bc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1030 19:45:58.956450  446965 system_pods.go:61] "kube-scheduler-embed-certs-042402" [f82ff11e-8305-4d05-b370-fd89693e5ad1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1030 19:45:58.956454  446965 system_pods.go:61] "metrics-server-6867b74b74-4x9t6" [1160789d-9462-4d1d-9f84-5ded8394bd4e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:45:58.956459  446965 system_pods.go:61] "storage-provisioner" [d1559440-b14a-4c2a-a52e-ba39afb01f94] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1030 19:45:58.956465  446965 system_pods.go:74] duration metric: took 10.103898ms to wait for pod list to return data ...
	I1030 19:45:58.956473  446965 node_conditions.go:102] verifying NodePressure condition ...
	I1030 19:45:58.960150  446965 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 19:45:58.960182  446965 node_conditions.go:123] node cpu capacity is 2
	I1030 19:45:58.960195  446965 node_conditions.go:105] duration metric: took 3.712942ms to run NodePressure ...
	I1030 19:45:58.960219  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:59.284558  446965 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1030 19:45:59.289073  446965 kubeadm.go:739] kubelet initialised
	I1030 19:45:59.289095  446965 kubeadm.go:740] duration metric: took 4.508144ms waiting for restarted kubelet to initialise ...
	I1030 19:45:59.289104  446965 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:45:59.293538  446965 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-tv6kc" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:01.298780  446965 pod_ready.go:103] pod "coredns-7c65d6cfc9-tv6kc" in "kube-system" namespace has status "Ready":"False"
	I1030 19:45:58.940597  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:01.439118  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
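	(Editorial aside: the pod_ready.go lines above repeatedly fetch each system pod and check its Ready condition until it reports "True" or the per-pod timeout expires. The sketch below shows one way to do the same check with client-go; the kubeconfig path is a placeholder, the pod name is taken from the log, and this is an illustration rather than the test helper itself.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True, which is
	// the condition the "waiting for pod ... to be Ready" lines track.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-default-k8s-diff-port-768989", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}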
	I1030 19:46:05.011617  446736 start.go:364] duration metric: took 52.494265895s to acquireMachinesLock for "no-preload-960512"
	I1030 19:46:05.011674  446736 start.go:96] Skipping create...Using existing machine configuration
	I1030 19:46:05.011683  446736 fix.go:54] fixHost starting: 
	I1030 19:46:05.012022  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:05.012087  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:05.029067  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46027
	I1030 19:46:05.029484  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:05.030010  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:05.030039  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:05.030461  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:05.030690  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:05.030854  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetState
	I1030 19:46:05.032380  446736 fix.go:112] recreateIfNeeded on no-preload-960512: state=Stopped err=<nil>
	I1030 19:46:05.032408  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	W1030 19:46:05.032566  446736 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 19:46:05.035693  446736 out.go:177] * Restarting existing kvm2 VM for "no-preload-960512" ...
	I1030 19:46:03.727617  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.728028  447486 main.go:141] libmachine: (old-k8s-version-516975) Found IP for machine: 192.168.50.250
	I1030 19:46:03.728046  447486 main.go:141] libmachine: (old-k8s-version-516975) Reserving static IP address...
	I1030 19:46:03.728062  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has current primary IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.728565  447486 main.go:141] libmachine: (old-k8s-version-516975) Reserved static IP address: 192.168.50.250
	I1030 19:46:03.728600  447486 main.go:141] libmachine: (old-k8s-version-516975) Waiting for SSH to be available...
	I1030 19:46:03.728616  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "old-k8s-version-516975", mac: "52:54:00:46:32:46", ip: "192.168.50.250"} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:03.728639  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | skip adding static IP to network mk-old-k8s-version-516975 - found existing host DHCP lease matching {name: "old-k8s-version-516975", mac: "52:54:00:46:32:46", ip: "192.168.50.250"}
	I1030 19:46:03.728657  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | Getting to WaitForSSH function...
	I1030 19:46:03.730754  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.731085  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:03.731121  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.731145  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | Using SSH client type: external
	I1030 19:46:03.731212  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa (-rw-------)
	I1030 19:46:03.731252  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.250 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 19:46:03.731275  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | About to run SSH command:
	I1030 19:46:03.731289  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | exit 0
	I1030 19:46:03.862423  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | SSH cmd err, output: <nil>: 
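	(Editorial aside: the WaitForSSH block above repeatedly runs `exit 0` over an external ssh client against 192.168.50.250 until the daemon answers. The short Go sketch below waits for the TCP port instead; waitForSSHPort is a hypothetical helper, not the libmachine implementation, and it stops one step short of the real flow, which also confirms a command can run over SSH.)

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForSSHPort blocks until something accepts TCP connections on addr
	// or the timeout expires.
	func waitForSSHPort(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("%s did not become reachable within %s", addr, timeout)
	}

	func main() {
		if err := waitForSSHPort("192.168.50.250:22", 3*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("SSH port is open")
	}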
	I1030 19:46:03.862832  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetConfigRaw
	I1030 19:46:03.863519  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetIP
	I1030 19:46:03.865977  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.866262  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:03.866297  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.866512  447486 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/config.json ...
	I1030 19:46:03.866755  447486 machine.go:93] provisionDockerMachine start ...
	I1030 19:46:03.866783  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:46:03.866994  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:03.869079  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.869384  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:03.869410  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.869603  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:03.869787  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:03.869949  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:03.870102  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:03.870243  447486 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:03.870468  447486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I1030 19:46:03.870481  447486 main.go:141] libmachine: About to run SSH command:
	hostname
	I1030 19:46:03.982986  447486 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1030 19:46:03.983018  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetMachineName
	I1030 19:46:03.983285  447486 buildroot.go:166] provisioning hostname "old-k8s-version-516975"
	I1030 19:46:03.983319  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetMachineName
	I1030 19:46:03.983502  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:03.986203  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.986576  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:03.986615  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.986765  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:03.986983  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:03.987126  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:03.987258  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:03.987419  447486 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:03.987696  447486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I1030 19:46:03.987719  447486 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-516975 && echo "old-k8s-version-516975" | sudo tee /etc/hostname
	I1030 19:46:04.112692  447486 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-516975
	
	I1030 19:46:04.112719  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:04.115948  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.116283  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.116309  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.116482  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:04.116667  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.116842  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.116966  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:04.117104  447486 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:04.117275  447486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I1030 19:46:04.117290  447486 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-516975' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-516975/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-516975' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 19:46:04.235988  447486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 19:46:04.236032  447486 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 19:46:04.236098  447486 buildroot.go:174] setting up certificates
	I1030 19:46:04.236111  447486 provision.go:84] configureAuth start
	I1030 19:46:04.236124  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetMachineName
	I1030 19:46:04.236500  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetIP
	I1030 19:46:04.239328  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.239707  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.239739  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.240009  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:04.242118  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.242440  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.242505  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.242683  447486 provision.go:143] copyHostCerts
	I1030 19:46:04.242766  447486 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem, removing ...
	I1030 19:46:04.242787  447486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 19:46:04.242847  447486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 19:46:04.242972  447486 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem, removing ...
	I1030 19:46:04.242986  447486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 19:46:04.243011  447486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 19:46:04.243072  447486 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem, removing ...
	I1030 19:46:04.243079  447486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 19:46:04.243095  447486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 19:46:04.243153  447486 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-516975 san=[127.0.0.1 192.168.50.250 localhost minikube old-k8s-version-516975]
	I1030 19:46:04.355003  447486 provision.go:177] copyRemoteCerts
	I1030 19:46:04.355061  447486 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 19:46:04.355092  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:04.357788  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.358153  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.358191  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.358397  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:04.358630  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.358809  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:04.358970  447486 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa Username:docker}
	I1030 19:46:04.446614  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 19:46:04.473708  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1030 19:46:04.497721  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1030 19:46:04.521806  447486 provision.go:87] duration metric: took 285.682041ms to configureAuth
	I1030 19:46:04.521836  447486 buildroot.go:189] setting minikube options for container-runtime
	I1030 19:46:04.521999  447486 config.go:182] Loaded profile config "old-k8s-version-516975": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1030 19:46:04.522072  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:04.524616  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.525034  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.525065  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.525282  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:04.525452  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.525616  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.525745  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:04.525916  447486 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:04.526129  447486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I1030 19:46:04.526145  447486 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 19:46:04.766663  447486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 19:46:04.766697  447486 machine.go:96] duration metric: took 899.924211ms to provisionDockerMachine
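Note: the provisionDockerMachine step that finishes here wrote /etc/sysconfig/crio.minikube with CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ' and restarted crio (see the SSH command above). A quick, hypothetical way to confirm the drop-in on the guest, using the profile name from this log, would be:

	minikube ssh -p old-k8s-version-516975 -- cat /etc/sysconfig/crio.minikube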
	I1030 19:46:04.766709  447486 start.go:293] postStartSetup for "old-k8s-version-516975" (driver="kvm2")
	I1030 19:46:04.766720  447486 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 19:46:04.766745  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:46:04.767081  447486 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 19:46:04.767114  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:04.769995  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.770401  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.770428  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.770580  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:04.770762  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.770973  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:04.771132  447486 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa Username:docker}
	I1030 19:46:04.858006  447486 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 19:46:04.862295  447486 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 19:46:04.862324  447486 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 19:46:04.862387  447486 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 19:46:04.862475  447486 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 19:46:04.862612  447486 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 19:46:04.872541  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:46:04.896306  447486 start.go:296] duration metric: took 129.577956ms for postStartSetup
	I1030 19:46:04.896360  447486 fix.go:56] duration metric: took 19.265077419s for fixHost
	I1030 19:46:04.896383  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:04.899009  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.899397  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.899429  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.899538  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:04.899739  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.899906  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.900101  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:04.900271  447486 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:04.900510  447486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I1030 19:46:04.900525  447486 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 19:46:05.011439  447486 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730317564.967936408
	
	I1030 19:46:05.011464  447486 fix.go:216] guest clock: 1730317564.967936408
	I1030 19:46:05.011472  447486 fix.go:229] Guest: 2024-10-30 19:46:04.967936408 +0000 UTC Remote: 2024-10-30 19:46:04.896364572 +0000 UTC m=+233.135558535 (delta=71.571836ms)
	I1030 19:46:05.011516  447486 fix.go:200] guest clock delta is within tolerance: 71.571836ms
	I1030 19:46:05.011525  447486 start.go:83] releasing machines lock for "old-k8s-version-516975", held for 19.380292064s
	I1030 19:46:05.011552  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:46:05.011853  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetIP
	I1030 19:46:05.014722  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:05.015072  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:05.015100  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:05.015225  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:46:05.015808  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:46:05.016002  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:46:05.016107  447486 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 19:46:05.016155  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:05.016265  447486 ssh_runner.go:195] Run: cat /version.json
	I1030 19:46:05.016296  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:05.018976  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:05.019189  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:05.019326  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:05.019370  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:05.019517  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:05.019604  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:05.019632  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:05.019708  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:05.019830  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:05.019918  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:05.019995  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:05.020077  447486 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa Username:docker}
	I1030 19:46:05.020157  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:05.020295  447486 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa Username:docker}
	I1030 19:46:05.100852  447486 ssh_runner.go:195] Run: systemctl --version
	I1030 19:46:05.127673  447486 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 19:46:05.279889  447486 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 19:46:05.285900  447486 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 19:46:05.285976  447486 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 19:46:05.304763  447486 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 19:46:05.304791  447486 start.go:495] detecting cgroup driver to use...
	I1030 19:46:05.304862  447486 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 19:46:05.325729  447486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 19:46:05.343047  447486 docker.go:217] disabling cri-docker service (if available) ...
	I1030 19:46:05.343128  447486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 19:46:05.358748  447486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 19:46:05.374769  447486 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 19:46:05.492589  447486 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 19:46:05.639943  447486 docker.go:233] disabling docker service ...
	I1030 19:46:05.640039  447486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 19:46:05.655449  447486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 19:46:05.669688  447486 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 19:46:05.814658  447486 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 19:46:05.957944  447486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
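Note: the sequence above stops, disables and masks both cri-docker and docker so that CRI-O is the only runtime left serving the node; the final is-active check merely verifies docker is no longer running. A hypothetical follow-up check of the unit states on the guest would be:

	minikube ssh -p old-k8s-version-516975 -- sudo systemctl is-enabled docker.service cri-docker.service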
	I1030 19:46:05.972122  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 19:46:05.990577  447486 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1030 19:46:05.990653  447486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:06.000834  447486 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 19:46:06.000907  447486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:06.011678  447486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:06.022051  447486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:06.032515  447486 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 19:46:06.043296  447486 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 19:46:06.053123  447486 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 19:46:06.053170  447486 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 19:46:06.067625  447486 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 19:46:06.081306  447486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:46:06.221181  447486 ssh_runner.go:195] Run: sudo systemctl restart crio
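Note: the sed edits above point /etc/crio/crio.conf.d/02-crio.conf at the v1.20-era pause image and switch CRI-O to the cgroupfs driver before crio is restarted. Based only on those sed expressions, the affected keys in the drop-in should end up reading roughly like this (a sketch; the surrounding TOML sections are assumed, not shown in the log):

	# /etc/crio/crio.conf.d/02-crio.conf (assumed fragment)
	pause_image = "registry.k8s.io/pause:3.2"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"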
	I1030 19:46:06.321848  447486 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 19:46:06.321926  447486 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 19:46:06.329697  447486 start.go:563] Will wait 60s for crictl version
	I1030 19:46:06.329757  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:06.333980  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 19:46:06.381198  447486 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1030 19:46:06.381290  447486 ssh_runner.go:195] Run: crio --version
	I1030 19:46:06.410365  447486 ssh_runner.go:195] Run: crio --version
	I1030 19:46:06.442329  447486 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1030 19:46:06.443471  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetIP
	I1030 19:46:06.446233  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:06.446621  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:06.446653  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:06.446822  447486 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1030 19:46:06.451216  447486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:46:06.464477  447486 kubeadm.go:883] updating cluster {Name:old-k8s-version-516975 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-516975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1030 19:46:06.464607  447486 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1030 19:46:06.464668  447486 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:46:06.513123  447486 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1030 19:46:06.513205  447486 ssh_runner.go:195] Run: which lz4
	I1030 19:46:06.517252  447486 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1030 19:46:06.521358  447486 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1030 19:46:06.521384  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
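Note: /preloaded.tar.lz4 is missing on the freshly restarted guest, so the ~451 MiB preload tarball (473,237,281 bytes) is copied over and, a few lines further down in this log, unpacked into /var with "tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4". A hypothetical sanity check of the cached tarball on the Jenkins host, using the path from this log, would be:

	ls -l /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4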
	I1030 19:46:03.300213  446965 pod_ready.go:103] pod "coredns-7c65d6cfc9-tv6kc" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:05.301139  446965 pod_ready.go:103] pod "coredns-7c65d6cfc9-tv6kc" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:07.303015  446965 pod_ready.go:103] pod "coredns-7c65d6cfc9-tv6kc" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:03.939240  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:05.940212  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:07.942062  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:05.037179  446736 main.go:141] libmachine: (no-preload-960512) Calling .Start
	I1030 19:46:05.037388  446736 main.go:141] libmachine: (no-preload-960512) Ensuring networks are active...
	I1030 19:46:05.038384  446736 main.go:141] libmachine: (no-preload-960512) Ensuring network default is active
	I1030 19:46:05.038793  446736 main.go:141] libmachine: (no-preload-960512) Ensuring network mk-no-preload-960512 is active
	I1030 19:46:05.039208  446736 main.go:141] libmachine: (no-preload-960512) Getting domain xml...
	I1030 19:46:05.040083  446736 main.go:141] libmachine: (no-preload-960512) Creating domain...
	I1030 19:46:06.366674  446736 main.go:141] libmachine: (no-preload-960512) Waiting to get IP...
	I1030 19:46:06.367568  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:06.368016  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:06.368083  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:06.367984  448568 retry.go:31] will retry after 216.900908ms: waiting for machine to come up
	I1030 19:46:06.586638  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:06.587182  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:06.587213  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:06.587121  448568 retry.go:31] will retry after 319.082011ms: waiting for machine to come up
	I1030 19:46:06.907974  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:06.908650  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:06.908683  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:06.908581  448568 retry.go:31] will retry after 418.339306ms: waiting for machine to come up
	I1030 19:46:07.328241  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:07.329035  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:07.329065  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:07.328988  448568 retry.go:31] will retry after 523.624135ms: waiting for machine to come up
	I1030 19:46:07.855234  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:07.855944  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:07.855970  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:07.855849  448568 retry.go:31] will retry after 556.06146ms: waiting for machine to come up
	I1030 19:46:08.413474  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:08.414059  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:08.414098  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:08.413947  448568 retry.go:31] will retry after 713.043389ms: waiting for machine to come up
	I1030 19:46:09.128274  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:09.128737  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:09.128762  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:09.128689  448568 retry.go:31] will retry after 1.096111238s: waiting for machine to come up
	I1030 19:46:08.144772  447486 crio.go:462] duration metric: took 1.627547543s to copy over tarball
	I1030 19:46:08.144845  447486 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1030 19:46:11.104192  447486 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.959302647s)
	I1030 19:46:11.104228  447486 crio.go:469] duration metric: took 2.959426051s to extract the tarball
	I1030 19:46:11.104240  447486 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1030 19:46:11.146584  447486 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:46:11.183766  447486 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1030 19:46:11.183797  447486 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1030 19:46:11.183889  447486 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:11.183917  447486 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.183932  447486 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:11.183968  447486 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.184087  447486 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.183972  447486 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1030 19:46:11.183969  447486 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.183928  447486 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:11.185976  447486 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:11.186001  447486 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1030 19:46:11.186043  447486 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.186053  447486 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.186046  447486 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:11.185977  447486 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:11.186108  447486 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.186150  447486 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.348134  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.391191  447486 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1030 19:46:11.391327  447486 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.391399  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.396693  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.400062  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.406656  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1030 19:46:11.410534  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.410590  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.441896  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:11.460400  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:11.482465  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.554431  447486 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1030 19:46:11.554480  447486 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.554549  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.610376  447486 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1030 19:46:11.610424  447486 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1030 19:46:11.610471  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.616060  447486 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1030 19:46:11.616104  447486 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.616153  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.616177  447486 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1030 19:46:11.616217  447486 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.616282  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.617473  447486 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1030 19:46:11.617502  447486 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:11.617535  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.652124  447486 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1030 19:46:11.652185  447486 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:11.652228  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.652233  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.652237  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.652331  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1030 19:46:11.652376  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.652433  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.652483  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:11.798844  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1030 19:46:11.798859  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1030 19:46:11.798873  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.798949  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:11.799075  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.799179  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.799182  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:08.303450  446965 pod_ready.go:93] pod "coredns-7c65d6cfc9-tv6kc" in "kube-system" namespace has status "Ready":"True"
	I1030 19:46:08.303482  446965 pod_ready.go:82] duration metric: took 9.009918893s for pod "coredns-7c65d6cfc9-tv6kc" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:08.303498  446965 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:08.312186  446965 pod_ready.go:93] pod "etcd-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:46:08.312213  446965 pod_ready.go:82] duration metric: took 8.706192ms for pod "etcd-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:08.312228  446965 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:10.320161  446965 pod_ready.go:103] pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:10.439107  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:12.439663  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:10.226842  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:10.227315  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:10.227346  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:10.227261  448568 retry.go:31] will retry after 1.165335625s: waiting for machine to come up
	I1030 19:46:11.394231  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:11.394817  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:11.394851  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:11.394763  448568 retry.go:31] will retry after 1.292571083s: waiting for machine to come up
	I1030 19:46:12.688486  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:12.688919  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:12.688965  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:12.688862  448568 retry.go:31] will retry after 1.97645889s: waiting for machine to come up
	I1030 19:46:14.667783  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:14.668245  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:14.668278  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:14.668200  448568 retry.go:31] will retry after 2.020488863s: waiting for machine to come up
	I1030 19:46:11.942258  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:11.942265  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1030 19:46:11.942365  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.942352  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.942421  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.946933  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:12.064951  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1030 19:46:12.067930  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:12.067990  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1030 19:46:12.068057  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1030 19:46:12.068078  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1030 19:46:12.083122  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1030 19:46:12.107265  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1030 19:46:13.402970  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:13.551979  447486 cache_images.go:92] duration metric: took 2.368158873s to LoadCachedImages
	W1030 19:46:13.552080  447486 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
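Note: this warning is non-fatal; the preload did not contain the v1.20.0 images and the per-image cache on the Jenkins host is missing coredns_1.7.0, so minikube carries on and the control-plane images are expected to be pulled from the registry later in the start. A hypothetical way to list which per-image cache files actually exist, using the directory from this log, would be:

	ls /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/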
	I1030 19:46:13.552096  447486 kubeadm.go:934] updating node { 192.168.50.250 8443 v1.20.0 crio true true} ...
	I1030 19:46:13.552211  447486 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-516975 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-516975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1030 19:46:13.552276  447486 ssh_runner.go:195] Run: crio config
	I1030 19:46:13.605982  447486 cni.go:84] Creating CNI manager for ""
	I1030 19:46:13.606008  447486 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:46:13.606020  447486 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1030 19:46:13.606049  447486 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.250 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-516975 NodeName:old-k8s-version-516975 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.250"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.250 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1030 19:46:13.606223  447486 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.250
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-516975"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.250
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.250"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1030 19:46:13.606299  447486 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1030 19:46:13.616954  447486 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 19:46:13.617034  447486 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1030 19:46:13.627440  447486 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1030 19:46:13.644821  447486 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 19:46:13.662070  447486 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1030 19:46:13.679198  447486 ssh_runner.go:195] Run: grep 192.168.50.250	control-plane.minikube.internal$ /etc/hosts
	I1030 19:46:13.682992  447486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.250	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:46:13.697879  447486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:46:13.819975  447486 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:46:13.838669  447486 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975 for IP: 192.168.50.250
	I1030 19:46:13.838695  447486 certs.go:194] generating shared ca certs ...
	I1030 19:46:13.838716  447486 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:46:13.838888  447486 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 19:46:13.838946  447486 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 19:46:13.838962  447486 certs.go:256] generating profile certs ...
	I1030 19:46:13.839064  447486 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/client.key
	I1030 19:46:13.839149  447486 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/apiserver.key.685bdf3e
	I1030 19:46:13.839208  447486 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/proxy-client.key
	I1030 19:46:13.839375  447486 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem (1338 bytes)
	W1030 19:46:13.839429  447486 certs.go:480] ignoring /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144_empty.pem, impossibly tiny 0 bytes
	I1030 19:46:13.839442  447486 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 19:46:13.839476  447486 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 19:46:13.839509  447486 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 19:46:13.839545  447486 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 19:46:13.839609  447486 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:46:13.840381  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 19:46:13.868947  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 19:46:13.923848  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 19:46:13.973167  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 19:46:14.009333  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1030 19:46:14.042397  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1030 19:46:14.073927  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 19:46:14.109209  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1030 19:46:14.135708  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /usr/share/ca-certificates/3891442.pem (1708 bytes)
	I1030 19:46:14.162145  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 19:46:14.186176  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem --> /usr/share/ca-certificates/389144.pem (1338 bytes)
	I1030 19:46:14.210362  447486 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1030 19:46:14.228727  447486 ssh_runner.go:195] Run: openssl version
	I1030 19:46:14.234436  447486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/389144.pem && ln -fs /usr/share/ca-certificates/389144.pem /etc/ssl/certs/389144.pem"
	I1030 19:46:14.245497  447486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/389144.pem
	I1030 19:46:14.250026  447486 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 19:46:14.250077  447486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/389144.pem
	I1030 19:46:14.255727  447486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/389144.pem /etc/ssl/certs/51391683.0"
	I1030 19:46:14.266674  447486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3891442.pem && ln -fs /usr/share/ca-certificates/3891442.pem /etc/ssl/certs/3891442.pem"
	I1030 19:46:14.277813  447486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3891442.pem
	I1030 19:46:14.282378  447486 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 19:46:14.282435  447486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3891442.pem
	I1030 19:46:14.288338  447486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3891442.pem /etc/ssl/certs/3ec20f2e.0"
	I1030 19:46:14.300057  447486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 19:46:14.312295  447486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:46:14.317488  447486 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:46:14.317555  447486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:46:14.323518  447486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
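	The three blocks above repeat one pattern per CA file: hash the PEM with openssl and point a "<hash>.0" symlink in /etc/ssl/certs at it, which is how OpenSSL-style trust stores look up CAs. A minimal sketch of that pattern for the minikubeCA.pem step (same paths and hash value as in the log; a standalone illustration, not minikube code):
	
		CERT=/usr/share/ca-certificates/minikubeCA.pem
		HASH=$(openssl x509 -hash -noout -in "$CERT")    # prints the subject hash, e.g. b5213941
		sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # c_rehash-style lookup symlink
	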
	I1030 19:46:14.335182  447486 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 19:46:14.339998  447486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1030 19:46:14.346145  447486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1030 19:46:14.352474  447486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1030 19:46:14.358687  447486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1030 19:46:14.364275  447486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1030 19:46:14.370038  447486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1030 19:46:14.376051  447486 kubeadm.go:392] StartCluster: {Name:old-k8s-version-516975 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-516975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:46:14.376144  447486 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1030 19:46:14.376187  447486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:46:14.423395  447486 cri.go:89] found id: ""
	I1030 19:46:14.423477  447486 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1030 19:46:14.435404  447486 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1030 19:46:14.435485  447486 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1030 19:46:14.435558  447486 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1030 19:46:14.448035  447486 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1030 19:46:14.448911  447486 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-516975" does not appear in /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 19:46:14.449557  447486 kubeconfig.go:62] /home/jenkins/minikube-integration/19883-381834/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-516975" cluster setting kubeconfig missing "old-k8s-version-516975" context setting]
	I1030 19:46:14.450419  447486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/kubeconfig: {Name:mkad9ac9beb9742fc1a420e6d4412db8b65962e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:46:14.452252  447486 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1030 19:46:14.462634  447486 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.250
	I1030 19:46:14.462676  447486 kubeadm.go:1160] stopping kube-system containers ...
	I1030 19:46:14.462693  447486 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1030 19:46:14.462750  447486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:46:14.508286  447486 cri.go:89] found id: ""
	I1030 19:46:14.508380  447486 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1030 19:46:14.527996  447486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:46:14.539011  447486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:46:14.539037  447486 kubeadm.go:157] found existing configuration files:
	
	I1030 19:46:14.539094  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 19:46:14.550159  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:46:14.550243  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:46:14.561350  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 19:46:14.571353  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:46:14.571430  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:46:14.584480  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 19:46:14.598307  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:46:14.598400  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:46:14.611632  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 19:46:14.621644  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:46:14.621705  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1030 19:46:14.632161  447486 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 19:46:14.642295  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:14.783130  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:15.694839  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:15.923329  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:16.052124  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:16.143607  447486 api_server.go:52] waiting for apiserver process to appear ...
	I1030 19:46:16.143710  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:16.643943  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:13.245727  446965 pod_ready.go:103] pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:13.702440  446965 pod_ready.go:93] pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:46:13.702472  446965 pod_ready.go:82] duration metric: took 5.390235543s for pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:13.702497  446965 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:13.948519  446965 pod_ready.go:93] pod "kube-controller-manager-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:46:13.948549  446965 pod_ready.go:82] duration metric: took 246.042214ms for pod "kube-controller-manager-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:13.948565  446965 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-qwjr9" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:13.958077  446965 pod_ready.go:93] pod "kube-proxy-qwjr9" in "kube-system" namespace has status "Ready":"True"
	I1030 19:46:13.958108  446965 pod_ready.go:82] duration metric: took 9.534813ms for pod "kube-proxy-qwjr9" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:13.958122  446965 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:13.974906  446965 pod_ready.go:93] pod "kube-scheduler-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:46:13.974931  446965 pod_ready.go:82] duration metric: took 16.800547ms for pod "kube-scheduler-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:13.974944  446965 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:15.982433  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:17.983261  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:14.440176  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:16.939769  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:16.690435  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:16.690908  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:16.690997  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:16.690904  448568 retry.go:31] will retry after 2.729556206s: waiting for machine to come up
	I1030 19:46:19.423740  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:19.424246  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:19.424271  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:19.424195  448568 retry.go:31] will retry after 2.822049517s: waiting for machine to come up
	I1030 19:46:17.144678  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:17.644772  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:18.144037  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:18.644437  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:19.144273  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:19.643801  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:20.144200  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:20.644764  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:21.143898  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:21.643960  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:20.481213  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:22.981619  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:19.438946  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:21.938706  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:22.247395  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:22.247840  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:22.247869  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:22.247813  448568 retry.go:31] will retry after 5.243633747s: waiting for machine to come up
	I1030 19:46:22.144625  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:22.644446  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:23.144207  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:23.644001  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:24.143787  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:24.644166  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:25.144397  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:25.644654  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:26.144214  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:26.644275  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:25.482032  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:27.981111  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:23.940402  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:26.439369  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:27.494630  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.495107  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has current primary IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.495146  446736 main.go:141] libmachine: (no-preload-960512) Found IP for machine: 192.168.72.132
	I1030 19:46:27.495159  446736 main.go:141] libmachine: (no-preload-960512) Reserving static IP address...
	I1030 19:46:27.495588  446736 main.go:141] libmachine: (no-preload-960512) Reserved static IP address: 192.168.72.132
	I1030 19:46:27.495612  446736 main.go:141] libmachine: (no-preload-960512) Waiting for SSH to be available...
	I1030 19:46:27.495634  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "no-preload-960512", mac: "52:54:00:71:5b:b2", ip: "192.168.72.132"} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:27.495664  446736 main.go:141] libmachine: (no-preload-960512) DBG | skip adding static IP to network mk-no-preload-960512 - found existing host DHCP lease matching {name: "no-preload-960512", mac: "52:54:00:71:5b:b2", ip: "192.168.72.132"}
	I1030 19:46:27.495678  446736 main.go:141] libmachine: (no-preload-960512) DBG | Getting to WaitForSSH function...
	I1030 19:46:27.497679  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.498051  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:27.498083  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.498231  446736 main.go:141] libmachine: (no-preload-960512) DBG | Using SSH client type: external
	I1030 19:46:27.498273  446736 main.go:141] libmachine: (no-preload-960512) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa (-rw-------)
	I1030 19:46:27.498316  446736 main.go:141] libmachine: (no-preload-960512) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.132 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 19:46:27.498344  446736 main.go:141] libmachine: (no-preload-960512) DBG | About to run SSH command:
	I1030 19:46:27.498355  446736 main.go:141] libmachine: (no-preload-960512) DBG | exit 0
	I1030 19:46:27.626476  446736 main.go:141] libmachine: (no-preload-960512) DBG | SSH cmd err, output: <nil>: 
	I1030 19:46:27.626850  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetConfigRaw
	I1030 19:46:27.627519  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetIP
	I1030 19:46:27.629913  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.630288  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:27.630326  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.630561  446736 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/config.json ...
	I1030 19:46:27.630778  446736 machine.go:93] provisionDockerMachine start ...
	I1030 19:46:27.630801  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:27.631021  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:27.633457  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.633849  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:27.633880  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.634032  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:27.634200  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:27.634393  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:27.634564  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:27.634741  446736 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:27.634940  446736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I1030 19:46:27.634952  446736 main.go:141] libmachine: About to run SSH command:
	hostname
	I1030 19:46:27.743135  446736 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1030 19:46:27.743167  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetMachineName
	I1030 19:46:27.743475  446736 buildroot.go:166] provisioning hostname "no-preload-960512"
	I1030 19:46:27.743516  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetMachineName
	I1030 19:46:27.743717  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:27.746369  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.746726  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:27.746758  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.746928  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:27.747114  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:27.747266  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:27.747380  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:27.747509  446736 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:27.747740  446736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I1030 19:46:27.747759  446736 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-960512 && echo "no-preload-960512" | sudo tee /etc/hostname
	I1030 19:46:27.872871  446736 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-960512
	
	I1030 19:46:27.872899  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:27.875533  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.875867  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:27.875908  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.876072  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:27.876274  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:27.876546  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:27.876690  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:27.876851  446736 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:27.877082  446736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I1030 19:46:27.877099  446736 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-960512' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-960512/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-960512' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 19:46:27.999551  446736 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 19:46:27.999617  446736 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 19:46:27.999654  446736 buildroot.go:174] setting up certificates
	I1030 19:46:27.999667  446736 provision.go:84] configureAuth start
	I1030 19:46:27.999689  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetMachineName
	I1030 19:46:27.999998  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetIP
	I1030 19:46:28.002874  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.003285  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.003317  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.003474  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:28.005987  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.006376  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.006418  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.006545  446736 provision.go:143] copyHostCerts
	I1030 19:46:28.006620  446736 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem, removing ...
	I1030 19:46:28.006639  446736 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 19:46:28.006707  446736 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 19:46:28.006846  446736 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem, removing ...
	I1030 19:46:28.006859  446736 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 19:46:28.006898  446736 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 19:46:28.006983  446736 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem, removing ...
	I1030 19:46:28.006993  446736 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 19:46:28.007023  446736 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 19:46:28.007102  446736 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.no-preload-960512 san=[127.0.0.1 192.168.72.132 localhost minikube no-preload-960512]
	I1030 19:46:28.317424  446736 provision.go:177] copyRemoteCerts
	I1030 19:46:28.317502  446736 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 19:46:28.317537  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:28.320089  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.320387  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.320419  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.320564  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:28.320776  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.320963  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:28.321116  446736 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa Username:docker}
	I1030 19:46:28.409344  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1030 19:46:28.434874  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 19:46:28.459903  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1030 19:46:28.486949  446736 provision.go:87] duration metric: took 487.261556ms to configureAuth
	I1030 19:46:28.486981  446736 buildroot.go:189] setting minikube options for container-runtime
	I1030 19:46:28.487219  446736 config.go:182] Loaded profile config "no-preload-960512": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:46:28.487322  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:28.489873  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.490180  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.490223  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.490349  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:28.490561  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.490719  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.490827  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:28.491003  446736 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:28.491199  446736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I1030 19:46:28.491216  446736 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 19:46:28.727045  446736 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 19:46:28.727081  446736 machine.go:96] duration metric: took 1.096287528s to provisionDockerMachine
	I1030 19:46:28.727095  446736 start.go:293] postStartSetup for "no-preload-960512" (driver="kvm2")
	I1030 19:46:28.727106  446736 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 19:46:28.727125  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:28.727460  446736 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 19:46:28.727490  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:28.730071  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.730445  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.730479  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.730652  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:28.730858  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.731010  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:28.731197  446736 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa Username:docker}
	I1030 19:46:28.817529  446736 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 19:46:28.822263  446736 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 19:46:28.822299  446736 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 19:46:28.822394  446736 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 19:46:28.822517  446736 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 19:46:28.822647  446736 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 19:46:28.832488  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:46:28.858165  446736 start.go:296] duration metric: took 131.055053ms for postStartSetup
	I1030 19:46:28.858211  446736 fix.go:56] duration metric: took 23.84652817s for fixHost
	I1030 19:46:28.858235  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:28.861136  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.861480  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.861513  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.861819  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:28.862059  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.862224  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.862373  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:28.862582  446736 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:28.862786  446736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I1030 19:46:28.862797  446736 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 19:46:28.975448  446736 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730317588.951806388
	
	I1030 19:46:28.975479  446736 fix.go:216] guest clock: 1730317588.951806388
	I1030 19:46:28.975489  446736 fix.go:229] Guest: 2024-10-30 19:46:28.951806388 +0000 UTC Remote: 2024-10-30 19:46:28.858215114 +0000 UTC m=+358.930371017 (delta=93.591274ms)
	I1030 19:46:28.975521  446736 fix.go:200] guest clock delta is within tolerance: 93.591274ms
	I1030 19:46:28.975529  446736 start.go:83] releasing machines lock for "no-preload-960512", held for 23.963879546s
	I1030 19:46:28.975555  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:28.975849  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetIP
	I1030 19:46:28.978813  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.979310  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.979341  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.979608  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:28.980197  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:28.980429  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:28.980522  446736 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 19:46:28.980567  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:28.980682  446736 ssh_runner.go:195] Run: cat /version.json
	I1030 19:46:28.980710  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:28.984058  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.984208  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.984410  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.984435  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.984582  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:28.984613  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.984636  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.984782  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.984798  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:28.984966  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:28.984974  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.985121  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:28.985187  446736 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa Username:docker}
	I1030 19:46:28.985260  446736 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa Username:docker}
	I1030 19:46:29.063734  446736 ssh_runner.go:195] Run: systemctl --version
	I1030 19:46:29.087821  446736 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 19:46:29.236289  446736 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 19:46:29.242997  446736 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 19:46:29.243088  446736 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 19:46:29.260802  446736 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 19:46:29.260836  446736 start.go:495] detecting cgroup driver to use...
	I1030 19:46:29.260930  446736 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 19:46:29.279572  446736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 19:46:29.293359  446736 docker.go:217] disabling cri-docker service (if available) ...
	I1030 19:46:29.293423  446736 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 19:46:29.306417  446736 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 19:46:29.319617  446736 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 19:46:29.440023  446736 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 19:46:29.585541  446736 docker.go:233] disabling docker service ...
	I1030 19:46:29.585630  446736 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 19:46:29.600459  446736 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 19:46:29.613611  446736 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 19:46:29.752666  446736 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 19:46:29.880152  446736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 19:46:29.893912  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 19:46:29.913099  446736 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1030 19:46:29.913160  446736 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:29.923800  446736 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 19:46:29.923882  446736 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:29.934880  446736 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:29.946088  446736 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:29.956644  446736 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 19:46:29.967199  446736 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:29.978863  446736 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:29.996225  446736 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:30.006604  446736 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 19:46:30.015954  446736 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 19:46:30.016017  446736 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 19:46:30.029194  446736 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 19:46:30.041316  446736 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:46:30.161438  446736 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1030 19:46:30.257137  446736 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 19:46:30.257209  446736 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 19:46:30.261981  446736 start.go:563] Will wait 60s for crictl version
	I1030 19:46:30.262052  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:30.266275  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 19:46:30.305128  446736 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1030 19:46:30.305228  446736 ssh_runner.go:195] Run: crio --version
	I1030 19:46:30.335445  446736 ssh_runner.go:195] Run: crio --version
	I1030 19:46:30.367026  446736 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
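	[Editor's note] The block above shows minikube pointing crictl at the CRI-O socket, rewriting pause_image/cgroup_manager in /etc/crio/crio.conf.d/02-crio.conf, restarting crio, and confirming the runtime with crictl version. The following is a minimal standalone Go sketch of those same steps run locally with os/exec; it is not minikube's ssh_runner code, and it assumes a root shell on a host that actually has CRI-O, sed, systemctl, and crictl installed. Paths and the pause image tag are copied from the log.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Point crictl at the CRI-O socket, as in the `tee /etc/crictl.yaml` step above.
		crictlYAML := "runtime-endpoint: unix:///var/run/crio/crio.sock\n"
		if err := os.WriteFile("/etc/crictl.yaml", []byte(crictlYAML), 0644); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}

		// Rewrite pause_image and cgroup_manager in the drop-in config, mirroring the sed calls in the log.
		seds := []string{
			`s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|`,
			`s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`,
		}
		for _, expr := range seds {
			if out, err := exec.Command("sed", "-i", expr, "/etc/crio/crio.conf.d/02-crio.conf").CombinedOutput(); err != nil {
				fmt.Fprintf(os.Stderr, "sed failed: %v: %s\n", err, out)
				os.Exit(1)
			}
		}

		// Restart CRI-O so the new settings take effect, then confirm the runtime
		// answers on the socket (the log's `crictl version` check).
		for _, args := range [][]string{
			{"systemctl", "restart", "crio"},
			{"crictl", "version"},
		} {
			cmd := exec.Command(args[0], args[1:]...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Fprintf(os.Stderr, "%v failed: %v\n", args, err)
				os.Exit(1)
			}
		}
	}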
	I1030 19:46:27.143768  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:27.644294  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:28.143819  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:28.643783  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:29.144405  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:29.643941  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:30.144680  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:30.644787  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:31.143873  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:31.643857  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:29.982162  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:32.480878  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:28.939126  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:30.939780  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:30.368355  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetIP
	I1030 19:46:30.371260  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:30.371651  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:30.371679  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:30.371922  446736 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1030 19:46:30.376282  446736 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:46:30.389078  446736 kubeadm.go:883] updating cluster {Name:no-preload-960512 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:no-preload-960512 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.132 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1030 19:46:30.389193  446736 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 19:46:30.389228  446736 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:46:30.423375  446736 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1030 19:46:30.423402  446736 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1030 19:46:30.423508  446736 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:30.423511  446736 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1030 19:46:30.423562  446736 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1030 19:46:30.423578  446736 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1030 19:46:30.423595  446736 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1030 19:46:30.423536  446736 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1030 19:46:30.423511  446736 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1030 19:46:30.423634  446736 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1030 19:46:30.424979  446736 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1030 19:46:30.424988  446736 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1030 19:46:30.424996  446736 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1030 19:46:30.424987  446736 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:30.425021  446736 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1030 19:46:30.425036  446736 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1030 19:46:30.425029  446736 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1030 19:46:30.425061  446736 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1030 19:46:30.612665  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1030 19:46:30.618602  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1030 19:46:30.636563  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1030 19:46:30.680808  446736 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1030 19:46:30.680858  446736 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1030 19:46:30.680911  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:30.749318  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1030 19:46:30.750405  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1030 19:46:30.751514  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1030 19:46:30.752746  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1030 19:46:30.768614  446736 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1030 19:46:30.768663  446736 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1030 19:46:30.768714  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:30.768723  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1030 19:46:30.881778  446736 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1030 19:46:30.881811  446736 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1030 19:46:30.881821  446736 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1030 19:46:30.881844  446736 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1030 19:46:30.881862  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:30.881883  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:30.884827  446736 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1030 19:46:30.884861  446736 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1030 19:46:30.884901  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:30.891812  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1030 19:46:30.891882  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1030 19:46:30.891907  446736 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1030 19:46:30.891940  446736 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1030 19:46:30.891981  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:30.891986  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1030 19:46:30.892142  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1030 19:46:30.893781  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1030 19:46:30.992346  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1030 19:46:30.992372  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1030 19:46:30.992404  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1030 19:46:30.995602  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1030 19:46:30.995730  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1030 19:46:30.995786  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1030 19:46:31.123892  446736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1030 19:46:31.123996  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1030 19:46:31.124018  446736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1030 19:46:31.132177  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1030 19:46:31.132209  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1030 19:46:31.132311  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1030 19:46:31.132335  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1030 19:46:31.220011  446736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1030 19:46:31.220043  446736 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1030 19:46:31.220100  446736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1030 19:46:31.220224  446736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1030 19:46:31.220329  446736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1030 19:46:31.262583  446736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1030 19:46:31.262685  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1030 19:46:31.262698  446736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1030 19:46:31.269015  446736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1030 19:46:31.269117  446736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1030 19:46:31.269710  446736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1030 19:46:31.269793  446736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1030 19:46:32.667341  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:33.216743  446736 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (1.99661544s)
	I1030 19:46:33.216787  446736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1030 19:46:33.216787  446736 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.996433716s)
	I1030 19:46:33.216820  446736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1030 19:46:33.216829  446736 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1030 19:46:33.216840  446736 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (1.95412356s)
	I1030 19:46:33.216872  446736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1030 19:46:33.216884  446736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1030 19:46:33.216925  446736 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2: (1.954216284s)
	I1030 19:46:33.216964  446736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1030 19:46:33.216989  446736 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (1.947854262s)
	I1030 19:46:33.217014  446736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1030 19:46:33.217027  446736 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2: (1.947220506s)
	I1030 19:46:33.217040  446736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1030 19:46:33.217059  446736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1030 19:46:33.217140  446736 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1030 19:46:33.217178  446736 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:33.217222  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:32.144229  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:32.644079  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:33.144028  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:33.643950  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:34.143888  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:34.643861  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:35.144210  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:35.644677  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:36.144712  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:36.644549  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:34.481488  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:36.982429  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:33.438659  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:35.439072  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:37.440028  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:35.577178  446736 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (2.360267806s)
	I1030 19:46:35.577219  446736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1030 19:46:35.577227  446736 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2: (2.360144583s)
	I1030 19:46:35.577243  446736 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1030 19:46:35.577252  446736 ssh_runner.go:235] Completed: which crictl: (2.360017291s)
	I1030 19:46:35.577259  446736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1030 19:46:35.577305  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:35.577309  446736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1030 19:46:35.615490  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:39.492071  446736 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.914649003s)
	I1030 19:46:39.492116  446736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1030 19:46:39.492142  446736 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.876615301s)
	I1030 19:46:39.492211  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:39.492148  446736 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1030 19:46:39.492295  446736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1030 19:46:39.535258  446736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1030 19:46:39.535417  446736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1030 19:46:37.144681  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:37.643833  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:38.143783  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:38.644359  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:39.144745  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:39.644625  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:40.144535  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:40.643881  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:41.144754  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:41.644070  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:39.302627  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:41.480981  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:39.940272  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:42.439827  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:41.566095  446736 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.073767908s)
	I1030 19:46:41.566140  446736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1030 19:46:41.566167  446736 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1030 19:46:41.566169  446736 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.030723752s)
	I1030 19:46:41.566210  446736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1030 19:46:41.566224  446736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1030 19:46:43.628473  446736 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.06223599s)
	I1030 19:46:43.628500  446736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1030 19:46:43.628525  446736 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1030 19:46:43.628570  446736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1030 19:46:42.144672  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:42.644533  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:43.144320  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:43.644574  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:44.144465  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:44.644428  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:45.143785  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:45.643767  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:46.144467  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:46.644496  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:43.481495  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:45.481844  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:47.982318  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:44.940061  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:47.439131  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:45.079808  446736 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.451207821s)
	I1030 19:46:45.079843  446736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1030 19:46:45.079870  446736 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1030 19:46:45.079918  446736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1030 19:46:46.026472  446736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1030 19:46:46.026538  446736 cache_images.go:123] Successfully loaded all cached images
	I1030 19:46:46.026547  446736 cache_images.go:92] duration metric: took 15.603128567s to LoadCachedImages
	I1030 19:46:46.026562  446736 kubeadm.go:934] updating node { 192.168.72.132 8443 v1.31.2 crio true true} ...
	I1030 19:46:46.026722  446736 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-960512 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.132
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-960512 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1030 19:46:46.026819  446736 ssh_runner.go:195] Run: crio config
	I1030 19:46:46.080342  446736 cni.go:84] Creating CNI manager for ""
	I1030 19:46:46.080367  446736 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:46:46.080376  446736 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1030 19:46:46.080399  446736 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.132 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-960512 NodeName:no-preload-960512 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.132"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.132 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1030 19:46:46.080574  446736 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.132
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-960512"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.132"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.132"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1030 19:46:46.080645  446736 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1030 19:46:46.091323  446736 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 19:46:46.091400  446736 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1030 19:46:46.100320  446736 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1030 19:46:46.117369  446736 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 19:46:46.133667  446736 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
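	[Editor's note] The kubeadm config printed earlier in the log is one file containing four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---", written to /var/tmp/minikube/kubeadm.yaml.new above and later fed to `kubeadm init phase ... --config`. The stdlib-only Go sketch below (not minikube code) splits such a file and lists each document's kind, which can help when checking what was actually handed to kubeadm; the path is the one shown in the log and is an assumption about where you run it.

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Documents are separated by lines that are exactly "---".
		for i, doc := range strings.Split(string(data), "\n---") {
			kind := "(unknown)"
			for _, line := range strings.Split(doc, "\n") {
				trimmed := strings.TrimSpace(line)
				if strings.HasPrefix(trimmed, "kind:") {
					kind = strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:"))
					break
				}
			}
			fmt.Printf("document %d: kind=%s\n", i+1, kind)
		}
	}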
	I1030 19:46:46.157251  446736 ssh_runner.go:195] Run: grep 192.168.72.132	control-plane.minikube.internal$ /etc/hosts
	I1030 19:46:46.161543  446736 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.132	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:46:46.173451  446736 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:46:46.303532  446736 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:46:46.321855  446736 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512 for IP: 192.168.72.132
	I1030 19:46:46.321883  446736 certs.go:194] generating shared ca certs ...
	I1030 19:46:46.321905  446736 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:46:46.322108  446736 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 19:46:46.322171  446736 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 19:46:46.322189  446736 certs.go:256] generating profile certs ...
	I1030 19:46:46.322294  446736 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/client.key
	I1030 19:46:46.322381  446736 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/apiserver.key.378d6029
	I1030 19:46:46.322436  446736 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/proxy-client.key
	I1030 19:46:46.322609  446736 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem (1338 bytes)
	W1030 19:46:46.322649  446736 certs.go:480] ignoring /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144_empty.pem, impossibly tiny 0 bytes
	I1030 19:46:46.322661  446736 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 19:46:46.322692  446736 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 19:46:46.322727  446736 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 19:46:46.322756  446736 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 19:46:46.322812  446736 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:46:46.323679  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 19:46:46.362339  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 19:46:46.396270  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 19:46:46.443482  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 19:46:46.468142  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1030 19:46:46.507418  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1030 19:46:46.534091  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 19:46:46.557105  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1030 19:46:46.579880  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 19:46:46.602665  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem --> /usr/share/ca-certificates/389144.pem (1338 bytes)
	I1030 19:46:46.625853  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /usr/share/ca-certificates/3891442.pem (1708 bytes)
	I1030 19:46:46.651685  446736 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1030 19:46:46.670898  446736 ssh_runner.go:195] Run: openssl version
	I1030 19:46:46.677083  446736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3891442.pem && ln -fs /usr/share/ca-certificates/3891442.pem /etc/ssl/certs/3891442.pem"
	I1030 19:46:46.688814  446736 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3891442.pem
	I1030 19:46:46.693349  446736 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 19:46:46.693399  446736 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3891442.pem
	I1030 19:46:46.699221  446736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3891442.pem /etc/ssl/certs/3ec20f2e.0"
	I1030 19:46:46.710200  446736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 19:46:46.721001  446736 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:46:46.725283  446736 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:46:46.725343  446736 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:46:46.730798  446736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 19:46:46.741915  446736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/389144.pem && ln -fs /usr/share/ca-certificates/389144.pem /etc/ssl/certs/389144.pem"
	I1030 19:46:46.752767  446736 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/389144.pem
	I1030 19:46:46.757109  446736 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 19:46:46.757150  446736 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/389144.pem
	I1030 19:46:46.762844  446736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/389144.pem /etc/ssl/certs/51391683.0"
	I1030 19:46:46.773796  446736 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 19:46:46.778156  446736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1030 19:46:46.784099  446736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1030 19:46:46.789960  446736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1030 19:46:46.796056  446736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1030 19:46:46.801880  446736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1030 19:46:46.807680  446736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
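	[Editor's note] The repeated `openssl x509 -noout -in <cert> -checkend 86400` calls above ask whether each control-plane certificate will still be valid 24 hours from now. For readers without openssl handy, here is the same check as a small Go sketch using crypto/x509; it is an editorial illustration, not minikube code, and the example path is just one of the certs named in the log.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if soon {
			fmt.Println("certificate expires within 24h (openssl -checkend would exit non-zero)")
		} else {
			fmt.Println("certificate valid for at least another 24h")
		}
	}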
	I1030 19:46:46.813574  446736 kubeadm.go:392] StartCluster: {Name:no-preload-960512 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.2 ClusterName:no-preload-960512 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.132 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:46:46.813694  446736 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1030 19:46:46.813735  446736 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:46:46.856225  446736 cri.go:89] found id: ""
	I1030 19:46:46.856309  446736 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1030 19:46:46.866696  446736 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1030 19:46:46.866721  446736 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1030 19:46:46.866774  446736 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1030 19:46:46.876622  446736 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1030 19:46:46.877777  446736 kubeconfig.go:125] found "no-preload-960512" server: "https://192.168.72.132:8443"
	I1030 19:46:46.880116  446736 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1030 19:46:46.889710  446736 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.132
	I1030 19:46:46.889743  446736 kubeadm.go:1160] stopping kube-system containers ...
	I1030 19:46:46.889761  446736 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1030 19:46:46.889837  446736 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:46:46.927109  446736 cri.go:89] found id: ""
	I1030 19:46:46.927177  446736 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1030 19:46:46.944519  446736 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:46:46.954607  446736 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:46:46.954626  446736 kubeadm.go:157] found existing configuration files:
	
	I1030 19:46:46.954669  446736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 19:46:46.963987  446736 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:46:46.964086  446736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:46:46.973787  446736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 19:46:46.983447  446736 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:46:46.983496  446736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:46:46.993101  446736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 19:46:47.003713  446736 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:46:47.003773  446736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:46:47.013162  446736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 19:46:47.022411  446736 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:46:47.022479  446736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1030 19:46:47.031878  446736 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 19:46:47.041616  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:47.156846  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:48.637250  446736 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.480364831s)
	I1030 19:46:48.637284  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:48.836676  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:48.908664  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:48.987298  446736 api_server.go:52] waiting for apiserver process to appear ...
	I1030 19:46:48.987411  446736 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:49.488330  446736 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:47.143932  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:47.644228  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:48.144124  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:48.643923  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:49.144466  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:49.643968  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:50.144811  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:50.643785  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:51.144372  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:51.644019  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:49.983127  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:52.482250  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:49.939257  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:52.439840  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:49.988463  446736 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:50.024092  446736 api_server.go:72] duration metric: took 1.036791371s to wait for apiserver process to appear ...
	I1030 19:46:50.024127  446736 api_server.go:88] waiting for apiserver healthz status ...
	I1030 19:46:50.024155  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:50.024711  446736 api_server.go:269] stopped: https://192.168.72.132:8443/healthz: Get "https://192.168.72.132:8443/healthz": dial tcp 192.168.72.132:8443: connect: connection refused
	I1030 19:46:50.524543  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:52.757497  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1030 19:46:52.757537  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1030 19:46:52.757558  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:52.847598  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:46:52.847638  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:46:53.024885  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:53.030717  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:46:53.030749  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:46:53.524384  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:53.531420  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:46:53.531459  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:46:54.025006  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:54.030512  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:46:54.030545  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:46:54.525157  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:54.529426  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:46:54.529453  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:46:55.025276  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:55.029608  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:46:55.029634  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:46:55.525041  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:55.529303  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:46:55.529339  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:46:56.024906  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:56.029520  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 200:
	ok
	I1030 19:46:56.035579  446736 api_server.go:141] control plane version: v1.31.2
	I1030 19:46:56.035609  446736 api_server.go:131] duration metric: took 6.011468992s to wait for apiserver health ...
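The block of /healthz probes above is the apiserver health wait: the tool keeps polling https://192.168.72.132:8443/healthz until the poststarthooks finish and the endpoint returns 200 "ok". A minimal, self-contained Go sketch of that kind of loop follows; the insecure TLS client and the 500ms retry cadence are illustrative assumptions, not the actual api_server.go implementation.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // pollHealthz probes the apiserver's /healthz endpoint until it returns
    // HTTP 200 or the deadline expires. TLS verification is skipped only to
    // keep the sketch self-contained.
    func pollHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   5 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // endpoint answered "ok"
    			}
    			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond) // roughly the retry interval seen in the log
    	}
    	return fmt.Errorf("apiserver at %s never became healthy", url)
    }

    func main() {
    	if err := pollHealthz("https://192.168.72.132:8443/healthz", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }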
	I1030 19:46:56.035619  446736 cni.go:84] Creating CNI manager for ""
	I1030 19:46:56.035625  446736 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:46:56.037524  446736 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1030 19:46:52.144732  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:52.644528  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:53.144074  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:53.643889  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:54.143976  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:54.644535  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:55.144783  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:55.644114  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:56.144728  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:56.643846  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:56.038963  446736 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1030 19:46:56.050330  446736 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
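The scp above installs the bridge CNI config announced by the earlier "Configuring bridge CNI" step. The 496-byte conflist itself is not reproduced in the log, so the sketch below writes a generic bridge/host-local/portmap conflist as a stand-in; the plugin set, bridge name, and 10.244.0.0/16 subnet are assumptions for illustration only.

    package main

    import "os"

    // Drops a minimal bridge CNI config into /etc/cni/net.d. The JSON is a
    // generic bridge + host-local + portmap example; the exact conflist the
    // tool installs is not shown in the log, so these field values are
    // illustrative assumptions.
    const conflist = `{
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }
    `

    func main() {
    	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		panic(err)
    	}
    }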
	I1030 19:46:56.069509  446736 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 19:46:56.079237  446736 system_pods.go:59] 8 kube-system pods found
	I1030 19:46:56.079268  446736 system_pods.go:61] "coredns-7c65d6cfc9-6cdl4" [95a0a01a-b0ea-4e0c-ac44-b7f7a486e4e0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1030 19:46:56.079275  446736 system_pods.go:61] "etcd-no-preload-960512" [481cc4bc-fe2e-48fd-a486-574daf879096] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1030 19:46:56.079283  446736 system_pods.go:61] "kube-apiserver-no-preload-960512" [3662715f-dd2b-4522-aed3-3857e1104b8c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1030 19:46:56.079288  446736 system_pods.go:61] "kube-controller-manager-no-preload-960512" [cb6045a8-1703-4693-90bf-81c7c990cb38] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1030 19:46:56.079294  446736 system_pods.go:61] "kube-proxy-fxqqc" [58db3fab-21e3-41b7-99f9-46ba3081db97] Running
	I1030 19:46:56.079299  446736 system_pods.go:61] "kube-scheduler-no-preload-960512" [6e293c25-ba96-4213-b107-236a1b828918] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1030 19:46:56.079304  446736 system_pods.go:61] "metrics-server-6867b74b74-72bb5" [7734d879-b974-42fd-9610-7e81ee6cbc13] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:46:56.079307  446736 system_pods.go:61] "storage-provisioner" [d4637a77-26ab-4013-a705-08317c00dd3b] Running
	I1030 19:46:56.079313  446736 system_pods.go:74] duration metric: took 9.785027ms to wait for pod list to return data ...
	I1030 19:46:56.079327  446736 node_conditions.go:102] verifying NodePressure condition ...
	I1030 19:46:56.082617  446736 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 19:46:56.082644  446736 node_conditions.go:123] node cpu capacity is 2
	I1030 19:46:56.082658  446736 node_conditions.go:105] duration metric: took 3.325744ms to run NodePressure ...
	I1030 19:46:56.082680  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:56.353123  446736 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1030 19:46:56.357714  446736 kubeadm.go:739] kubelet initialised
	I1030 19:46:56.357740  446736 kubeadm.go:740] duration metric: took 4.581883ms waiting for restarted kubelet to initialise ...
	I1030 19:46:56.357755  446736 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:46:56.362687  446736 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:56.367124  446736 pod_ready.go:98] node "no-preload-960512" hosting pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.367153  446736 pod_ready.go:82] duration metric: took 4.443081ms for pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace to be "Ready" ...
	E1030 19:46:56.367165  446736 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-960512" hosting pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.367180  446736 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:56.371747  446736 pod_ready.go:98] node "no-preload-960512" hosting pod "etcd-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.371774  446736 pod_ready.go:82] duration metric: took 4.580967ms for pod "etcd-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	E1030 19:46:56.371785  446736 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-960512" hosting pod "etcd-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.371794  446736 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:56.375687  446736 pod_ready.go:98] node "no-preload-960512" hosting pod "kube-apiserver-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.375704  446736 pod_ready.go:82] duration metric: took 3.901023ms for pod "kube-apiserver-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	E1030 19:46:56.375712  446736 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-960512" hosting pod "kube-apiserver-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.375718  446736 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:56.472995  446736 pod_ready.go:98] node "no-preload-960512" hosting pod "kube-controller-manager-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.473036  446736 pod_ready.go:82] duration metric: took 97.300344ms for pod "kube-controller-manager-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	E1030 19:46:56.473047  446736 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-960512" hosting pod "kube-controller-manager-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.473056  446736 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-fxqqc" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:56.873717  446736 pod_ready.go:98] node "no-preload-960512" hosting pod "kube-proxy-fxqqc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.873749  446736 pod_ready.go:82] duration metric: took 400.680615ms for pod "kube-proxy-fxqqc" in "kube-system" namespace to be "Ready" ...
	E1030 19:46:56.873759  446736 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-960512" hosting pod "kube-proxy-fxqqc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.873765  446736 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:57.273361  446736 pod_ready.go:98] node "no-preload-960512" hosting pod "kube-scheduler-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:57.273392  446736 pod_ready.go:82] duration metric: took 399.61983ms for pod "kube-scheduler-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	E1030 19:46:57.273405  446736 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-960512" hosting pod "kube-scheduler-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:57.273415  446736 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:57.674201  446736 pod_ready.go:98] node "no-preload-960512" hosting pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:57.674236  446736 pod_ready.go:82] duration metric: took 400.809663ms for pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace to be "Ready" ...
	E1030 19:46:57.674251  446736 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-960512" hosting pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:57.674260  446736 pod_ready.go:39] duration metric: took 1.31649331s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:46:57.674285  446736 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1030 19:46:57.687464  446736 ops.go:34] apiserver oom_adj: -16
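The oom_adj value above comes from the `cat /proc/$(pgrep kube-apiserver)/oom_adj` command two lines earlier; -16 tells the kernel to strongly prefer other processes when it has to OOM-kill something. A small Go equivalent of that check (assuming a single kube-apiserver process on the host) is:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // Resolve the kube-apiserver pid with pgrep, then read its oom_adj,
    // mirroring the shell one-liner in the log. Assumes exactly one
    // kube-apiserver process is running.
    func main() {
    	pid, err := exec.Command("pgrep", "kube-apiserver").Output()
    	if err != nil {
    		fmt.Println("kube-apiserver not running:", err)
    		return
    	}
    	data, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
    	if err != nil {
    		fmt.Println("read oom_adj:", err)
    		return
    	}
    	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(data)))
    }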
	I1030 19:46:57.687489  446736 kubeadm.go:597] duration metric: took 10.820761471s to restartPrimaryControlPlane
	I1030 19:46:57.687498  446736 kubeadm.go:394] duration metric: took 10.873934509s to StartCluster
	I1030 19:46:57.687514  446736 settings.go:142] acquiring lock: {Name:mk3778f95b8c1775512fd8244c173f81fde2f8a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:46:57.687586  446736 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 19:46:57.689255  446736 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/kubeconfig: {Name:mkad9ac9beb9742fc1a420e6d4412db8b65962e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:46:57.689496  446736 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.132 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 19:46:57.689574  446736 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1030 19:46:57.689683  446736 addons.go:69] Setting storage-provisioner=true in profile "no-preload-960512"
	I1030 19:46:57.689706  446736 addons.go:234] Setting addon storage-provisioner=true in "no-preload-960512"
	I1030 19:46:57.689708  446736 addons.go:69] Setting metrics-server=true in profile "no-preload-960512"
	W1030 19:46:57.689719  446736 addons.go:243] addon storage-provisioner should already be in state true
	I1030 19:46:57.689727  446736 addons.go:234] Setting addon metrics-server=true in "no-preload-960512"
	W1030 19:46:57.689737  446736 addons.go:243] addon metrics-server should already be in state true
	I1030 19:46:57.689755  446736 host.go:66] Checking if "no-preload-960512" exists ...
	I1030 19:46:57.689791  446736 config.go:182] Loaded profile config "no-preload-960512": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:46:57.689761  446736 host.go:66] Checking if "no-preload-960512" exists ...
	I1030 19:46:57.689707  446736 addons.go:69] Setting default-storageclass=true in profile "no-preload-960512"
	I1030 19:46:57.689912  446736 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-960512"
	I1030 19:46:57.690245  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:57.690258  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:57.690264  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:57.690297  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:57.690303  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:57.690322  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:57.691365  446736 out.go:177] * Verifying Kubernetes components...
	I1030 19:46:57.692941  446736 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:46:57.727794  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43995
	I1030 19:46:57.727877  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44207
	I1030 19:46:57.728127  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45127
	I1030 19:46:57.728276  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:57.728414  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:57.728517  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:57.728861  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:57.728879  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:57.729032  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:57.729053  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:57.729056  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:57.729064  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:57.729350  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:57.729429  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:57.729452  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:57.730008  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:57.730051  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:57.730124  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:57.730362  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetState
	I1030 19:46:57.731104  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:57.734295  446736 addons.go:234] Setting addon default-storageclass=true in "no-preload-960512"
	W1030 19:46:57.734316  446736 addons.go:243] addon default-storageclass should already be in state true
	I1030 19:46:57.734349  446736 host.go:66] Checking if "no-preload-960512" exists ...
	I1030 19:46:57.734742  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:57.734810  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:57.747185  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35811
	I1030 19:46:57.747680  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:57.748340  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:57.748360  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:57.748795  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:57.749029  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetState
	I1030 19:46:57.749722  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34849
	I1030 19:46:57.750318  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:57.754616  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45369
	I1030 19:46:57.754666  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:57.755024  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:57.755052  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:57.755555  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:57.755672  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:57.757159  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetState
	I1030 19:46:57.757166  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:57.757184  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:57.757504  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:57.757804  446736 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:57.758045  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:57.758089  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:57.759001  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:57.759300  446736 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 19:46:57.759313  446736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1030 19:46:57.759327  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:57.762134  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:57.762557  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:57.762582  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:57.762740  446736 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1030 19:46:54.485910  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:56.981415  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:54.939168  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:56.940263  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:57.762828  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:57.763037  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:57.763192  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:57.763344  446736 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa Username:docker}
	I1030 19:46:57.763936  446736 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1030 19:46:57.763953  446736 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1030 19:46:57.763970  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:57.766410  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:57.766771  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:57.766795  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:57.767034  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:57.767212  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:57.767385  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:57.767522  446736 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa Username:docker}
	I1030 19:46:57.776037  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45193
	I1030 19:46:57.776386  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:57.776846  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:57.776864  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:57.777184  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:57.777339  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetState
	I1030 19:46:57.778829  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:57.779118  446736 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1030 19:46:57.779138  446736 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1030 19:46:57.779156  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:57.781325  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:57.781590  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:57.781615  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:57.781755  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:57.781895  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:57.781995  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:57.782088  446736 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa Username:docker}
	I1030 19:46:57.895549  446736 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:46:57.913030  446736 node_ready.go:35] waiting up to 6m0s for node "no-preload-960512" to be "Ready" ...
	I1030 19:46:58.008228  446736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 19:46:58.009206  446736 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1030 19:46:58.009222  446736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1030 19:46:58.034347  446736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1030 19:46:58.036620  446736 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1030 19:46:58.036646  446736 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1030 19:46:58.140489  446736 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1030 19:46:58.140522  446736 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1030 19:46:58.181145  446736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1030 19:46:59.403246  446736 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.368855241s)
	I1030 19:46:59.403317  446736 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.395049308s)
	I1030 19:46:59.403331  446736 main.go:141] libmachine: Making call to close driver server
	I1030 19:46:59.403340  446736 main.go:141] libmachine: (no-preload-960512) Calling .Close
	I1030 19:46:59.403356  446736 main.go:141] libmachine: Making call to close driver server
	I1030 19:46:59.403369  446736 main.go:141] libmachine: (no-preload-960512) Calling .Close
	I1030 19:46:59.403657  446736 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:46:59.403673  446736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:46:59.403681  446736 main.go:141] libmachine: Making call to close driver server
	I1030 19:46:59.403688  446736 main.go:141] libmachine: (no-preload-960512) Calling .Close
	I1030 19:46:59.403766  446736 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:46:59.403770  446736 main.go:141] libmachine: (no-preload-960512) DBG | Closing plugin on server side
	I1030 19:46:59.403778  446736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:46:59.403790  446736 main.go:141] libmachine: Making call to close driver server
	I1030 19:46:59.403796  446736 main.go:141] libmachine: (no-preload-960512) Calling .Close
	I1030 19:46:59.403939  446736 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:46:59.403954  446736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:46:59.404023  446736 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:46:59.404059  446736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:46:59.404071  446736 main.go:141] libmachine: (no-preload-960512) DBG | Closing plugin on server side
	I1030 19:46:59.411114  446736 main.go:141] libmachine: Making call to close driver server
	I1030 19:46:59.411136  446736 main.go:141] libmachine: (no-preload-960512) Calling .Close
	I1030 19:46:59.411365  446736 main.go:141] libmachine: (no-preload-960512) DBG | Closing plugin on server side
	I1030 19:46:59.411421  446736 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:46:59.411437  446736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:46:59.513065  446736 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.33186887s)
	I1030 19:46:59.513150  446736 main.go:141] libmachine: Making call to close driver server
	I1030 19:46:59.513168  446736 main.go:141] libmachine: (no-preload-960512) Calling .Close
	I1030 19:46:59.513455  446736 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:46:59.513481  446736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:46:59.513486  446736 main.go:141] libmachine: (no-preload-960512) DBG | Closing plugin on server side
	I1030 19:46:59.513491  446736 main.go:141] libmachine: Making call to close driver server
	I1030 19:46:59.513537  446736 main.go:141] libmachine: (no-preload-960512) Calling .Close
	I1030 19:46:59.513769  446736 main.go:141] libmachine: (no-preload-960512) DBG | Closing plugin on server side
	I1030 19:46:59.513797  446736 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:46:59.513809  446736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:46:59.513826  446736 addons.go:475] Verifying addon metrics-server=true in "no-preload-960512"
	I1030 19:46:59.516354  446736 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1030 19:46:59.517886  446736 addons.go:510] duration metric: took 1.828322965s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1030 19:46:59.916839  446736 node_ready.go:53] node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:57.143829  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:57.644245  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:58.144327  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:58.644684  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:59.144712  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:59.644799  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:00.144222  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:00.644111  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:01.144268  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:01.644631  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:58.982694  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:00.984014  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:59.439638  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:01.939460  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:02.416750  446736 node_ready.go:53] node "no-preload-960512" has status "Ready":"False"
	I1030 19:47:03.416443  446736 node_ready.go:49] node "no-preload-960512" has status "Ready":"True"
	I1030 19:47:03.416469  446736 node_ready.go:38] duration metric: took 5.503404181s for node "no-preload-960512" to be "Ready" ...
	I1030 19:47:03.416479  446736 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:47:03.422219  446736 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:02.143881  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:02.644208  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:03.144411  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:03.643948  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:04.144028  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:04.644179  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:05.144791  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:05.643983  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:06.143859  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:06.644436  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:03.481239  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:05.481271  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:07.482108  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:04.439288  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:06.439454  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:05.428589  446736 pod_ready.go:103] pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:07.430975  446736 pod_ready.go:103] pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:09.928214  446736 pod_ready.go:103] pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:07.144765  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:07.644280  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:08.144381  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:08.644099  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:09.144129  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:09.643864  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:10.144105  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:10.643752  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:11.144135  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:11.644172  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:09.982150  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:12.481265  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:08.939357  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:10.940087  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:10.430572  446736 pod_ready.go:93] pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace has status "Ready":"True"
	I1030 19:47:10.430598  446736 pod_ready.go:82] duration metric: took 7.008352985s for pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.430610  446736 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.436673  446736 pod_ready.go:93] pod "etcd-no-preload-960512" in "kube-system" namespace has status "Ready":"True"
	I1030 19:47:10.436699  446736 pod_ready.go:82] duration metric: took 6.082545ms for pod "etcd-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.436711  446736 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.442262  446736 pod_ready.go:93] pod "kube-apiserver-no-preload-960512" in "kube-system" namespace has status "Ready":"True"
	I1030 19:47:10.442282  446736 pod_ready.go:82] duration metric: took 5.563816ms for pod "kube-apiserver-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.442292  446736 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.446170  446736 pod_ready.go:93] pod "kube-controller-manager-no-preload-960512" in "kube-system" namespace has status "Ready":"True"
	I1030 19:47:10.446189  446736 pod_ready.go:82] duration metric: took 3.890123ms for pod "kube-controller-manager-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.446198  446736 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fxqqc" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.450190  446736 pod_ready.go:93] pod "kube-proxy-fxqqc" in "kube-system" namespace has status "Ready":"True"
	I1030 19:47:10.450216  446736 pod_ready.go:82] duration metric: took 4.011125ms for pod "kube-proxy-fxqqc" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.450226  446736 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.826537  446736 pod_ready.go:93] pod "kube-scheduler-no-preload-960512" in "kube-system" namespace has status "Ready":"True"
	I1030 19:47:10.826572  446736 pod_ready.go:82] duration metric: took 376.338504ms for pod "kube-scheduler-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.826587  446736 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:12.834756  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:12.144391  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:12.644441  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:13.143916  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:13.644779  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:14.144680  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:14.644634  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:15.144050  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:15.644738  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:16.143957  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:16.144037  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:16.184282  447486 cri.go:89] found id: ""
	I1030 19:47:16.184310  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.184320  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:16.184327  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:16.184403  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:16.225359  447486 cri.go:89] found id: ""
	I1030 19:47:16.225388  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.225397  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:16.225403  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:16.225471  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:16.260591  447486 cri.go:89] found id: ""
	I1030 19:47:16.260625  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.260635  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:16.260641  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:16.260695  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:16.299562  447486 cri.go:89] found id: ""
	I1030 19:47:16.299591  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.299602  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:16.299609  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:16.299682  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:16.334753  447486 cri.go:89] found id: ""
	I1030 19:47:16.334781  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.334789  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:16.334795  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:16.334877  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:16.371588  447486 cri.go:89] found id: ""
	I1030 19:47:16.371619  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.371628  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:16.371634  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:16.371689  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:16.406668  447486 cri.go:89] found id: ""
	I1030 19:47:16.406699  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.406710  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:16.406718  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:16.406786  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:16.443050  447486 cri.go:89] found id: ""
	I1030 19:47:16.443081  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.443096  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:16.443109  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:16.443125  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:16.492898  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:16.492936  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:16.506310  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:16.506343  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:16.637629  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:16.637660  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:16.637677  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:16.709581  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:16.709621  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:14.481660  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:16.981807  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:13.438777  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:15.439457  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:17.939606  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:15.335280  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:17.833216  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:19.833320  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:19.253501  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:19.267200  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:19.267276  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:19.303608  447486 cri.go:89] found id: ""
	I1030 19:47:19.303641  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.303651  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:19.303658  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:19.303711  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:19.341311  447486 cri.go:89] found id: ""
	I1030 19:47:19.341343  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.341354  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:19.341363  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:19.341427  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:19.376949  447486 cri.go:89] found id: ""
	I1030 19:47:19.376977  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.376987  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:19.376996  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:19.377075  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:19.414164  447486 cri.go:89] found id: ""
	I1030 19:47:19.414197  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.414209  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:19.414218  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:19.414308  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:19.450637  447486 cri.go:89] found id: ""
	I1030 19:47:19.450671  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.450683  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:19.450692  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:19.450761  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:19.485315  447486 cri.go:89] found id: ""
	I1030 19:47:19.485345  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.485355  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:19.485364  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:19.485427  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:19.519873  447486 cri.go:89] found id: ""
	I1030 19:47:19.519901  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.519911  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:19.519919  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:19.519982  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:19.555168  447486 cri.go:89] found id: ""
	I1030 19:47:19.555198  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.555211  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:19.555223  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:19.555239  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:19.607227  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:19.607265  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:19.621465  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:19.621498  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:19.700837  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:19.700869  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:19.700882  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:19.774428  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:19.774468  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:18.982345  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:21.482165  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:19.940122  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:22.439405  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:22.333449  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:24.833942  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:22.314410  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:22.327998  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:22.328083  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:22.365583  447486 cri.go:89] found id: ""
	I1030 19:47:22.365611  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.365622  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:22.365631  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:22.365694  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:22.398964  447486 cri.go:89] found id: ""
	I1030 19:47:22.398996  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.399008  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:22.399016  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:22.399092  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:22.435132  447486 cri.go:89] found id: ""
	I1030 19:47:22.435169  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.435181  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:22.435188  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:22.435252  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:22.471510  447486 cri.go:89] found id: ""
	I1030 19:47:22.471544  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.471557  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:22.471574  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:22.471630  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:22.509611  447486 cri.go:89] found id: ""
	I1030 19:47:22.509639  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.509647  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:22.509653  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:22.509707  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:22.546502  447486 cri.go:89] found id: ""
	I1030 19:47:22.546539  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.546552  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:22.546560  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:22.546630  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:22.584560  447486 cri.go:89] found id: ""
	I1030 19:47:22.584593  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.584605  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:22.584613  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:22.584676  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:22.621421  447486 cri.go:89] found id: ""
	I1030 19:47:22.621461  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.621474  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:22.621486  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:22.621505  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:22.634998  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:22.635038  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:22.711002  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:22.711028  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:22.711047  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:22.790673  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:22.790712  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:22.831804  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:22.831851  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:25.386915  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:25.399854  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:25.399954  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:25.438346  447486 cri.go:89] found id: ""
	I1030 19:47:25.438381  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.438406  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:25.438416  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:25.438500  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:25.474888  447486 cri.go:89] found id: ""
	I1030 19:47:25.474915  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.474924  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:25.474931  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:25.474994  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:25.511925  447486 cri.go:89] found id: ""
	I1030 19:47:25.511955  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.511966  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:25.511973  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:25.512038  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:25.551027  447486 cri.go:89] found id: ""
	I1030 19:47:25.551058  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.551067  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:25.551073  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:25.551144  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:25.584736  447486 cri.go:89] found id: ""
	I1030 19:47:25.584764  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.584773  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:25.584779  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:25.584847  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:25.632765  447486 cri.go:89] found id: ""
	I1030 19:47:25.632798  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.632810  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:25.632818  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:25.632893  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:25.682501  447486 cri.go:89] found id: ""
	I1030 19:47:25.682528  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.682536  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:25.682543  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:25.682591  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:25.728306  447486 cri.go:89] found id: ""
	I1030 19:47:25.728340  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.728352  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:25.728365  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:25.728397  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:25.781908  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:25.781944  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:25.795864  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:25.795899  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:25.868350  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:25.868378  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:25.868392  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:25.944244  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:25.944277  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:23.981016  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:25.982186  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:24.942113  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:27.438568  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:27.333623  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:29.334460  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:28.488216  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:28.501481  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:28.501558  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:28.536808  447486 cri.go:89] found id: ""
	I1030 19:47:28.536838  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.536849  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:28.536857  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:28.536923  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:28.571819  447486 cri.go:89] found id: ""
	I1030 19:47:28.571855  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.571867  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:28.571885  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:28.571966  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:28.605532  447486 cri.go:89] found id: ""
	I1030 19:47:28.605571  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.605582  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:28.605610  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:28.605682  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:28.642108  447486 cri.go:89] found id: ""
	I1030 19:47:28.642140  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.642152  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:28.642159  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:28.642234  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:28.680036  447486 cri.go:89] found id: ""
	I1030 19:47:28.680065  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.680078  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:28.680086  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:28.680151  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:28.716135  447486 cri.go:89] found id: ""
	I1030 19:47:28.716162  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.716171  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:28.716177  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:28.716238  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:28.752364  447486 cri.go:89] found id: ""
	I1030 19:47:28.752398  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.752406  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:28.752413  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:28.752478  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:28.788396  447486 cri.go:89] found id: ""
	I1030 19:47:28.788434  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.788447  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:28.788461  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:28.788476  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:28.841560  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:28.841595  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:28.856134  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:28.856178  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:28.930463  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:28.930507  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:28.930527  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:29.013743  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:29.013795  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:31.557942  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:31.573562  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:31.573654  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:31.625349  447486 cri.go:89] found id: ""
	I1030 19:47:31.625378  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.625386  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:31.625392  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:31.625442  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:31.689536  447486 cri.go:89] found id: ""
	I1030 19:47:31.689566  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.689574  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:31.689581  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:31.689632  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:31.723758  447486 cri.go:89] found id: ""
	I1030 19:47:31.723794  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.723806  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:31.723814  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:31.723890  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:31.762671  447486 cri.go:89] found id: ""
	I1030 19:47:31.762698  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.762707  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:31.762713  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:31.762761  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:31.797658  447486 cri.go:89] found id: ""
	I1030 19:47:31.797686  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.797694  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:31.797702  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:31.797792  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:28.481158  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:30.981477  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:32.981593  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:29.439072  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:31.940019  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:31.833540  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:34.334678  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:31.832186  447486 cri.go:89] found id: ""
	I1030 19:47:31.832217  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.832228  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:31.832236  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:31.832298  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:31.866820  447486 cri.go:89] found id: ""
	I1030 19:47:31.866853  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.866866  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:31.866875  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:31.866937  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:31.901888  447486 cri.go:89] found id: ""
	I1030 19:47:31.901913  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.901922  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:31.901932  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:31.901944  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:31.992343  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:31.992380  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:32.030519  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:32.030559  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:32.084442  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:32.084478  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:32.098919  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:32.098954  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:32.171034  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:34.671243  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:34.685879  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:34.685972  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:34.720657  447486 cri.go:89] found id: ""
	I1030 19:47:34.720686  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.720694  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:34.720700  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:34.720757  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:34.759571  447486 cri.go:89] found id: ""
	I1030 19:47:34.759602  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.759615  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:34.759624  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:34.759685  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:34.795273  447486 cri.go:89] found id: ""
	I1030 19:47:34.795313  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.795322  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:34.795329  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:34.795450  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:34.828999  447486 cri.go:89] found id: ""
	I1030 19:47:34.829035  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.829047  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:34.829054  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:34.829158  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:34.865620  447486 cri.go:89] found id: ""
	I1030 19:47:34.865661  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.865674  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:34.865682  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:34.865753  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:34.900768  447486 cri.go:89] found id: ""
	I1030 19:47:34.900801  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.900812  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:34.900820  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:34.900889  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:34.945023  447486 cri.go:89] found id: ""
	I1030 19:47:34.945048  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.945057  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:34.945063  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:34.945118  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:34.980458  447486 cri.go:89] found id: ""
	I1030 19:47:34.980483  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.980492  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:34.980501  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:34.980514  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:35.052570  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:35.052597  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:35.052613  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:35.133825  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:35.133869  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:35.176016  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:35.176063  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:35.228866  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:35.228903  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:34.982702  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:37.481103  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:34.438712  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:36.938856  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:36.837275  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:39.332612  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:37.743408  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:37.757472  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:37.757547  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:37.794818  447486 cri.go:89] found id: ""
	I1030 19:47:37.794847  447486 logs.go:282] 0 containers: []
	W1030 19:47:37.794856  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:37.794862  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:37.794928  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:37.830025  447486 cri.go:89] found id: ""
	I1030 19:47:37.830064  447486 logs.go:282] 0 containers: []
	W1030 19:47:37.830077  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:37.830086  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:37.830150  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:37.864862  447486 cri.go:89] found id: ""
	I1030 19:47:37.864893  447486 logs.go:282] 0 containers: []
	W1030 19:47:37.864902  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:37.864908  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:37.864958  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:37.901650  447486 cri.go:89] found id: ""
	I1030 19:47:37.901699  447486 logs.go:282] 0 containers: []
	W1030 19:47:37.901713  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:37.901722  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:37.901780  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:37.935824  447486 cri.go:89] found id: ""
	I1030 19:47:37.935854  447486 logs.go:282] 0 containers: []
	W1030 19:47:37.935862  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:37.935868  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:37.935930  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:37.972774  447486 cri.go:89] found id: ""
	I1030 19:47:37.972805  447486 logs.go:282] 0 containers: []
	W1030 19:47:37.972813  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:37.972819  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:37.972868  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:38.007815  447486 cri.go:89] found id: ""
	I1030 19:47:38.007845  447486 logs.go:282] 0 containers: []
	W1030 19:47:38.007856  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:38.007864  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:38.007931  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:38.042525  447486 cri.go:89] found id: ""
	I1030 19:47:38.042559  447486 logs.go:282] 0 containers: []
	W1030 19:47:38.042571  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:38.042584  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:38.042600  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:38.122022  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:38.122048  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:38.122065  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:38.200534  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:38.200575  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:38.240118  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:38.240154  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:38.291936  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:38.291976  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:40.806105  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:40.821268  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:40.821343  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:40.857151  447486 cri.go:89] found id: ""
	I1030 19:47:40.857186  447486 logs.go:282] 0 containers: []
	W1030 19:47:40.857198  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:40.857207  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:40.857266  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:40.893603  447486 cri.go:89] found id: ""
	I1030 19:47:40.893639  447486 logs.go:282] 0 containers: []
	W1030 19:47:40.893648  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:40.893654  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:40.893720  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:40.935294  447486 cri.go:89] found id: ""
	I1030 19:47:40.935330  447486 logs.go:282] 0 containers: []
	W1030 19:47:40.935342  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:40.935349  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:40.935418  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:40.971509  447486 cri.go:89] found id: ""
	I1030 19:47:40.971536  447486 logs.go:282] 0 containers: []
	W1030 19:47:40.971544  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:40.971550  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:40.971610  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:41.009895  447486 cri.go:89] found id: ""
	I1030 19:47:41.009932  447486 logs.go:282] 0 containers: []
	W1030 19:47:41.009941  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:41.009948  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:41.010008  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:41.045170  447486 cri.go:89] found id: ""
	I1030 19:47:41.045208  447486 logs.go:282] 0 containers: []
	W1030 19:47:41.045221  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:41.045229  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:41.045288  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:41.077654  447486 cri.go:89] found id: ""
	I1030 19:47:41.077684  447486 logs.go:282] 0 containers: []
	W1030 19:47:41.077695  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:41.077704  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:41.077771  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:41.111509  447486 cri.go:89] found id: ""
	I1030 19:47:41.111543  447486 logs.go:282] 0 containers: []
	W1030 19:47:41.111552  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:41.111562  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:41.111574  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:41.164939  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:41.164976  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:41.178512  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:41.178589  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:41.258783  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:41.258813  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:41.258832  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:41.338192  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:41.338230  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:39.481210  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:41.481439  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:38.938987  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:40.941386  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:41.333705  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:43.833502  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:43.878155  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:43.892376  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:43.892452  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:43.930556  447486 cri.go:89] found id: ""
	I1030 19:47:43.930594  447486 logs.go:282] 0 containers: []
	W1030 19:47:43.930606  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:43.930614  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:43.930679  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:43.970588  447486 cri.go:89] found id: ""
	I1030 19:47:43.970619  447486 logs.go:282] 0 containers: []
	W1030 19:47:43.970630  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:43.970638  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:43.970706  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:44.005467  447486 cri.go:89] found id: ""
	I1030 19:47:44.005497  447486 logs.go:282] 0 containers: []
	W1030 19:47:44.005506  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:44.005512  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:44.005573  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:44.039126  447486 cri.go:89] found id: ""
	I1030 19:47:44.039164  447486 logs.go:282] 0 containers: []
	W1030 19:47:44.039173  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:44.039179  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:44.039239  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:44.072961  447486 cri.go:89] found id: ""
	I1030 19:47:44.072994  447486 logs.go:282] 0 containers: []
	W1030 19:47:44.073006  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:44.073014  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:44.073109  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:44.105864  447486 cri.go:89] found id: ""
	I1030 19:47:44.105891  447486 logs.go:282] 0 containers: []
	W1030 19:47:44.105900  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:44.105907  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:44.105956  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:44.138198  447486 cri.go:89] found id: ""
	I1030 19:47:44.138240  447486 logs.go:282] 0 containers: []
	W1030 19:47:44.138250  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:44.138264  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:44.138331  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:44.172529  447486 cri.go:89] found id: ""
	I1030 19:47:44.172558  447486 logs.go:282] 0 containers: []
	W1030 19:47:44.172567  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:44.172577  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:44.172594  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:44.248215  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:44.248254  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:44.286169  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:44.286202  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:44.341129  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:44.341167  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:44.354570  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:44.354597  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:44.427790  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:43.481483  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:45.482271  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:47.981312  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:43.440759  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:45.938783  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:47.940512  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:46.332448  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:48.333216  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:46.928728  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:46.943068  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:46.943154  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:46.978385  447486 cri.go:89] found id: ""
	I1030 19:47:46.978416  447486 logs.go:282] 0 containers: []
	W1030 19:47:46.978428  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:46.978436  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:46.978522  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:47.020413  447486 cri.go:89] found id: ""
	I1030 19:47:47.020457  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.020469  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:47.020476  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:47.020547  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:47.061492  447486 cri.go:89] found id: ""
	I1030 19:47:47.061526  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.061538  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:47.061547  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:47.061611  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:47.097621  447486 cri.go:89] found id: ""
	I1030 19:47:47.097659  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.097670  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:47.097679  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:47.097739  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:47.131740  447486 cri.go:89] found id: ""
	I1030 19:47:47.131769  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.131779  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:47.131785  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:47.131856  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:47.167623  447486 cri.go:89] found id: ""
	I1030 19:47:47.167661  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.167674  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:47.167682  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:47.167746  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:47.202299  447486 cri.go:89] found id: ""
	I1030 19:47:47.202328  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.202337  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:47.202344  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:47.202401  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:47.236652  447486 cri.go:89] found id: ""
	I1030 19:47:47.236686  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.236695  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:47.236704  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:47.236716  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:47.289700  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:47.289740  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:47.304929  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:47.304964  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:47.374811  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:47.374842  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:47.374858  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:47.449161  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:47.449196  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:49.989730  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:50.002741  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:50.002821  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:50.037602  447486 cri.go:89] found id: ""
	I1030 19:47:50.037636  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.037647  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:50.037655  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:50.037724  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:50.071346  447486 cri.go:89] found id: ""
	I1030 19:47:50.071383  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.071395  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:50.071405  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:50.071473  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:50.106657  447486 cri.go:89] found id: ""
	I1030 19:47:50.106698  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.106711  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:50.106719  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:50.106783  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:50.140974  447486 cri.go:89] found id: ""
	I1030 19:47:50.141012  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.141025  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:50.141032  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:50.141105  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:50.177715  447486 cri.go:89] found id: ""
	I1030 19:47:50.177748  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.177756  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:50.177763  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:50.177824  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:50.212234  447486 cri.go:89] found id: ""
	I1030 19:47:50.212263  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.212272  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:50.212278  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:50.212337  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:50.250791  447486 cri.go:89] found id: ""
	I1030 19:47:50.250826  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.250835  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:50.250842  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:50.250908  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:50.288575  447486 cri.go:89] found id: ""
	I1030 19:47:50.288607  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.288615  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:50.288628  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:50.288643  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:50.358015  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:50.358039  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:50.358054  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:50.433194  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:50.433235  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:50.473485  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:50.473519  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:50.523581  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:50.523618  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:49.981614  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:51.982079  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:50.439717  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:52.940170  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:50.333498  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:52.832848  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:54.833689  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:53.038393  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:53.052835  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:53.052910  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:53.088797  447486 cri.go:89] found id: ""
	I1030 19:47:53.088828  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.088837  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:53.088843  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:53.088897  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:53.124627  447486 cri.go:89] found id: ""
	I1030 19:47:53.124659  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.124668  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:53.124674  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:53.124724  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:53.159127  447486 cri.go:89] found id: ""
	I1030 19:47:53.159163  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.159175  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:53.159183  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:53.159244  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:53.191770  447486 cri.go:89] found id: ""
	I1030 19:47:53.191801  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.191810  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:53.191817  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:53.191885  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:53.227727  447486 cri.go:89] found id: ""
	I1030 19:47:53.227761  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.227774  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:53.227781  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:53.227842  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:53.262937  447486 cri.go:89] found id: ""
	I1030 19:47:53.262969  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.262981  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:53.262989  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:53.263060  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:53.296070  447486 cri.go:89] found id: ""
	I1030 19:47:53.296113  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.296124  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:53.296133  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:53.296197  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:53.332628  447486 cri.go:89] found id: ""
	I1030 19:47:53.332663  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.332674  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:53.332687  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:53.332702  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:53.385004  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:53.385046  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:53.400139  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:53.400185  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:53.477792  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:53.477826  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:53.477858  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:53.553145  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:53.553186  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:56.094454  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:56.107827  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:56.107900  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:56.141701  447486 cri.go:89] found id: ""
	I1030 19:47:56.141739  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.141751  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:56.141763  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:56.141831  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:56.179973  447486 cri.go:89] found id: ""
	I1030 19:47:56.180003  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.180016  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:56.180023  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:56.180099  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:56.220456  447486 cri.go:89] found id: ""
	I1030 19:47:56.220486  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.220496  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:56.220503  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:56.220578  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:56.259699  447486 cri.go:89] found id: ""
	I1030 19:47:56.259727  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.259736  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:56.259741  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:56.259791  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:56.302726  447486 cri.go:89] found id: ""
	I1030 19:47:56.302762  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.302775  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:56.302783  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:56.302850  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:56.339791  447486 cri.go:89] found id: ""
	I1030 19:47:56.339819  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.339828  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:56.339834  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:56.339889  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:56.381291  447486 cri.go:89] found id: ""
	I1030 19:47:56.381325  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.381337  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:56.381345  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:56.381401  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:56.417150  447486 cri.go:89] found id: ""
	I1030 19:47:56.417182  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.417194  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:56.417207  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:56.417227  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:56.466963  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:56.467005  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:56.481528  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:56.481557  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:56.554843  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:56.554872  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:56.554887  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:56.635798  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:56.635846  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:54.480601  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:56.481475  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:55.439618  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:57.940438  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:57.333560  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:59.337314  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:59.179829  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:59.193083  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:59.193160  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:59.231253  447486 cri.go:89] found id: ""
	I1030 19:47:59.231288  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.231302  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:59.231311  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:59.231382  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:59.265982  447486 cri.go:89] found id: ""
	I1030 19:47:59.266013  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.266022  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:59.266028  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:59.266090  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:59.303724  447486 cri.go:89] found id: ""
	I1030 19:47:59.303761  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.303773  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:59.303781  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:59.303848  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:59.342137  447486 cri.go:89] found id: ""
	I1030 19:47:59.342163  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.342172  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:59.342180  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:59.342246  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:59.382652  447486 cri.go:89] found id: ""
	I1030 19:47:59.382684  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.382693  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:59.382700  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:59.382761  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:59.422428  447486 cri.go:89] found id: ""
	I1030 19:47:59.422454  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.422463  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:59.422469  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:59.422539  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:59.464047  447486 cri.go:89] found id: ""
	I1030 19:47:59.464079  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.464089  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:59.464095  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:59.464146  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:59.500658  447486 cri.go:89] found id: ""
	I1030 19:47:59.500693  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.500705  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:59.500716  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:59.500732  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:59.554634  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:59.554679  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:59.567956  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:59.567986  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:59.646305  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:59.646332  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:59.646349  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:59.730008  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:59.730052  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:58.486516  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:00.982184  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:00.439220  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:02.439945  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:01.832883  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:03.834027  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:02.274141  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:02.287246  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:02.287320  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:02.322166  447486 cri.go:89] found id: ""
	I1030 19:48:02.322320  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.322336  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:02.322346  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:02.322421  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:02.358101  447486 cri.go:89] found id: ""
	I1030 19:48:02.358131  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.358140  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:02.358146  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:02.358209  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:02.394812  447486 cri.go:89] found id: ""
	I1030 19:48:02.394898  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.394915  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:02.394924  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:02.394990  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:02.429128  447486 cri.go:89] found id: ""
	I1030 19:48:02.429165  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.429177  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:02.429186  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:02.429358  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:02.465878  447486 cri.go:89] found id: ""
	I1030 19:48:02.465907  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.465915  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:02.465921  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:02.465973  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:02.502758  447486 cri.go:89] found id: ""
	I1030 19:48:02.502794  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.502805  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:02.502813  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:02.502879  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:02.540111  447486 cri.go:89] found id: ""
	I1030 19:48:02.540142  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.540152  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:02.540158  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:02.540222  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:02.574728  447486 cri.go:89] found id: ""
	I1030 19:48:02.574762  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.574774  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:02.574787  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:02.574804  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:02.613333  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:02.613374  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:02.664970  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:02.665013  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:02.679594  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:02.679626  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:02.744184  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:02.744208  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:02.744222  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:05.326826  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:05.340166  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:05.340232  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:05.376742  447486 cri.go:89] found id: ""
	I1030 19:48:05.376774  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.376789  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:05.376795  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:05.376865  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:05.413981  447486 cri.go:89] found id: ""
	I1030 19:48:05.414026  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.414039  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:05.414047  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:05.414121  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:05.449811  447486 cri.go:89] found id: ""
	I1030 19:48:05.449842  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.449854  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:05.449862  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:05.449925  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:05.502576  447486 cri.go:89] found id: ""
	I1030 19:48:05.502610  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.502622  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:05.502630  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:05.502721  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:05.536747  447486 cri.go:89] found id: ""
	I1030 19:48:05.536778  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.536787  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:05.536793  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:05.536857  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:05.570308  447486 cri.go:89] found id: ""
	I1030 19:48:05.570335  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.570344  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:05.570353  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:05.570420  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:05.605006  447486 cri.go:89] found id: ""
	I1030 19:48:05.605037  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.605048  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:05.605054  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:05.605109  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:05.638651  447486 cri.go:89] found id: ""
	I1030 19:48:05.638681  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.638693  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:05.638705  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:05.638720  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:05.690734  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:05.690769  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:05.704561  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:05.704588  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:05.779426  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:05.779448  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:05.779471  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:05.866320  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:05.866355  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:03.481614  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:05.482428  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:07.981875  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:04.939485  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:07.438925  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:06.334094  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:08.834525  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:08.409454  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:08.423687  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:08.423767  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:08.463554  447486 cri.go:89] found id: ""
	I1030 19:48:08.463581  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.463591  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:08.463597  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:08.463654  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:08.500159  447486 cri.go:89] found id: ""
	I1030 19:48:08.500186  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.500195  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:08.500200  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:08.500253  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:08.535670  447486 cri.go:89] found id: ""
	I1030 19:48:08.535701  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.535710  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:08.535717  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:08.535785  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:08.572921  447486 cri.go:89] found id: ""
	I1030 19:48:08.572958  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.572968  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:08.572975  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:08.573052  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:08.610873  447486 cri.go:89] found id: ""
	I1030 19:48:08.610908  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.610918  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:08.610924  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:08.610978  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:08.645430  447486 cri.go:89] found id: ""
	I1030 19:48:08.645458  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.645466  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:08.645475  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:08.645528  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:08.681212  447486 cri.go:89] found id: ""
	I1030 19:48:08.681246  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.681258  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:08.681266  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:08.681332  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:08.716619  447486 cri.go:89] found id: ""
	I1030 19:48:08.716651  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.716661  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:08.716671  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:08.716682  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:08.794090  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:08.794134  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:08.833209  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:08.833251  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:08.884781  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:08.884817  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:08.898556  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:08.898586  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:08.967713  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:11.468230  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:11.482593  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:11.482660  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:11.518191  447486 cri.go:89] found id: ""
	I1030 19:48:11.518225  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.518235  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:11.518242  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:11.518295  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:11.557199  447486 cri.go:89] found id: ""
	I1030 19:48:11.557229  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.557237  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:11.557252  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:11.557323  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:11.595605  447486 cri.go:89] found id: ""
	I1030 19:48:11.595638  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.595650  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:11.595664  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:11.595732  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:11.634253  447486 cri.go:89] found id: ""
	I1030 19:48:11.634281  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.634295  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:11.634301  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:11.634358  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:11.671138  447486 cri.go:89] found id: ""
	I1030 19:48:11.671167  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.671176  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:11.671183  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:11.671238  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:11.707202  447486 cri.go:89] found id: ""
	I1030 19:48:11.707228  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.707237  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:11.707243  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:11.707302  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:11.745514  447486 cri.go:89] found id: ""
	I1030 19:48:11.745549  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.745561  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:11.745570  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:11.745640  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:11.781403  447486 cri.go:89] found id: ""
	I1030 19:48:11.781438  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.781449  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:11.781458  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:11.781471  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:10.486349  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:12.980881  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:09.440261  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:11.938439  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:11.332911  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:13.334382  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:11.832934  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:11.832972  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:11.853498  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:11.853545  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:11.949365  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:11.949389  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:11.949405  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:12.033776  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:12.033823  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:14.579536  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:14.593497  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:14.593579  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:14.627853  447486 cri.go:89] found id: ""
	I1030 19:48:14.627886  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.627895  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:14.627902  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:14.627953  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:14.662356  447486 cri.go:89] found id: ""
	I1030 19:48:14.662386  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.662398  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:14.662406  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:14.662481  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:14.699334  447486 cri.go:89] found id: ""
	I1030 19:48:14.699370  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.699382  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:14.699390  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:14.699457  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:14.733884  447486 cri.go:89] found id: ""
	I1030 19:48:14.733924  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.733937  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:14.733946  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:14.734025  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:14.775208  447486 cri.go:89] found id: ""
	I1030 19:48:14.775240  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.775249  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:14.775256  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:14.775315  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:14.809663  447486 cri.go:89] found id: ""
	I1030 19:48:14.809695  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.809704  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:14.809711  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:14.809778  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:14.844963  447486 cri.go:89] found id: ""
	I1030 19:48:14.844996  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.845006  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:14.845014  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:14.845084  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:14.881236  447486 cri.go:89] found id: ""
	I1030 19:48:14.881273  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.881283  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:14.881293  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:14.881305  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:14.933792  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:14.933830  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:14.948038  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:14.948065  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:15.023497  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:15.023519  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:15.023532  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:15.105682  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:15.105741  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:14.980949  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:16.981063  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:13.940399  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:16.438545  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:15.834158  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:18.332452  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:17.646238  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:17.665366  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:17.665455  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:17.707729  447486 cri.go:89] found id: ""
	I1030 19:48:17.707783  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.707796  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:17.707805  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:17.707883  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:17.759922  447486 cri.go:89] found id: ""
	I1030 19:48:17.759959  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.759972  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:17.759980  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:17.760049  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:17.807635  447486 cri.go:89] found id: ""
	I1030 19:48:17.807671  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.807683  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:17.807695  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:17.807770  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:17.844205  447486 cri.go:89] found id: ""
	I1030 19:48:17.844236  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.844247  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:17.844255  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:17.844326  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:17.879079  447486 cri.go:89] found id: ""
	I1030 19:48:17.879113  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.879125  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:17.879134  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:17.879202  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:17.916548  447486 cri.go:89] found id: ""
	I1030 19:48:17.916584  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.916594  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:17.916601  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:17.916654  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:17.950597  447486 cri.go:89] found id: ""
	I1030 19:48:17.950626  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.950635  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:17.950640  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:17.950695  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:17.985924  447486 cri.go:89] found id: ""
	I1030 19:48:17.985957  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.985968  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:17.985980  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:17.985996  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:18.066211  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:18.066250  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:18.107228  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:18.107279  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:18.157508  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:18.157543  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:18.172208  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:18.172243  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:18.248100  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:20.748681  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:20.763369  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:20.763445  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:20.804288  447486 cri.go:89] found id: ""
	I1030 19:48:20.804323  447486 logs.go:282] 0 containers: []
	W1030 19:48:20.804336  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:20.804343  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:20.804410  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:20.838925  447486 cri.go:89] found id: ""
	I1030 19:48:20.838964  447486 logs.go:282] 0 containers: []
	W1030 19:48:20.838973  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:20.838979  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:20.839030  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:20.873560  447486 cri.go:89] found id: ""
	I1030 19:48:20.873596  447486 logs.go:282] 0 containers: []
	W1030 19:48:20.873608  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:20.873617  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:20.873681  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:20.908670  447486 cri.go:89] found id: ""
	I1030 19:48:20.908705  447486 logs.go:282] 0 containers: []
	W1030 19:48:20.908716  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:20.908723  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:20.908791  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:20.945901  447486 cri.go:89] found id: ""
	I1030 19:48:20.945929  447486 logs.go:282] 0 containers: []
	W1030 19:48:20.945937  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:20.945943  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:20.945991  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:20.980184  447486 cri.go:89] found id: ""
	I1030 19:48:20.980216  447486 logs.go:282] 0 containers: []
	W1030 19:48:20.980227  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:20.980236  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:20.980299  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:21.024243  447486 cri.go:89] found id: ""
	I1030 19:48:21.024272  447486 logs.go:282] 0 containers: []
	W1030 19:48:21.024284  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:21.024293  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:21.024366  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:21.063315  447486 cri.go:89] found id: ""
	I1030 19:48:21.063348  447486 logs.go:282] 0 containers: []
	W1030 19:48:21.063358  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:21.063370  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:21.063387  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:21.130434  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:21.130463  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:21.130480  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:21.209067  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:21.209107  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:21.251005  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:21.251035  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:21.303365  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:21.303402  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:18.981952  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:20.982372  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:18.439921  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:20.939869  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:22.940058  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:20.333700  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:22.833845  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:24.834560  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:23.817700  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:23.831060  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:23.831133  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:23.864299  447486 cri.go:89] found id: ""
	I1030 19:48:23.864334  447486 logs.go:282] 0 containers: []
	W1030 19:48:23.864346  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:23.864354  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:23.864420  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:23.900815  447486 cri.go:89] found id: ""
	I1030 19:48:23.900844  447486 logs.go:282] 0 containers: []
	W1030 19:48:23.900854  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:23.900869  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:23.900929  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:23.939888  447486 cri.go:89] found id: ""
	I1030 19:48:23.939917  447486 logs.go:282] 0 containers: []
	W1030 19:48:23.939928  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:23.939936  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:23.939999  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:23.975359  447486 cri.go:89] found id: ""
	I1030 19:48:23.975387  447486 logs.go:282] 0 containers: []
	W1030 19:48:23.975395  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:23.975401  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:23.975452  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:24.012779  447486 cri.go:89] found id: ""
	I1030 19:48:24.012819  447486 logs.go:282] 0 containers: []
	W1030 19:48:24.012832  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:24.012840  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:24.012908  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:24.048853  447486 cri.go:89] found id: ""
	I1030 19:48:24.048890  447486 logs.go:282] 0 containers: []
	W1030 19:48:24.048903  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:24.048912  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:24.048979  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:24.084744  447486 cri.go:89] found id: ""
	I1030 19:48:24.084784  447486 logs.go:282] 0 containers: []
	W1030 19:48:24.084797  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:24.084806  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:24.084860  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:24.121719  447486 cri.go:89] found id: ""
	I1030 19:48:24.121757  447486 logs.go:282] 0 containers: []
	W1030 19:48:24.121767  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:24.121777  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:24.121791  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:24.178691  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:24.178733  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:24.192885  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:24.192916  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:24.268771  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:24.268815  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:24.268832  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:24.349663  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:24.349699  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:23.481516  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:25.481700  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:27.481886  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:24.940106  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:26.940309  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:27.334165  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:29.834162  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:26.887325  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:26.900480  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:26.900558  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:26.936157  447486 cri.go:89] found id: ""
	I1030 19:48:26.936188  447486 logs.go:282] 0 containers: []
	W1030 19:48:26.936200  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:26.936207  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:26.936278  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:26.975580  447486 cri.go:89] found id: ""
	I1030 19:48:26.975615  447486 logs.go:282] 0 containers: []
	W1030 19:48:26.975626  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:26.975633  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:26.975705  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:27.010549  447486 cri.go:89] found id: ""
	I1030 19:48:27.010579  447486 logs.go:282] 0 containers: []
	W1030 19:48:27.010592  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:27.010600  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:27.010659  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:27.047505  447486 cri.go:89] found id: ""
	I1030 19:48:27.047541  447486 logs.go:282] 0 containers: []
	W1030 19:48:27.047553  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:27.047561  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:27.047628  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:27.083379  447486 cri.go:89] found id: ""
	I1030 19:48:27.083409  447486 logs.go:282] 0 containers: []
	W1030 19:48:27.083420  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:27.083429  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:27.083492  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:27.117912  447486 cri.go:89] found id: ""
	I1030 19:48:27.117954  447486 logs.go:282] 0 containers: []
	W1030 19:48:27.117967  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:27.117976  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:27.118049  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:27.151721  447486 cri.go:89] found id: ""
	I1030 19:48:27.151749  447486 logs.go:282] 0 containers: []
	W1030 19:48:27.151758  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:27.151765  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:27.151817  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:27.188940  447486 cri.go:89] found id: ""
	I1030 19:48:27.188981  447486 logs.go:282] 0 containers: []
	W1030 19:48:27.188989  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:27.188999  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:27.189011  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:27.243926  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:27.243960  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:27.258702  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:27.258731  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:27.326983  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:27.327023  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:27.327041  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:27.410761  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:27.410808  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:29.953219  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:29.967972  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:29.968078  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:30.003975  447486 cri.go:89] found id: ""
	I1030 19:48:30.004004  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.004014  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:30.004023  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:30.004097  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:30.041732  447486 cri.go:89] found id: ""
	I1030 19:48:30.041768  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.041780  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:30.041787  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:30.041863  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:30.078262  447486 cri.go:89] found id: ""
	I1030 19:48:30.078297  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.078308  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:30.078315  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:30.078379  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:30.116100  447486 cri.go:89] found id: ""
	I1030 19:48:30.116137  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.116149  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:30.116157  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:30.116229  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:30.150925  447486 cri.go:89] found id: ""
	I1030 19:48:30.150953  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.150964  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:30.150972  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:30.151041  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:30.192188  447486 cri.go:89] found id: ""
	I1030 19:48:30.192219  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.192230  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:30.192237  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:30.192314  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:30.231144  447486 cri.go:89] found id: ""
	I1030 19:48:30.231180  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.231192  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:30.231200  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:30.231277  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:30.271198  447486 cri.go:89] found id: ""
	I1030 19:48:30.271228  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.271242  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:30.271265  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:30.271277  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:30.322750  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:30.322792  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:30.337745  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:30.337774  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:30.417198  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:30.417224  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:30.417240  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:30.503327  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:30.503364  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:29.982893  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:32.482051  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:29.440509  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:31.939517  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:32.333571  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:34.833482  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:33.047719  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:33.062330  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:33.062395  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:33.101049  447486 cri.go:89] found id: ""
	I1030 19:48:33.101088  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.101101  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:33.101108  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:33.101175  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:33.135236  447486 cri.go:89] found id: ""
	I1030 19:48:33.135268  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.135279  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:33.135286  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:33.135357  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:33.169279  447486 cri.go:89] found id: ""
	I1030 19:48:33.169314  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.169325  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:33.169333  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:33.169401  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:33.203336  447486 cri.go:89] found id: ""
	I1030 19:48:33.203380  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.203392  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:33.203401  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:33.203470  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:33.238223  447486 cri.go:89] found id: ""
	I1030 19:48:33.238258  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.238270  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:33.238279  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:33.238345  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:33.272891  447486 cri.go:89] found id: ""
	I1030 19:48:33.272925  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.272937  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:33.272946  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:33.273014  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:33.312452  447486 cri.go:89] found id: ""
	I1030 19:48:33.312480  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.312489  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:33.312496  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:33.312547  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:33.349041  447486 cri.go:89] found id: ""
	I1030 19:48:33.349076  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.349091  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:33.349104  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:33.349130  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:33.430888  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:33.430940  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:33.469414  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:33.469444  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:33.518989  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:33.519022  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:33.532656  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:33.532690  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:33.605896  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:36.106207  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:36.120564  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:36.120646  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:36.156854  447486 cri.go:89] found id: ""
	I1030 19:48:36.156887  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.156900  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:36.156909  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:36.156988  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:36.195027  447486 cri.go:89] found id: ""
	I1030 19:48:36.195059  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.195072  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:36.195080  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:36.195150  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:36.235639  447486 cri.go:89] found id: ""
	I1030 19:48:36.235672  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.235683  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:36.235692  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:36.235758  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:36.281659  447486 cri.go:89] found id: ""
	I1030 19:48:36.281693  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.281702  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:36.281709  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:36.281762  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:36.315427  447486 cri.go:89] found id: ""
	I1030 19:48:36.315454  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.315463  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:36.315469  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:36.315531  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:36.353084  447486 cri.go:89] found id: ""
	I1030 19:48:36.353110  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.353120  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:36.353126  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:36.353197  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:36.388497  447486 cri.go:89] found id: ""
	I1030 19:48:36.388533  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.388545  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:36.388553  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:36.388616  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:36.423625  447486 cri.go:89] found id: ""
	I1030 19:48:36.423658  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.423667  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:36.423676  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:36.423691  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:36.476722  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:36.476757  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:36.490669  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:36.490700  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:36.558587  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:36.558621  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:36.558639  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:36.635606  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:36.635654  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:34.482414  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:36.981552  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:34.439796  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:36.938335  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:37.333231  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:39.333707  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:39.174007  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:39.187709  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:39.187786  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:39.226131  447486 cri.go:89] found id: ""
	I1030 19:48:39.226165  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.226177  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:39.226185  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:39.226265  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:39.265963  447486 cri.go:89] found id: ""
	I1030 19:48:39.266003  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.266016  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:39.266024  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:39.266092  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:39.302586  447486 cri.go:89] found id: ""
	I1030 19:48:39.302624  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.302637  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:39.302645  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:39.302710  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:39.347869  447486 cri.go:89] found id: ""
	I1030 19:48:39.347903  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.347916  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:39.347924  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:39.347994  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:39.384252  447486 cri.go:89] found id: ""
	I1030 19:48:39.384280  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.384288  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:39.384294  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:39.384347  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:39.418847  447486 cri.go:89] found id: ""
	I1030 19:48:39.418876  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.418885  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:39.418891  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:39.418950  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:39.458408  447486 cri.go:89] found id: ""
	I1030 19:48:39.458454  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.458467  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:39.458480  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:39.458567  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:39.493889  447486 cri.go:89] found id: ""
	I1030 19:48:39.493923  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.493934  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:39.493946  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:39.493959  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:39.548692  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:39.548746  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:39.562083  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:39.562110  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:39.633822  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:39.633845  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:39.633857  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:39.711765  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:39.711814  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:39.482010  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:41.981380  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:38.939254  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:40.940318  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:41.832456  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:43.832780  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:42.254337  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:42.268137  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:42.268202  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:42.303383  447486 cri.go:89] found id: ""
	I1030 19:48:42.303418  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.303428  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:42.303434  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:42.303501  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:42.349405  447486 cri.go:89] found id: ""
	I1030 19:48:42.349437  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.349447  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:42.349453  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:42.349504  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:42.384317  447486 cri.go:89] found id: ""
	I1030 19:48:42.384353  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.384363  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:42.384369  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:42.384424  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:42.418712  447486 cri.go:89] found id: ""
	I1030 19:48:42.418759  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.418768  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:42.418775  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:42.418833  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:42.454234  447486 cri.go:89] found id: ""
	I1030 19:48:42.454270  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.454280  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:42.454288  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:42.454362  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:42.488813  447486 cri.go:89] found id: ""
	I1030 19:48:42.488845  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.488855  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:42.488863  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:42.488929  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:42.525883  447486 cri.go:89] found id: ""
	I1030 19:48:42.525917  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.525929  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:42.525938  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:42.526006  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:42.561197  447486 cri.go:89] found id: ""
	I1030 19:48:42.561233  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.561246  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:42.561259  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:42.561275  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:42.599818  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:42.599854  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:42.654341  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:42.654382  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:42.668163  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:42.668188  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:42.739630  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:42.739659  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:42.739671  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:45.316154  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:45.330372  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:45.330454  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:45.369093  447486 cri.go:89] found id: ""
	I1030 19:48:45.369125  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.369135  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:45.369141  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:45.369192  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:45.407681  447486 cri.go:89] found id: ""
	I1030 19:48:45.407715  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.407726  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:45.407732  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:45.407787  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:45.444445  447486 cri.go:89] found id: ""
	I1030 19:48:45.444474  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.444482  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:45.444488  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:45.444539  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:45.481538  447486 cri.go:89] found id: ""
	I1030 19:48:45.481570  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.481583  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:45.481591  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:45.481654  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:45.515088  447486 cri.go:89] found id: ""
	I1030 19:48:45.515123  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.515132  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:45.515139  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:45.515195  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:45.550085  447486 cri.go:89] found id: ""
	I1030 19:48:45.550133  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.550145  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:45.550152  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:45.550214  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:45.583950  447486 cri.go:89] found id: ""
	I1030 19:48:45.583985  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.583999  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:45.584008  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:45.584082  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:45.617320  447486 cri.go:89] found id: ""
	I1030 19:48:45.617349  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.617358  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:45.617369  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:45.617389  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:45.668792  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:45.668833  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:45.683144  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:45.683178  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:45.758707  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:45.758732  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:45.758744  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:45.833807  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:45.833837  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:43.982806  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:46.480452  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:43.440702  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:45.938267  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:47.938396  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:45.833319  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:48.332420  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:48.374096  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:48.387812  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:48.387903  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:48.426958  447486 cri.go:89] found id: ""
	I1030 19:48:48.426987  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.426996  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:48.427002  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:48.427051  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:48.462216  447486 cri.go:89] found id: ""
	I1030 19:48:48.462249  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.462260  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:48.462268  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:48.462336  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:48.495666  447486 cri.go:89] found id: ""
	I1030 19:48:48.495699  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.495709  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:48.495716  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:48.495798  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:48.530653  447486 cri.go:89] found id: ""
	I1030 19:48:48.530686  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.530698  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:48.530709  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:48.530777  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:48.564788  447486 cri.go:89] found id: ""
	I1030 19:48:48.564826  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.564838  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:48.564846  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:48.564921  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:48.600735  447486 cri.go:89] found id: ""
	I1030 19:48:48.600772  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.600784  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:48.600793  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:48.600863  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:48.637063  447486 cri.go:89] found id: ""
	I1030 19:48:48.637095  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.637107  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:48.637115  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:48.637182  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:48.673279  447486 cri.go:89] found id: ""
	I1030 19:48:48.673314  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.673334  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:48.673347  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:48.673362  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:48.724239  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:48.724280  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:48.738390  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:48.738425  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:48.812130  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:48.812155  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:48.812171  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:48.896253  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:48.896298  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:51.441155  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:51.454675  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:51.454751  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:51.490464  447486 cri.go:89] found id: ""
	I1030 19:48:51.490511  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.490523  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:51.490532  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:51.490600  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:51.525364  447486 cri.go:89] found id: ""
	I1030 19:48:51.525399  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.525411  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:51.525419  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:51.525485  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:51.559028  447486 cri.go:89] found id: ""
	I1030 19:48:51.559062  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.559071  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:51.559078  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:51.559139  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:51.595188  447486 cri.go:89] found id: ""
	I1030 19:48:51.595217  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.595225  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:51.595231  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:51.595300  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:51.628987  447486 cri.go:89] found id: ""
	I1030 19:48:51.629023  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.629039  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:51.629047  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:51.629119  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:51.663257  447486 cri.go:89] found id: ""
	I1030 19:48:51.663286  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.663295  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:51.663303  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:51.663368  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:51.712562  447486 cri.go:89] found id: ""
	I1030 19:48:51.712600  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.712613  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:51.712622  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:51.712684  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:51.761730  447486 cri.go:89] found id: ""
	I1030 19:48:51.761760  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.761769  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:51.761779  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:51.761794  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:51.775595  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:51.775624  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:48:48.481851  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:50.980723  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:52.982177  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:49.939273  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:51.939972  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:50.333451  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:52.333773  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:54.835087  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	W1030 19:48:51.849120  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:51.849144  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:51.849157  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:51.931364  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:51.931403  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:51.971195  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:51.971229  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:54.525136  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:54.539137  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:54.539227  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:54.574281  447486 cri.go:89] found id: ""
	I1030 19:48:54.574316  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.574339  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:54.574348  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:54.574420  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:54.611109  447486 cri.go:89] found id: ""
	I1030 19:48:54.611149  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.611161  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:54.611170  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:54.611230  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:54.648396  447486 cri.go:89] found id: ""
	I1030 19:48:54.648428  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.648439  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:54.648447  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:54.648510  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:54.683834  447486 cri.go:89] found id: ""
	I1030 19:48:54.683871  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.683884  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:54.683892  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:54.683954  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:54.717391  447486 cri.go:89] found id: ""
	I1030 19:48:54.717421  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.717430  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:54.717436  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:54.717495  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:54.753783  447486 cri.go:89] found id: ""
	I1030 19:48:54.753812  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.753821  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:54.753827  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:54.753878  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:54.788231  447486 cri.go:89] found id: ""
	I1030 19:48:54.788270  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.788282  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:54.788291  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:54.788359  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:54.823949  447486 cri.go:89] found id: ""
	I1030 19:48:54.823989  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.824001  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:54.824014  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:54.824052  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:54.838936  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:54.838967  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:54.911785  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:54.911812  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:54.911825  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:54.993268  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:54.993302  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:55.032557  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:55.032588  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:55.481330  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:57.482183  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:53.940343  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:56.439870  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:57.333262  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:59.333560  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:57.588726  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:57.603010  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:57.603085  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:57.636499  447486 cri.go:89] found id: ""
	I1030 19:48:57.636531  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.636542  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:57.636551  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:57.636624  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:57.671698  447486 cri.go:89] found id: ""
	I1030 19:48:57.671728  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.671739  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:57.671748  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:57.671815  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:57.707387  447486 cri.go:89] found id: ""
	I1030 19:48:57.707414  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.707422  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:57.707431  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:57.707482  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:57.745404  447486 cri.go:89] found id: ""
	I1030 19:48:57.745432  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.745440  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:57.745447  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:57.745507  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:57.784874  447486 cri.go:89] found id: ""
	I1030 19:48:57.784903  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.784912  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:57.784919  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:57.784984  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:57.824663  447486 cri.go:89] found id: ""
	I1030 19:48:57.824697  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.824707  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:57.824713  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:57.824773  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:57.862542  447486 cri.go:89] found id: ""
	I1030 19:48:57.862581  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.862593  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:57.862601  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:57.862669  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:57.897901  447486 cri.go:89] found id: ""
	I1030 19:48:57.897935  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.897947  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:57.897959  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:57.897974  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:57.951898  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:57.951936  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:57.966282  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:57.966327  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:58.035515  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:58.035546  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:58.035562  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:58.114825  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:58.114876  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:00.705537  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:00.719589  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:00.719672  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:00.762299  447486 cri.go:89] found id: ""
	I1030 19:49:00.762330  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.762338  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:00.762356  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:00.762438  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:00.802228  447486 cri.go:89] found id: ""
	I1030 19:49:00.802259  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.802268  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:00.802275  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:00.802345  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:00.836531  447486 cri.go:89] found id: ""
	I1030 19:49:00.836557  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.836565  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:00.836572  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:00.836630  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:00.869332  447486 cri.go:89] found id: ""
	I1030 19:49:00.869360  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.869369  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:00.869375  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:00.869437  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:00.904643  447486 cri.go:89] found id: ""
	I1030 19:49:00.904675  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.904684  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:00.904691  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:00.904768  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:00.939020  447486 cri.go:89] found id: ""
	I1030 19:49:00.939050  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.939061  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:00.939068  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:00.939142  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:00.974586  447486 cri.go:89] found id: ""
	I1030 19:49:00.974625  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.974638  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:00.974646  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:00.974707  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:01.009337  447486 cri.go:89] found id: ""
	I1030 19:49:01.009375  447486 logs.go:282] 0 containers: []
	W1030 19:49:01.009386  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:01.009399  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:01.009416  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:01.067087  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:01.067125  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:01.081681  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:01.081713  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:01.153057  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:01.153082  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:01.153096  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:01.236113  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:01.236153  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:59.981252  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:01.981799  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:58.938430  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:00.940905  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:01.333854  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:03.334325  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:03.774056  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:03.788395  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:03.788482  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:03.823847  447486 cri.go:89] found id: ""
	I1030 19:49:03.823880  447486 logs.go:282] 0 containers: []
	W1030 19:49:03.823892  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:03.823900  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:03.823973  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:03.864776  447486 cri.go:89] found id: ""
	I1030 19:49:03.864807  447486 logs.go:282] 0 containers: []
	W1030 19:49:03.864819  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:03.864827  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:03.864890  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:03.912516  447486 cri.go:89] found id: ""
	I1030 19:49:03.912572  447486 logs.go:282] 0 containers: []
	W1030 19:49:03.912585  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:03.912593  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:03.912660  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:03.962459  447486 cri.go:89] found id: ""
	I1030 19:49:03.962509  447486 logs.go:282] 0 containers: []
	W1030 19:49:03.962521  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:03.962530  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:03.962602  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:04.019107  447486 cri.go:89] found id: ""
	I1030 19:49:04.019143  447486 logs.go:282] 0 containers: []
	W1030 19:49:04.019152  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:04.019159  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:04.019217  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:04.054016  447486 cri.go:89] found id: ""
	I1030 19:49:04.054047  447486 logs.go:282] 0 containers: []
	W1030 19:49:04.054056  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:04.054063  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:04.054140  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:04.089907  447486 cri.go:89] found id: ""
	I1030 19:49:04.089938  447486 logs.go:282] 0 containers: []
	W1030 19:49:04.089948  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:04.089955  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:04.090007  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:04.128081  447486 cri.go:89] found id: ""
	I1030 19:49:04.128110  447486 logs.go:282] 0 containers: []
	W1030 19:49:04.128118  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:04.128128  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:04.128142  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:04.182419  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:04.182462  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:04.196909  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:04.196941  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:04.267267  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:04.267298  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:04.267317  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:04.346826  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:04.346876  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:03.984259  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:06.481362  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:03.438786  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:05.938707  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:07.939642  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:05.334541  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:07.834233  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:06.887266  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:06.902462  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:06.902554  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:06.938850  447486 cri.go:89] found id: ""
	I1030 19:49:06.938880  447486 logs.go:282] 0 containers: []
	W1030 19:49:06.938891  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:06.938899  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:06.938961  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:06.983284  447486 cri.go:89] found id: ""
	I1030 19:49:06.983315  447486 logs.go:282] 0 containers: []
	W1030 19:49:06.983330  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:06.983339  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:06.983406  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:07.016332  447486 cri.go:89] found id: ""
	I1030 19:49:07.016359  447486 logs.go:282] 0 containers: []
	W1030 19:49:07.016369  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:07.016375  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:07.016428  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:07.051425  447486 cri.go:89] found id: ""
	I1030 19:49:07.051459  447486 logs.go:282] 0 containers: []
	W1030 19:49:07.051471  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:07.051480  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:07.051550  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:07.083396  447486 cri.go:89] found id: ""
	I1030 19:49:07.083429  447486 logs.go:282] 0 containers: []
	W1030 19:49:07.083437  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:07.083444  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:07.083507  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:07.116616  447486 cri.go:89] found id: ""
	I1030 19:49:07.116646  447486 logs.go:282] 0 containers: []
	W1030 19:49:07.116654  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:07.116661  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:07.116728  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:07.149219  447486 cri.go:89] found id: ""
	I1030 19:49:07.149251  447486 logs.go:282] 0 containers: []
	W1030 19:49:07.149259  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:07.149265  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:07.149318  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:07.188404  447486 cri.go:89] found id: ""
	I1030 19:49:07.188435  447486 logs.go:282] 0 containers: []
	W1030 19:49:07.188444  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:07.188454  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:07.188468  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:07.247600  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:07.247640  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:07.262196  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:07.262231  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:07.332998  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:07.333031  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:07.333048  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:07.415322  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:07.415367  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:09.958278  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:09.972983  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:09.973068  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:10.016768  447486 cri.go:89] found id: ""
	I1030 19:49:10.016801  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.016810  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:10.016818  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:10.016885  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:10.052958  447486 cri.go:89] found id: ""
	I1030 19:49:10.052992  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.053002  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:10.053009  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:10.053063  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:10.089062  447486 cri.go:89] found id: ""
	I1030 19:49:10.089094  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.089105  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:10.089120  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:10.089196  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:10.126084  447486 cri.go:89] found id: ""
	I1030 19:49:10.126114  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.126123  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:10.126130  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:10.126182  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:10.171670  447486 cri.go:89] found id: ""
	I1030 19:49:10.171702  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.171712  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:10.171720  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:10.171785  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:10.210243  447486 cri.go:89] found id: ""
	I1030 19:49:10.210285  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.210293  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:10.210300  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:10.210366  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:10.253012  447486 cri.go:89] found id: ""
	I1030 19:49:10.253056  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.253069  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:10.253078  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:10.253155  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:10.287948  447486 cri.go:89] found id: ""
	I1030 19:49:10.287999  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.288009  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:10.288021  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:10.288036  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:10.341362  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:10.341405  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:10.355769  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:10.355798  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:10.429469  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:10.429500  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:10.429518  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:10.509812  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:10.509851  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:08.488059  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:10.981606  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:12.982128  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:10.438903  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:12.939592  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:10.334087  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:12.336238  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:14.833365  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:13.053064  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:13.069063  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:13.069136  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:13.108457  447486 cri.go:89] found id: ""
	I1030 19:49:13.108492  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.108505  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:13.108513  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:13.108582  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:13.146481  447486 cri.go:89] found id: ""
	I1030 19:49:13.146523  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.146534  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:13.146542  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:13.146595  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:13.187088  447486 cri.go:89] found id: ""
	I1030 19:49:13.187118  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.187129  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:13.187137  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:13.187200  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:13.226913  447486 cri.go:89] found id: ""
	I1030 19:49:13.226948  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.226960  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:13.226968  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:13.227038  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:13.262632  447486 cri.go:89] found id: ""
	I1030 19:49:13.262661  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.262669  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:13.262676  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:13.262726  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:13.296877  447486 cri.go:89] found id: ""
	I1030 19:49:13.296906  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.296915  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:13.296922  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:13.296983  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:13.334907  447486 cri.go:89] found id: ""
	I1030 19:49:13.334939  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.334949  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:13.334956  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:13.335021  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:13.369386  447486 cri.go:89] found id: ""
	I1030 19:49:13.369430  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.369443  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:13.369456  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:13.369472  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:13.423095  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:13.423130  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:13.437039  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:13.437067  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:13.512619  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:13.512648  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:13.512663  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:13.596982  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:13.597023  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:16.135623  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:16.150407  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:16.150502  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:16.188771  447486 cri.go:89] found id: ""
	I1030 19:49:16.188811  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.188823  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:16.188832  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:16.188907  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:16.221554  447486 cri.go:89] found id: ""
	I1030 19:49:16.221589  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.221598  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:16.221604  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:16.221655  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:16.255567  447486 cri.go:89] found id: ""
	I1030 19:49:16.255595  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.255609  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:16.255616  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:16.255667  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:16.289820  447486 cri.go:89] found id: ""
	I1030 19:49:16.289855  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.289866  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:16.289874  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:16.289935  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:16.324415  447486 cri.go:89] found id: ""
	I1030 19:49:16.324449  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.324464  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:16.324471  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:16.324533  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:16.360789  447486 cri.go:89] found id: ""
	I1030 19:49:16.360825  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.360848  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:16.360856  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:16.360922  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:16.395066  447486 cri.go:89] found id: ""
	I1030 19:49:16.395093  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.395101  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:16.395107  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:16.395158  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:16.429220  447486 cri.go:89] found id: ""
	I1030 19:49:16.429261  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.429273  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:16.429286  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:16.429305  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:16.481209  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:16.481250  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:16.495353  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:16.495383  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:16.563979  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:16.564006  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:16.564022  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:16.645166  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:16.645205  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:15.481438  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:17.482846  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:15.440389  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:17.938724  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:16.833433  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:19.335773  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:19.185478  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:19.199270  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:19.199337  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:19.242426  447486 cri.go:89] found id: ""
	I1030 19:49:19.242455  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.242464  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:19.242474  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:19.242556  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:19.284061  447486 cri.go:89] found id: ""
	I1030 19:49:19.284092  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.284102  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:19.284108  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:19.284178  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:19.317373  447486 cri.go:89] found id: ""
	I1030 19:49:19.317407  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.317420  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:19.317428  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:19.317491  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:19.354222  447486 cri.go:89] found id: ""
	I1030 19:49:19.354250  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.354259  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:19.354267  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:19.354329  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:19.392948  447486 cri.go:89] found id: ""
	I1030 19:49:19.392980  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.392989  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:19.392996  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:19.393053  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:19.438023  447486 cri.go:89] found id: ""
	I1030 19:49:19.438055  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.438066  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:19.438074  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:19.438144  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:19.472179  447486 cri.go:89] found id: ""
	I1030 19:49:19.472208  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.472218  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:19.472226  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:19.472283  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:19.507164  447486 cri.go:89] found id: ""
	I1030 19:49:19.507195  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.507203  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:19.507213  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:19.507226  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:19.520898  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:19.520935  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:19.592204  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:19.592234  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:19.592263  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:19.668994  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:19.669045  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:19.707208  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:19.707240  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:19.981085  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:21.981344  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:19.939994  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:22.439696  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:21.833592  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:24.333379  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:22.263035  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:22.276999  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:22.277089  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:22.310969  447486 cri.go:89] found id: ""
	I1030 19:49:22.311006  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.311017  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:22.311026  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:22.311097  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:22.346282  447486 cri.go:89] found id: ""
	I1030 19:49:22.346311  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.346324  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:22.346332  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:22.346401  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:22.384324  447486 cri.go:89] found id: ""
	I1030 19:49:22.384354  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.384372  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:22.384381  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:22.384441  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:22.419465  447486 cri.go:89] found id: ""
	I1030 19:49:22.419498  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.419509  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:22.419518  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:22.419582  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:22.456161  447486 cri.go:89] found id: ""
	I1030 19:49:22.456196  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.456204  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:22.456211  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:22.456280  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:22.489075  447486 cri.go:89] found id: ""
	I1030 19:49:22.489102  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.489110  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:22.489119  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:22.489181  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:22.521752  447486 cri.go:89] found id: ""
	I1030 19:49:22.521780  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.521789  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:22.521796  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:22.521847  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:22.554946  447486 cri.go:89] found id: ""
	I1030 19:49:22.554985  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.554997  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:22.555010  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:22.555025  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:22.567877  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:22.567909  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:22.640062  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:22.640094  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:22.640110  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:22.714946  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:22.714985  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:22.755560  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:22.755595  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:25.306379  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:25.320883  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:25.320963  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:25.356737  447486 cri.go:89] found id: ""
	I1030 19:49:25.356771  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.356782  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:25.356791  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:25.356856  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:25.393371  447486 cri.go:89] found id: ""
	I1030 19:49:25.393409  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.393420  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:25.393429  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:25.393500  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:25.428379  447486 cri.go:89] found id: ""
	I1030 19:49:25.428411  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.428425  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:25.428433  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:25.428505  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:25.473516  447486 cri.go:89] found id: ""
	I1030 19:49:25.473551  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.473562  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:25.473572  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:25.473649  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:25.512508  447486 cri.go:89] found id: ""
	I1030 19:49:25.512535  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.512544  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:25.512550  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:25.512611  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:25.547646  447486 cri.go:89] found id: ""
	I1030 19:49:25.547691  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.547705  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:25.547713  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:25.547782  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:25.582314  447486 cri.go:89] found id: ""
	I1030 19:49:25.582347  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.582356  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:25.582364  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:25.582415  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:25.617305  447486 cri.go:89] found id: ""
	I1030 19:49:25.617343  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.617354  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:25.617367  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:25.617383  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:25.658245  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:25.658283  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:25.710559  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:25.710598  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:25.724961  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:25.724995  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:25.796252  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:25.796283  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:25.796300  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:23.984899  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:25.985999  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:24.939599  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:27.440032  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:26.334407  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:28.334588  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:28.374633  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:28.389468  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:28.389549  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:28.425747  447486 cri.go:89] found id: ""
	I1030 19:49:28.425780  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.425792  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:28.425800  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:28.425956  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:28.465221  447486 cri.go:89] found id: ""
	I1030 19:49:28.465258  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.465291  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:28.465303  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:28.465371  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:28.504184  447486 cri.go:89] found id: ""
	I1030 19:49:28.504217  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.504230  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:28.504240  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:28.504295  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:28.536198  447486 cri.go:89] found id: ""
	I1030 19:49:28.536234  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.536247  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:28.536255  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:28.536340  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:28.572194  447486 cri.go:89] found id: ""
	I1030 19:49:28.572228  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.572240  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:28.572248  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:28.572312  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:28.608794  447486 cri.go:89] found id: ""
	I1030 19:49:28.608826  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.608838  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:28.608846  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:28.608914  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:28.641664  447486 cri.go:89] found id: ""
	I1030 19:49:28.641698  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.641706  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:28.641714  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:28.641768  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:28.675756  447486 cri.go:89] found id: ""
	I1030 19:49:28.675790  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.675800  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:28.675812  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:28.675829  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:28.690203  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:28.690237  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:28.755647  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:28.755674  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:28.755690  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:28.837116  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:28.837149  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:28.877195  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:28.877232  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:31.428091  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:31.442537  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:31.442619  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:31.479911  447486 cri.go:89] found id: ""
	I1030 19:49:31.479942  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.479953  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:31.479961  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:31.480029  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:31.517015  447486 cri.go:89] found id: ""
	I1030 19:49:31.517042  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.517050  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:31.517056  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:31.517107  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:31.549858  447486 cri.go:89] found id: ""
	I1030 19:49:31.549891  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.549900  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:31.549907  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:31.549971  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:31.583490  447486 cri.go:89] found id: ""
	I1030 19:49:31.583524  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.583536  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:31.583551  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:31.583618  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:31.618270  447486 cri.go:89] found id: ""
	I1030 19:49:31.618308  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.618320  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:31.618328  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:31.618397  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:31.655416  447486 cri.go:89] found id: ""
	I1030 19:49:31.655448  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.655460  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:31.655468  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:31.655530  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:31.689708  447486 cri.go:89] found id: ""
	I1030 19:49:31.689740  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.689751  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:31.689759  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:31.689823  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:31.724179  447486 cri.go:89] found id: ""
	I1030 19:49:31.724208  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.724219  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:31.724233  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:31.724249  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:31.774900  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:31.774939  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:31.788606  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:31.788635  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:49:28.481673  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:30.980999  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:32.982429  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:29.938506  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:31.940276  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:30.834322  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:33.333091  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	W1030 19:49:31.861360  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:31.861385  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:31.861398  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:31.935856  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:31.935896  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:34.477313  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:34.491530  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:34.491597  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:34.525105  447486 cri.go:89] found id: ""
	I1030 19:49:34.525136  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.525145  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:34.525153  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:34.525215  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:34.560449  447486 cri.go:89] found id: ""
	I1030 19:49:34.560483  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.560495  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:34.560503  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:34.560558  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:34.595278  447486 cri.go:89] found id: ""
	I1030 19:49:34.595325  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.595335  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:34.595342  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:34.595395  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:34.628486  447486 cri.go:89] found id: ""
	I1030 19:49:34.628521  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.628533  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:34.628542  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:34.628614  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:34.663410  447486 cri.go:89] found id: ""
	I1030 19:49:34.663438  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.663448  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:34.663456  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:34.663520  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:34.697053  447486 cri.go:89] found id: ""
	I1030 19:49:34.697086  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.697099  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:34.697107  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:34.697178  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:34.730910  447486 cri.go:89] found id: ""
	I1030 19:49:34.730943  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.730955  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:34.730963  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:34.731034  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:34.765725  447486 cri.go:89] found id: ""
	I1030 19:49:34.765762  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.765774  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:34.765786  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:34.765807  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:34.802750  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:34.802786  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:34.853576  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:34.853614  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:34.868102  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:34.868139  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:34.939985  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:34.940015  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:34.940027  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:35.480658  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:37.481068  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:34.442576  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:36.940088  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:35.333400  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:37.334425  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:39.833330  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:37.516479  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:37.529386  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:37.529453  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:37.565889  447486 cri.go:89] found id: ""
	I1030 19:49:37.565923  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.565936  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:37.565945  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:37.566007  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:37.598771  447486 cri.go:89] found id: ""
	I1030 19:49:37.598801  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.598811  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:37.598817  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:37.598869  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:37.632678  447486 cri.go:89] found id: ""
	I1030 19:49:37.632705  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.632714  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:37.632735  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:37.632795  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:37.666642  447486 cri.go:89] found id: ""
	I1030 19:49:37.666673  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.666682  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:37.666688  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:37.666748  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:37.701203  447486 cri.go:89] found id: ""
	I1030 19:49:37.701233  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.701242  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:37.701249  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:37.701324  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:37.735614  447486 cri.go:89] found id: ""
	I1030 19:49:37.735649  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.735661  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:37.735669  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:37.735738  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:37.771381  447486 cri.go:89] found id: ""
	I1030 19:49:37.771418  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.771430  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:37.771439  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:37.771501  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:37.807870  447486 cri.go:89] found id: ""
	I1030 19:49:37.807908  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.807922  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:37.807935  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:37.807952  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:37.860334  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:37.860367  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:37.874340  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:37.874371  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:37.952874  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:37.952903  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:37.952916  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:38.045318  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:38.045356  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:40.591278  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:40.604970  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:40.605050  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:40.639839  447486 cri.go:89] found id: ""
	I1030 19:49:40.639869  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.639880  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:40.639889  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:40.639952  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:40.674046  447486 cri.go:89] found id: ""
	I1030 19:49:40.674077  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.674087  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:40.674093  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:40.674164  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:40.710759  447486 cri.go:89] found id: ""
	I1030 19:49:40.710794  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.710806  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:40.710815  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:40.710880  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:40.752439  447486 cri.go:89] found id: ""
	I1030 19:49:40.752471  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.752484  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:40.752493  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:40.752548  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:40.787985  447486 cri.go:89] found id: ""
	I1030 19:49:40.788021  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.788034  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:40.788042  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:40.788102  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:40.829282  447486 cri.go:89] found id: ""
	I1030 19:49:40.829320  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.829333  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:40.829341  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:40.829409  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:40.863911  447486 cri.go:89] found id: ""
	I1030 19:49:40.863944  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.863953  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:40.863959  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:40.864026  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:40.901239  447486 cri.go:89] found id: ""
	I1030 19:49:40.901275  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.901287  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:40.901300  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:40.901321  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:40.955283  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:40.955323  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:40.968733  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:40.968766  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:41.040213  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:41.040242  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:41.040256  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:41.125992  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:41.126035  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:39.481593  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:41.483403  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:39.441009  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:41.939182  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:41.834082  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:44.332428  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:43.667949  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:43.681633  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:43.681705  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:43.725038  447486 cri.go:89] found id: ""
	I1030 19:49:43.725076  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.725085  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:43.725091  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:43.725149  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:43.761438  447486 cri.go:89] found id: ""
	I1030 19:49:43.761473  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.761486  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:43.761494  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:43.761566  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:43.795299  447486 cri.go:89] found id: ""
	I1030 19:49:43.795335  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.795347  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:43.795355  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:43.795431  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:43.830545  447486 cri.go:89] found id: ""
	I1030 19:49:43.830582  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.830594  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:43.830601  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:43.830670  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:43.867632  447486 cri.go:89] found id: ""
	I1030 19:49:43.867664  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.867676  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:43.867684  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:43.867753  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:43.901315  447486 cri.go:89] found id: ""
	I1030 19:49:43.901346  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.901355  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:43.901361  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:43.901412  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:43.934928  447486 cri.go:89] found id: ""
	I1030 19:49:43.934963  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.934975  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:43.934983  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:43.935048  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:43.975407  447486 cri.go:89] found id: ""
	I1030 19:49:43.975441  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.975451  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:43.975472  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:43.975497  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:44.019281  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:44.019310  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:44.072363  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:44.072402  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:44.085508  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:44.085538  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:44.159634  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:44.159666  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:44.159682  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:46.739662  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:46.753190  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:46.753252  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:46.790167  447486 cri.go:89] found id: ""
	I1030 19:49:46.790202  447486 logs.go:282] 0 containers: []
	W1030 19:49:46.790211  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:46.790217  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:46.790272  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:43.988689  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:46.481139  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:43.939246  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:46.438847  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:46.333066  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:48.335463  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:46.828187  447486 cri.go:89] found id: ""
	I1030 19:49:46.828221  447486 logs.go:282] 0 containers: []
	W1030 19:49:46.828230  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:46.828237  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:46.828305  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:46.865499  447486 cri.go:89] found id: ""
	I1030 19:49:46.865539  447486 logs.go:282] 0 containers: []
	W1030 19:49:46.865551  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:46.865559  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:46.865612  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:46.899591  447486 cri.go:89] found id: ""
	I1030 19:49:46.899616  447486 logs.go:282] 0 containers: []
	W1030 19:49:46.899625  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:46.899632  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:46.899681  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:46.934818  447486 cri.go:89] found id: ""
	I1030 19:49:46.934850  447486 logs.go:282] 0 containers: []
	W1030 19:49:46.934860  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:46.934868  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:46.934933  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:46.971298  447486 cri.go:89] found id: ""
	I1030 19:49:46.971328  447486 logs.go:282] 0 containers: []
	W1030 19:49:46.971340  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:46.971349  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:46.971418  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:47.010783  447486 cri.go:89] found id: ""
	I1030 19:49:47.010814  447486 logs.go:282] 0 containers: []
	W1030 19:49:47.010825  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:47.010832  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:47.010896  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:47.044343  447486 cri.go:89] found id: ""
	I1030 19:49:47.044380  447486 logs.go:282] 0 containers: []
	W1030 19:49:47.044392  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:47.044405  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:47.044421  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:47.094425  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:47.094459  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:47.110339  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:47.110368  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:47.183262  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:47.183290  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:47.183305  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:47.262611  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:47.262651  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:49.808195  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:49.821889  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:49.821963  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:49.857296  447486 cri.go:89] found id: ""
	I1030 19:49:49.857339  447486 logs.go:282] 0 containers: []
	W1030 19:49:49.857351  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:49.857359  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:49.857413  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:49.892614  447486 cri.go:89] found id: ""
	I1030 19:49:49.892648  447486 logs.go:282] 0 containers: []
	W1030 19:49:49.892660  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:49.892668  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:49.892732  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:49.929835  447486 cri.go:89] found id: ""
	I1030 19:49:49.929862  447486 logs.go:282] 0 containers: []
	W1030 19:49:49.929871  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:49.929878  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:49.929940  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:49.965341  447486 cri.go:89] found id: ""
	I1030 19:49:49.965371  447486 logs.go:282] 0 containers: []
	W1030 19:49:49.965379  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:49.965392  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:49.965449  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:50.000134  447486 cri.go:89] found id: ""
	I1030 19:49:50.000165  447486 logs.go:282] 0 containers: []
	W1030 19:49:50.000177  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:50.000188  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:50.000259  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:50.033848  447486 cri.go:89] found id: ""
	I1030 19:49:50.033876  447486 logs.go:282] 0 containers: []
	W1030 19:49:50.033885  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:50.033891  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:50.033943  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:50.073315  447486 cri.go:89] found id: ""
	I1030 19:49:50.073344  447486 logs.go:282] 0 containers: []
	W1030 19:49:50.073354  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:50.073360  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:50.073421  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:50.114232  447486 cri.go:89] found id: ""
	I1030 19:49:50.114266  447486 logs.go:282] 0 containers: []
	W1030 19:49:50.114277  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:50.114290  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:50.114311  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:50.185407  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:50.185434  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:50.185448  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:50.270447  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:50.270494  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:50.308825  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:50.308855  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:50.363376  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:50.363417  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:48.982027  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:51.482972  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:48.439801  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:50.939120  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:50.833062  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:52.833132  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:54.834352  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:52.878475  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:52.892013  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:52.892088  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:52.928085  447486 cri.go:89] found id: ""
	I1030 19:49:52.928117  447486 logs.go:282] 0 containers: []
	W1030 19:49:52.928126  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:52.928132  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:52.928185  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:52.963377  447486 cri.go:89] found id: ""
	I1030 19:49:52.963413  447486 logs.go:282] 0 containers: []
	W1030 19:49:52.963426  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:52.963434  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:52.963493  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:53.000799  447486 cri.go:89] found id: ""
	I1030 19:49:53.000825  447486 logs.go:282] 0 containers: []
	W1030 19:49:53.000834  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:53.000840  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:53.000912  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:53.037429  447486 cri.go:89] found id: ""
	I1030 19:49:53.037463  447486 logs.go:282] 0 containers: []
	W1030 19:49:53.037472  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:53.037478  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:53.037534  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:53.072392  447486 cri.go:89] found id: ""
	I1030 19:49:53.072425  447486 logs.go:282] 0 containers: []
	W1030 19:49:53.072433  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:53.072446  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:53.072520  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:53.108925  447486 cri.go:89] found id: ""
	I1030 19:49:53.108957  447486 logs.go:282] 0 containers: []
	W1030 19:49:53.108970  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:53.108978  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:53.109050  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:53.145409  447486 cri.go:89] found id: ""
	I1030 19:49:53.145445  447486 logs.go:282] 0 containers: []
	W1030 19:49:53.145457  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:53.145466  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:53.145536  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:53.180756  447486 cri.go:89] found id: ""
	I1030 19:49:53.180784  447486 logs.go:282] 0 containers: []
	W1030 19:49:53.180793  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:53.180803  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:53.180817  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:53.234960  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:53.235010  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:53.249224  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:53.249255  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:53.313223  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:53.313245  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:53.313264  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:53.399715  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:53.399758  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:55.944332  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:55.961546  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:55.961616  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:56.020603  447486 cri.go:89] found id: ""
	I1030 19:49:56.020634  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.020647  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:56.020654  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:56.020725  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:56.065134  447486 cri.go:89] found id: ""
	I1030 19:49:56.065162  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.065170  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:56.065176  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:56.065239  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:56.101358  447486 cri.go:89] found id: ""
	I1030 19:49:56.101386  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.101396  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:56.101405  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:56.101473  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:56.135762  447486 cri.go:89] found id: ""
	I1030 19:49:56.135795  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.135805  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:56.135811  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:56.135863  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:56.171336  447486 cri.go:89] found id: ""
	I1030 19:49:56.171371  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.171383  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:56.171391  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:56.171461  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:56.205643  447486 cri.go:89] found id: ""
	I1030 19:49:56.205674  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.205685  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:56.205693  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:56.205759  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:56.240853  447486 cri.go:89] found id: ""
	I1030 19:49:56.240885  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.240894  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:56.240901  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:56.240973  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:56.276577  447486 cri.go:89] found id: ""
	I1030 19:49:56.276612  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.276623  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:56.276636  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:56.276651  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:56.328180  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:56.328220  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:56.341895  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:56.341923  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:56.414492  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:56.414523  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:56.414540  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:56.498439  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:56.498498  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:53.980916  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:55.983077  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:53.439070  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:54.940107  446887 pod_ready.go:82] duration metric: took 4m0.007533629s for pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace to be "Ready" ...
	E1030 19:49:54.940137  446887 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1030 19:49:54.940149  446887 pod_ready.go:39] duration metric: took 4m6.552777198s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:49:54.940170  446887 api_server.go:52] waiting for apiserver process to appear ...
	I1030 19:49:54.940206  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:54.940264  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:54.992682  446887 cri.go:89] found id: "549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f"
	I1030 19:49:54.992715  446887 cri.go:89] found id: ""
	I1030 19:49:54.992727  446887 logs.go:282] 1 containers: [549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f]
	I1030 19:49:54.992790  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:54.997251  446887 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:54.997313  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:55.034504  446887 cri.go:89] found id: "a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5"
	I1030 19:49:55.034542  446887 cri.go:89] found id: ""
	I1030 19:49:55.034552  446887 logs.go:282] 1 containers: [a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5]
	I1030 19:49:55.034616  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:55.039551  446887 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:55.039624  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:55.083294  446887 cri.go:89] found id: "87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2"
	I1030 19:49:55.083326  446887 cri.go:89] found id: ""
	I1030 19:49:55.083336  446887 logs.go:282] 1 containers: [87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2]
	I1030 19:49:55.083407  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:55.087866  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:55.087932  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:55.125250  446887 cri.go:89] found id: "0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf"
	I1030 19:49:55.125353  446887 cri.go:89] found id: ""
	I1030 19:49:55.125372  446887 logs.go:282] 1 containers: [0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf]
	I1030 19:49:55.125446  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:55.130688  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:55.130747  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:55.168792  446887 cri.go:89] found id: "2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6"
	I1030 19:49:55.168814  446887 cri.go:89] found id: ""
	I1030 19:49:55.168822  446887 logs.go:282] 1 containers: [2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6]
	I1030 19:49:55.168877  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:55.173360  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:55.173424  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:55.209566  446887 cri.go:89] found id: "ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34"
	I1030 19:49:55.209590  446887 cri.go:89] found id: ""
	I1030 19:49:55.209599  446887 logs.go:282] 1 containers: [ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34]
	I1030 19:49:55.209659  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:55.214190  446887 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:55.214263  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:55.257056  446887 cri.go:89] found id: ""
	I1030 19:49:55.257091  446887 logs.go:282] 0 containers: []
	W1030 19:49:55.257103  446887 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:55.257111  446887 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1030 19:49:55.257165  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1030 19:49:55.300194  446887 cri.go:89] found id: "60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034"
	I1030 19:49:55.300224  446887 cri.go:89] found id: "8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6"
	I1030 19:49:55.300229  446887 cri.go:89] found id: ""
	I1030 19:49:55.300238  446887 logs.go:282] 2 containers: [60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034 8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6]
	I1030 19:49:55.300290  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:55.304750  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:55.309249  446887 logs.go:123] Gathering logs for etcd [a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5] ...
	I1030 19:49:55.309276  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5"
	I1030 19:49:55.363959  446887 logs.go:123] Gathering logs for coredns [87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2] ...
	I1030 19:49:55.363994  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2"
	I1030 19:49:55.412667  446887 logs.go:123] Gathering logs for kube-scheduler [0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf] ...
	I1030 19:49:55.412703  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf"
	I1030 19:49:55.455381  446887 logs.go:123] Gathering logs for kube-proxy [2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6] ...
	I1030 19:49:55.455420  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6"
	I1030 19:49:55.494657  446887 logs.go:123] Gathering logs for kube-controller-manager [ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34] ...
	I1030 19:49:55.494689  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34"
	I1030 19:49:55.552740  446887 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:55.552773  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:55.627724  446887 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:55.627765  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:55.642263  446887 logs.go:123] Gathering logs for kube-apiserver [549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f] ...
	I1030 19:49:55.642300  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f"
	I1030 19:49:55.691079  446887 logs.go:123] Gathering logs for storage-provisioner [60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034] ...
	I1030 19:49:55.691111  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034"
	I1030 19:49:55.730111  446887 logs.go:123] Gathering logs for container status ...
	I1030 19:49:55.730151  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:55.785155  446887 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:55.785189  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:49:55.924592  446887 logs.go:123] Gathering logs for storage-provisioner [8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6] ...
	I1030 19:49:55.924633  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6"
	I1030 19:49:55.970229  446887 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:55.970267  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:57.333378  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:59.334394  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:59.039071  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:59.053648  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:59.053722  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:59.097620  447486 cri.go:89] found id: ""
	I1030 19:49:59.097650  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.097661  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:59.097669  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:59.097738  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:59.139136  447486 cri.go:89] found id: ""
	I1030 19:49:59.139176  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.139188  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:59.139199  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:59.139270  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:59.180322  447486 cri.go:89] found id: ""
	I1030 19:49:59.180361  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.180371  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:59.180384  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:59.180453  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:59.217374  447486 cri.go:89] found id: ""
	I1030 19:49:59.217422  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.217434  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:59.217443  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:59.217498  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:59.257857  447486 cri.go:89] found id: ""
	I1030 19:49:59.257884  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.257894  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:59.257901  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:59.257968  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:59.297679  447486 cri.go:89] found id: ""
	I1030 19:49:59.297713  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.297724  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:59.297733  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:59.297795  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:59.341469  447486 cri.go:89] found id: ""
	I1030 19:49:59.341499  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.341509  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:59.341517  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:59.341587  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:59.381677  447486 cri.go:89] found id: ""
	I1030 19:49:59.381704  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.381713  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:59.381723  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:59.381735  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:59.441396  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:59.441428  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:59.457105  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:59.457139  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:59.532023  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:59.532051  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:59.532064  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:59.621685  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:59.621720  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:58.481425  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:00.481912  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:02.482130  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:59.010542  446887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:59.027463  446887 api_server.go:72] duration metric: took 4m17.923507495s to wait for apiserver process to appear ...
	I1030 19:49:59.027488  446887 api_server.go:88] waiting for apiserver healthz status ...
	I1030 19:49:59.027524  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:59.027571  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:59.066364  446887 cri.go:89] found id: "549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f"
	I1030 19:49:59.066391  446887 cri.go:89] found id: ""
	I1030 19:49:59.066401  446887 logs.go:282] 1 containers: [549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f]
	I1030 19:49:59.066463  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.072454  446887 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:59.072535  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:59.118043  446887 cri.go:89] found id: "a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5"
	I1030 19:49:59.118072  446887 cri.go:89] found id: ""
	I1030 19:49:59.118081  446887 logs.go:282] 1 containers: [a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5]
	I1030 19:49:59.118142  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.122806  446887 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:59.122883  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:59.167475  446887 cri.go:89] found id: "87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2"
	I1030 19:49:59.167500  446887 cri.go:89] found id: ""
	I1030 19:49:59.167511  446887 logs.go:282] 1 containers: [87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2]
	I1030 19:49:59.167577  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.172181  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:59.172255  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:59.210384  446887 cri.go:89] found id: "0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf"
	I1030 19:49:59.210411  446887 cri.go:89] found id: ""
	I1030 19:49:59.210419  446887 logs.go:282] 1 containers: [0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf]
	I1030 19:49:59.210473  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.216032  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:59.216114  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:59.269770  446887 cri.go:89] found id: "2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6"
	I1030 19:49:59.269791  446887 cri.go:89] found id: ""
	I1030 19:49:59.269799  446887 logs.go:282] 1 containers: [2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6]
	I1030 19:49:59.269851  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.274161  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:59.274239  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:59.313907  446887 cri.go:89] found id: "ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34"
	I1030 19:49:59.313936  446887 cri.go:89] found id: ""
	I1030 19:49:59.313946  446887 logs.go:282] 1 containers: [ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34]
	I1030 19:49:59.314019  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.320687  446887 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:59.320766  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:59.367710  446887 cri.go:89] found id: ""
	I1030 19:49:59.367740  446887 logs.go:282] 0 containers: []
	W1030 19:49:59.367752  446887 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:59.367759  446887 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1030 19:49:59.367826  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1030 19:49:59.422716  446887 cri.go:89] found id: "60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034"
	I1030 19:49:59.422744  446887 cri.go:89] found id: "8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6"
	I1030 19:49:59.422750  446887 cri.go:89] found id: ""
	I1030 19:49:59.422763  446887 logs.go:282] 2 containers: [60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034 8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6]
	I1030 19:49:59.422827  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.428399  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.432404  446887 logs.go:123] Gathering logs for storage-provisioner [8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6] ...
	I1030 19:49:59.432429  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6"
	I1030 19:49:59.475798  446887 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:59.475839  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:59.548960  446887 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:59.548998  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:59.566839  446887 logs.go:123] Gathering logs for kube-proxy [2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6] ...
	I1030 19:49:59.566870  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6"
	I1030 19:49:59.606181  446887 logs.go:123] Gathering logs for kube-controller-manager [ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34] ...
	I1030 19:49:59.606210  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34"
	I1030 19:49:59.670134  446887 logs.go:123] Gathering logs for storage-provisioner [60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034] ...
	I1030 19:49:59.670170  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034"
	I1030 19:49:59.709224  446887 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:59.709253  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:00.132147  446887 logs.go:123] Gathering logs for container status ...
	I1030 19:50:00.132194  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:50:00.181124  446887 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:00.181171  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:50:00.306545  446887 logs.go:123] Gathering logs for kube-apiserver [549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f] ...
	I1030 19:50:00.306585  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f"
	I1030 19:50:00.352129  446887 logs.go:123] Gathering logs for etcd [a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5] ...
	I1030 19:50:00.352169  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5"
	I1030 19:50:00.398083  446887 logs.go:123] Gathering logs for coredns [87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2] ...
	I1030 19:50:00.398119  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2"
	I1030 19:50:00.439813  446887 logs.go:123] Gathering logs for kube-scheduler [0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf] ...
	I1030 19:50:00.439851  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf"
	I1030 19:50:02.978477  446887 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8444/healthz ...
	I1030 19:50:02.983776  446887 api_server.go:279] https://192.168.39.92:8444/healthz returned 200:
	ok
	I1030 19:50:02.984791  446887 api_server.go:141] control plane version: v1.31.2
	I1030 19:50:02.984814  446887 api_server.go:131] duration metric: took 3.957319689s to wait for apiserver health ...
	I1030 19:50:02.984822  446887 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 19:50:02.984844  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:50:02.984902  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:50:03.024715  446887 cri.go:89] found id: "549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f"
	I1030 19:50:03.024745  446887 cri.go:89] found id: ""
	I1030 19:50:03.024754  446887 logs.go:282] 1 containers: [549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f]
	I1030 19:50:03.024820  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.029121  446887 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:50:03.029188  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:50:03.064462  446887 cri.go:89] found id: "a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5"
	I1030 19:50:03.064489  446887 cri.go:89] found id: ""
	I1030 19:50:03.064500  446887 logs.go:282] 1 containers: [a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5]
	I1030 19:50:03.064564  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.068587  446887 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:50:03.068665  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:50:03.106880  446887 cri.go:89] found id: "87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2"
	I1030 19:50:03.106902  446887 cri.go:89] found id: ""
	I1030 19:50:03.106910  446887 logs.go:282] 1 containers: [87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2]
	I1030 19:50:03.106978  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.111313  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:50:03.111388  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:50:03.155761  446887 cri.go:89] found id: "0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf"
	I1030 19:50:03.155791  446887 cri.go:89] found id: ""
	I1030 19:50:03.155801  446887 logs.go:282] 1 containers: [0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf]
	I1030 19:50:03.155864  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.160616  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:50:03.160686  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:50:03.199028  446887 cri.go:89] found id: "2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6"
	I1030 19:50:03.199063  446887 cri.go:89] found id: ""
	I1030 19:50:03.199074  446887 logs.go:282] 1 containers: [2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6]
	I1030 19:50:03.199149  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.203348  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:50:03.203414  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:50:03.257739  446887 cri.go:89] found id: "ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34"
	I1030 19:50:03.257769  446887 cri.go:89] found id: ""
	I1030 19:50:03.257780  446887 logs.go:282] 1 containers: [ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34]
	I1030 19:50:03.257845  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.263357  446887 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:50:03.263417  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:50:03.309752  446887 cri.go:89] found id: ""
	I1030 19:50:03.309779  446887 logs.go:282] 0 containers: []
	W1030 19:50:03.309787  446887 logs.go:284] No container was found matching "kindnet"
	I1030 19:50:03.309793  446887 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1030 19:50:03.309843  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1030 19:50:03.351570  446887 cri.go:89] found id: "60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034"
	I1030 19:50:03.351593  446887 cri.go:89] found id: "8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6"
	I1030 19:50:03.351597  446887 cri.go:89] found id: ""
	I1030 19:50:03.351605  446887 logs.go:282] 2 containers: [60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034 8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6]
	I1030 19:50:03.351656  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.364414  446887 ssh_runner.go:195] Run: which crictl
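(The block above is the container-discovery pass: for each control-plane component it runs `sudo crictl ps -a --quiet --name=<component>` and records the returned container IDs; kindnet is correctly absent on this bridge-CNI cluster. A rough equivalent of that loop, shelling out to crictl the same way, is sketched below; findContainers is an illustrative name, not the actual cri.go code.)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// findContainers runs `crictl ps -a --quiet --name=<name>` for each
// component and collects the IDs it prints, mirroring the log above.
func findContainers(names []string) (map[string][]string, error) {
	found := make(map[string][]string)
	for _, name := range names {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, fmt.Errorf("crictl ps for %q: %w", name, err)
		}
		found[name] = strings.Fields(string(out))
	}
	return found, nil
}

func main() {
	ids, err := findContainers([]string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "storage-provisioner",
	})
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(ids)
}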
	I1030 19:50:03.369070  446887 logs.go:123] Gathering logs for dmesg ...
	I1030 19:50:03.369097  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:50:03.385129  446887 logs.go:123] Gathering logs for kube-apiserver [549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f] ...
	I1030 19:50:03.385161  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f"
	I1030 19:50:01.833117  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:04.334645  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:02.170623  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:50:02.184885  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:50:02.184975  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:50:02.223811  447486 cri.go:89] found id: ""
	I1030 19:50:02.223841  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.223849  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:50:02.223856  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:50:02.223908  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:50:02.260454  447486 cri.go:89] found id: ""
	I1030 19:50:02.260481  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.260491  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:50:02.260497  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:50:02.260554  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:50:02.296542  447486 cri.go:89] found id: ""
	I1030 19:50:02.296569  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.296577  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:50:02.296583  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:50:02.296631  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:50:02.332168  447486 cri.go:89] found id: ""
	I1030 19:50:02.332199  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.332211  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:50:02.332219  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:50:02.332287  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:50:02.366539  447486 cri.go:89] found id: ""
	I1030 19:50:02.366575  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.366586  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:50:02.366595  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:50:02.366659  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:50:02.401859  447486 cri.go:89] found id: ""
	I1030 19:50:02.401894  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.401915  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:50:02.401923  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:50:02.401991  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:50:02.446061  447486 cri.go:89] found id: ""
	I1030 19:50:02.446097  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.446108  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:50:02.446116  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:50:02.446181  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:50:02.488233  447486 cri.go:89] found id: ""
	I1030 19:50:02.488257  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.488265  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:50:02.488274  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:50:02.488294  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:50:02.544517  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:50:02.544554  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:50:02.558143  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:02.558179  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:50:02.628679  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:50:02.628706  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:50:02.628723  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:02.710246  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:50:02.710293  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
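(When no apiserver is reachable, as the 0-container listings and the localhost:8443 connection-refused error above show, the run falls back to the same fixed log bundle each cycle: the kubelet journal, kernel warnings and above from dmesg, the CRI-O journal, and a container-status listing. The sketch below reruns that bundle directly; it is only an illustration of the commands visible in the log, which additionally pipe dmesg through `tail -n 400`.)

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same commands gathered in the log above, minus minikube's wrapping.
	cmds := [][]string{
		{"journalctl", "-u", "kubelet", "-n", "400"},
		{"dmesg", "-PH", "-L=never", "--level", "warn,err,crit,alert,emerg"},
		{"journalctl", "-u", "crio", "-n", "400"},
		{"crictl", "ps", "-a"},
	}
	for _, c := range cmds {
		out, err := exec.Command("sudo", c...).CombinedOutput()
		fmt.Printf("=== sudo %v ===\n%s", c, out)
		if err != nil {
			fmt.Println("error:", err)
		}
	}
}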
	I1030 19:50:05.254846  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:50:05.269536  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:50:05.269599  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:50:05.303724  447486 cri.go:89] found id: ""
	I1030 19:50:05.303753  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.303761  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:50:05.303767  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:50:05.303819  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:50:05.339268  447486 cri.go:89] found id: ""
	I1030 19:50:05.339301  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.339322  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:50:05.339330  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:50:05.339405  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:50:05.375892  447486 cri.go:89] found id: ""
	I1030 19:50:05.375923  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.375930  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:50:05.375936  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:50:05.375988  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:50:05.413197  447486 cri.go:89] found id: ""
	I1030 19:50:05.413232  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.413243  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:50:05.413252  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:50:05.413329  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:50:05.452095  447486 cri.go:89] found id: ""
	I1030 19:50:05.452122  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.452130  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:50:05.452137  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:50:05.452193  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:50:05.490694  447486 cri.go:89] found id: ""
	I1030 19:50:05.490731  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.490744  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:50:05.490753  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:50:05.490808  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:50:05.523961  447486 cri.go:89] found id: ""
	I1030 19:50:05.523992  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.524001  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:50:05.524008  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:50:05.524060  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:50:05.558631  447486 cri.go:89] found id: ""
	I1030 19:50:05.558664  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.558673  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:50:05.558684  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:50:05.558699  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:50:05.596929  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:50:05.596958  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:50:05.647294  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:50:05.647332  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:50:05.661349  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:05.661377  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:50:05.730268  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:50:05.730299  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:50:05.730323  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:03.434675  446887 logs.go:123] Gathering logs for coredns [87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2] ...
	I1030 19:50:03.434708  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2"
	I1030 19:50:03.474767  446887 logs.go:123] Gathering logs for storage-provisioner [8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6] ...
	I1030 19:50:03.474803  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6"
	I1030 19:50:03.510301  446887 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:50:03.510331  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:03.887871  446887 logs.go:123] Gathering logs for storage-provisioner [60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034] ...
	I1030 19:50:03.887912  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034"
	I1030 19:50:03.930529  446887 logs.go:123] Gathering logs for container status ...
	I1030 19:50:03.930563  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:50:03.971064  446887 logs.go:123] Gathering logs for kubelet ...
	I1030 19:50:03.971102  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:50:04.040593  446887 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:04.040632  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:50:04.157377  446887 logs.go:123] Gathering logs for etcd [a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5] ...
	I1030 19:50:04.157418  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5"
	I1030 19:50:04.205779  446887 logs.go:123] Gathering logs for kube-scheduler [0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf] ...
	I1030 19:50:04.205816  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf"
	I1030 19:50:04.251434  446887 logs.go:123] Gathering logs for kube-proxy [2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6] ...
	I1030 19:50:04.251470  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6"
	I1030 19:50:04.288713  446887 logs.go:123] Gathering logs for kube-controller-manager [ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34] ...
	I1030 19:50:04.288747  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34"
	I1030 19:50:06.849298  446887 system_pods.go:59] 8 kube-system pods found
	I1030 19:50:06.849329  446887 system_pods.go:61] "coredns-7c65d6cfc9-9w8m8" [d285a845-e17e-4b87-837b-167dbd29f090] Running
	I1030 19:50:06.849334  446887 system_pods.go:61] "etcd-default-k8s-diff-port-768989" [400eedbb-ea1d-47a7-b342-ad29c8851552] Running
	I1030 19:50:06.849340  446887 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-768989" [40389878-99e8-4ae1-899b-a61fc3b7100b] Running
	I1030 19:50:06.849352  446887 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-768989" [4d4ca5b4-d36d-4f69-9712-b7544c4c9a43] Running
	I1030 19:50:06.849358  446887 system_pods.go:61] "kube-proxy-tsr5q" [60ad5830-1638-4209-a2b3-ef78b1df3d34] Running
	I1030 19:50:06.849367  446887 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-768989" [af4c921c-9ea2-4bf3-84c2-8748c3c210bb] Running
	I1030 19:50:06.849373  446887 system_pods.go:61] "metrics-server-6867b74b74-t85rd" [8e162c99-2a94-4340-abe9-f1b312980444] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:50:06.849377  446887 system_pods.go:61] "storage-provisioner" [76805df9-1fbf-468d-a909-3b7c4f889e11] Running
	I1030 19:50:06.849384  446887 system_pods.go:74] duration metric: took 3.864557334s to wait for pod list to return data ...
	I1030 19:50:06.849394  446887 default_sa.go:34] waiting for default service account to be created ...
	I1030 19:50:06.852015  446887 default_sa.go:45] found service account: "default"
	I1030 19:50:06.852037  446887 default_sa.go:55] duration metric: took 2.63686ms for default service account to be created ...
	I1030 19:50:06.852046  446887 system_pods.go:116] waiting for k8s-apps to be running ...
	I1030 19:50:06.856920  446887 system_pods.go:86] 8 kube-system pods found
	I1030 19:50:06.856945  446887 system_pods.go:89] "coredns-7c65d6cfc9-9w8m8" [d285a845-e17e-4b87-837b-167dbd29f090] Running
	I1030 19:50:06.856953  446887 system_pods.go:89] "etcd-default-k8s-diff-port-768989" [400eedbb-ea1d-47a7-b342-ad29c8851552] Running
	I1030 19:50:06.856959  446887 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-768989" [40389878-99e8-4ae1-899b-a61fc3b7100b] Running
	I1030 19:50:06.856966  446887 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-768989" [4d4ca5b4-d36d-4f69-9712-b7544c4c9a43] Running
	I1030 19:50:06.856972  446887 system_pods.go:89] "kube-proxy-tsr5q" [60ad5830-1638-4209-a2b3-ef78b1df3d34] Running
	I1030 19:50:06.856979  446887 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-768989" [af4c921c-9ea2-4bf3-84c2-8748c3c210bb] Running
	I1030 19:50:06.856996  446887 system_pods.go:89] "metrics-server-6867b74b74-t85rd" [8e162c99-2a94-4340-abe9-f1b312980444] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:50:06.857005  446887 system_pods.go:89] "storage-provisioner" [76805df9-1fbf-468d-a909-3b7c4f889e11] Running
	I1030 19:50:06.857015  446887 system_pods.go:126] duration metric: took 4.962745ms to wait for k8s-apps to be running ...
	I1030 19:50:06.857025  446887 system_svc.go:44] waiting for kubelet service to be running ....
	I1030 19:50:06.857086  446887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 19:50:06.874176  446887 system_svc.go:56] duration metric: took 17.144628ms WaitForService to wait for kubelet
	I1030 19:50:06.874206  446887 kubeadm.go:582] duration metric: took 4m25.770253397s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 19:50:06.874230  446887 node_conditions.go:102] verifying NodePressure condition ...
	I1030 19:50:06.876962  446887 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 19:50:06.876987  446887 node_conditions.go:123] node cpu capacity is 2
	I1030 19:50:06.877004  446887 node_conditions.go:105] duration metric: took 2.768174ms to run NodePressure ...
	I1030 19:50:06.877025  446887 start.go:241] waiting for startup goroutines ...
	I1030 19:50:06.877034  446887 start.go:246] waiting for cluster config update ...
	I1030 19:50:06.877070  446887 start.go:255] writing updated cluster config ...
	I1030 19:50:06.877355  446887 ssh_runner.go:195] Run: rm -f paused
	I1030 19:50:06.927147  446887 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1030 19:50:06.929103  446887 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-768989" cluster and "default" namespace by default
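(That closes out the default-k8s-diff-port-768989 run: kube-system pods listed, default service account found, node conditions checked, kubelet confirmed active, and kubectl pointed at the cluster. The same checks can be repeated by hand; the sketch below shells out to kubectl and is only an illustration, assuming the kubeconfig minikube just wrote is the active one.)

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and prints its combined output, for quick
// manual verification of the state the log above reports.
func run(args ...string) {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	fmt.Printf("$ %v\n%s", args, out)
	if err != nil {
		fmt.Println("error:", err)
	}
}

func main() {
	// All kube-system pods except metrics-server were Running above.
	run("kubectl", "get", "pods", "-n", "kube-system")
	// The default service account was found after about 2.6ms.
	run("kubectl", "get", "serviceaccount", "default")
	// Node capacity (17734596Ki ephemeral storage, 2 CPUs) comes from here.
	run("kubectl", "describe", "node", "default-k8s-diff-port-768989")
}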
	I1030 19:50:04.981923  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:06.982630  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:06.834029  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:08.834616  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:08.312167  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:50:08.327121  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:50:08.327206  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:50:08.364871  447486 cri.go:89] found id: ""
	I1030 19:50:08.364905  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.364916  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:50:08.364924  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:50:08.364982  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:50:08.399179  447486 cri.go:89] found id: ""
	I1030 19:50:08.399215  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.399225  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:50:08.399231  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:50:08.399286  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:50:08.434308  447486 cri.go:89] found id: ""
	I1030 19:50:08.434340  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.434350  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:50:08.434356  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:50:08.434409  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:50:08.477152  447486 cri.go:89] found id: ""
	I1030 19:50:08.477184  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.477193  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:50:08.477204  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:50:08.477274  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:50:08.513678  447486 cri.go:89] found id: ""
	I1030 19:50:08.513706  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.513716  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:50:08.513725  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:50:08.513789  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:50:08.551427  447486 cri.go:89] found id: ""
	I1030 19:50:08.551459  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.551478  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:50:08.551485  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:50:08.551550  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:50:08.584224  447486 cri.go:89] found id: ""
	I1030 19:50:08.584260  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.584272  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:50:08.584282  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:50:08.584351  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:50:08.617603  447486 cri.go:89] found id: ""
	I1030 19:50:08.617638  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.617649  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:50:08.617660  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:08.617674  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:50:08.694201  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:50:08.694229  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:50:08.694247  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:08.775457  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:50:08.775500  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:50:08.816452  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:50:08.816496  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:50:08.868077  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:50:08.868114  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:50:11.383130  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:50:11.397672  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:50:11.397758  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:50:11.431923  447486 cri.go:89] found id: ""
	I1030 19:50:11.431959  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.431971  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:50:11.431980  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:50:11.432050  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:50:11.466959  447486 cri.go:89] found id: ""
	I1030 19:50:11.466996  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.467009  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:50:11.467018  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:50:11.467093  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:50:11.506399  447486 cri.go:89] found id: ""
	I1030 19:50:11.506425  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.506437  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:50:11.506444  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:50:11.506529  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:50:11.538606  447486 cri.go:89] found id: ""
	I1030 19:50:11.538635  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.538643  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:50:11.538649  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:50:11.538700  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:50:11.573265  447486 cri.go:89] found id: ""
	I1030 19:50:11.573296  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.573304  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:50:11.573310  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:50:11.573364  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:50:11.608522  447486 cri.go:89] found id: ""
	I1030 19:50:11.608549  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.608558  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:50:11.608569  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:50:11.608629  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:50:11.639758  447486 cri.go:89] found id: ""
	I1030 19:50:11.639784  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.639792  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:50:11.639797  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:50:11.639846  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:50:11.673381  447486 cri.go:89] found id: ""
	I1030 19:50:11.673414  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.673426  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:50:11.673439  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:50:11.673454  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:50:11.727368  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:50:11.727414  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:50:11.741267  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:11.741301  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:50:09.481159  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:11.483339  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:11.334468  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:13.832615  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	W1030 19:50:11.808126  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:50:11.808158  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:50:11.808174  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:11.888676  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:50:11.888713  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:50:14.431637  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:50:14.445315  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:50:14.445392  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:50:14.482059  447486 cri.go:89] found id: ""
	I1030 19:50:14.482097  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.482110  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:50:14.482118  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:50:14.482186  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:50:14.520802  447486 cri.go:89] found id: ""
	I1030 19:50:14.520834  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.520843  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:50:14.520849  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:50:14.520900  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:50:14.559965  447486 cri.go:89] found id: ""
	I1030 19:50:14.559996  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.560006  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:50:14.560012  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:50:14.560062  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:50:14.601831  447486 cri.go:89] found id: ""
	I1030 19:50:14.601865  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.601875  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:50:14.601881  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:50:14.601932  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:50:14.635307  447486 cri.go:89] found id: ""
	I1030 19:50:14.635339  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.635348  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:50:14.635355  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:50:14.635418  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:50:14.668618  447486 cri.go:89] found id: ""
	I1030 19:50:14.668648  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.668657  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:50:14.668664  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:50:14.668726  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:50:14.702597  447486 cri.go:89] found id: ""
	I1030 19:50:14.702633  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.702644  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:50:14.702653  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:50:14.702715  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:50:14.736860  447486 cri.go:89] found id: ""
	I1030 19:50:14.736899  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.736911  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:50:14.736925  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:50:14.736942  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:14.822015  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:50:14.822060  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:50:14.860153  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:50:14.860195  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:50:14.912230  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:50:14.912269  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:50:14.927032  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:14.927067  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:50:14.994401  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:50:13.975124  446965 pod_ready.go:82] duration metric: took 4m0.000158179s for pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace to be "Ready" ...
	E1030 19:50:13.975173  446965 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace to be "Ready" (will not retry!)
	I1030 19:50:13.975201  446965 pod_ready.go:39] duration metric: took 4m14.686087419s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:50:13.975238  446965 kubeadm.go:597] duration metric: took 4m22.157012059s to restartPrimaryControlPlane
	W1030 19:50:13.975313  446965 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1030 19:50:13.975366  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
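(The 446965 run above gives metrics-server-6867b74b74-4x9t6 the full 4m0s Ready budget, gives up without retrying, and falls back to a forced kubeadm reset before reinitializing. A bounded wait like that can be reproduced with kubectl; the sketch below is illustrative only and is not how minikube's pod_ready poller is implemented.)

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Wait up to 4 minutes for the pod named in the log to become Ready.
	cmd := exec.Command("kubectl", "wait", "--namespace=kube-system",
		"--for=condition=Ready", "pod/metrics-server-6867b74b74-4x9t6",
		"--timeout=4m")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		// On timeout the run above resets the cluster and starts over.
		fmt.Println("wait failed:", err)
	}
}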
	I1030 19:50:15.833986  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:17.835468  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:17.494865  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:50:17.509934  447486 kubeadm.go:597] duration metric: took 4m3.074434895s to restartPrimaryControlPlane
	W1030 19:50:17.510016  447486 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1030 19:50:17.510051  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1030 19:50:18.496415  447486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 19:50:18.512328  447486 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 19:50:18.522293  447486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:50:18.532752  447486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:50:18.532772  447486 kubeadm.go:157] found existing configuration files:
	
	I1030 19:50:18.532823  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 19:50:18.542501  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:50:18.542560  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:50:18.552660  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 19:50:18.562585  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:50:18.562649  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:50:18.572321  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 19:50:18.581633  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:50:18.581689  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:50:18.592770  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 19:50:18.602414  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:50:18.602477  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
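(The grep/rm sequence above is the stale-config cleanup: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint and removed if the check fails, and here every file is simply absent, leaving kubeadm free to regenerate them. The loop below sketches the same idea with the endpoint and paths copied from the log; it is not the kubeadm.go implementation itself.)

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint or the file is missing.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%s stale or missing, removing\n", f)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}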
	I1030 19:50:18.612334  447486 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1030 19:50:18.844753  447486 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1030 19:50:20.333715  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:22.832817  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:24.833349  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:27.332723  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:29.335009  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:31.832584  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:33.834506  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:36.333902  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:38.833159  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:40.157555  446965 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.182163055s)
	I1030 19:50:40.157637  446965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 19:50:40.174413  446965 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 19:50:40.184817  446965 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:50:40.195446  446965 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:50:40.195475  446965 kubeadm.go:157] found existing configuration files:
	
	I1030 19:50:40.195527  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 19:50:40.205509  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:50:40.205575  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:50:40.217343  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 19:50:40.227666  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:50:40.227729  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:50:40.237594  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 19:50:40.247151  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:50:40.247209  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:50:40.256854  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 19:50:40.266306  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:50:40.266379  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1030 19:50:40.276409  446965 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1030 19:50:40.322080  446965 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1030 19:50:40.322174  446965 kubeadm.go:310] [preflight] Running pre-flight checks
	I1030 19:50:40.433056  446965 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1030 19:50:40.433251  446965 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1030 19:50:40.433390  446965 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1030 19:50:40.445085  446965 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1030 19:50:40.447192  446965 out.go:235]   - Generating certificates and keys ...
	I1030 19:50:40.447301  446965 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1030 19:50:40.447395  446965 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1030 19:50:40.447512  446965 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1030 19:50:40.447600  446965 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1030 19:50:40.447735  446965 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1030 19:50:40.447825  446965 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1030 19:50:40.447912  446965 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1030 19:50:40.447999  446965 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1030 19:50:40.448108  446965 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1030 19:50:40.448208  446965 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1030 19:50:40.448266  446965 kubeadm.go:310] [certs] Using the existing "sa" key
	I1030 19:50:40.448345  446965 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1030 19:50:40.590735  446965 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1030 19:50:40.714139  446965 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1030 19:50:40.808334  446965 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1030 19:50:40.940687  446965 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1030 19:50:41.085266  446965 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1030 19:50:41.085840  446965 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1030 19:50:41.088415  446965 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1030 19:50:41.090229  446965 out.go:235]   - Booting up control plane ...
	I1030 19:50:41.090349  446965 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1030 19:50:41.090466  446965 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1030 19:50:41.090573  446965 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1030 19:50:41.112262  446965 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1030 19:50:41.118809  446965 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1030 19:50:41.118919  446965 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1030 19:50:41.243915  446965 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1030 19:50:41.244093  446965 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1030 19:50:41.745362  446965 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.630697ms
	I1030 19:50:41.745513  446965 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1030 19:50:40.834005  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:42.834286  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:46.748431  446965 kubeadm.go:310] [api-check] The API server is healthy after 5.001587935s
	I1030 19:50:46.762271  446965 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1030 19:50:46.781785  446965 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1030 19:50:46.806338  446965 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1030 19:50:46.806613  446965 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-042402 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1030 19:50:46.819762  446965 kubeadm.go:310] [bootstrap-token] Using token: k711fn.1we2gia9o31jm3ip
	I1030 19:50:46.821026  446965 out.go:235]   - Configuring RBAC rules ...
	I1030 19:50:46.821137  446965 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1030 19:50:46.827537  446965 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1030 19:50:46.836653  446965 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1030 19:50:46.844891  446965 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1030 19:50:46.848423  446965 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1030 19:50:46.851674  446965 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1030 19:50:47.157946  446965 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1030 19:50:47.615774  446965 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1030 19:50:48.154429  446965 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1030 19:50:48.159547  446965 kubeadm.go:310] 
	I1030 19:50:48.159636  446965 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1030 19:50:48.159648  446965 kubeadm.go:310] 
	I1030 19:50:48.159762  446965 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1030 19:50:48.159776  446965 kubeadm.go:310] 
	I1030 19:50:48.159806  446965 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1030 19:50:48.159880  446965 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1030 19:50:48.159934  446965 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1030 19:50:48.159944  446965 kubeadm.go:310] 
	I1030 19:50:48.160029  446965 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1030 19:50:48.160040  446965 kubeadm.go:310] 
	I1030 19:50:48.160123  446965 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1030 19:50:48.160154  446965 kubeadm.go:310] 
	I1030 19:50:48.160242  446965 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1030 19:50:48.160351  446965 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1030 19:50:48.160440  446965 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1030 19:50:48.160450  446965 kubeadm.go:310] 
	I1030 19:50:48.160570  446965 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1030 19:50:48.160652  446965 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1030 19:50:48.160660  446965 kubeadm.go:310] 
	I1030 19:50:48.160729  446965 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token k711fn.1we2gia9o31jm3ip \
	I1030 19:50:48.160818  446965 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b2143b9018fc79be6f35b8981f64b262448bd09f7227405c8dafa29481e81491 \
	I1030 19:50:48.160838  446965 kubeadm.go:310] 	--control-plane 
	I1030 19:50:48.160846  446965 kubeadm.go:310] 
	I1030 19:50:48.160943  446965 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1030 19:50:48.160955  446965 kubeadm.go:310] 
	I1030 19:50:48.161065  446965 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token k711fn.1we2gia9o31jm3ip \
	I1030 19:50:48.161205  446965 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b2143b9018fc79be6f35b8981f64b262448bd09f7227405c8dafa29481e81491 
	I1030 19:50:48.162302  446965 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1030 19:50:48.162390  446965 cni.go:84] Creating CNI manager for ""
	I1030 19:50:48.162408  446965 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:50:48.164041  446965 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1030 19:50:45.333255  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:47.334686  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:49.832993  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:48.165318  446965 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1030 19:50:48.176702  446965 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1030 19:50:48.199681  446965 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1030 19:50:48.199776  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:48.199840  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-042402 minikube.k8s.io/updated_at=2024_10_30T19_50_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0 minikube.k8s.io/name=embed-certs-042402 minikube.k8s.io/primary=true
	I1030 19:50:48.226617  446965 ops.go:34] apiserver oom_adj: -16
	I1030 19:50:48.404620  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:48.905366  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:49.405663  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:49.904925  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:50.405082  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:50.905099  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:51.404860  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:51.905534  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:52.405432  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:52.905289  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:53.010770  446965 kubeadm.go:1113] duration metric: took 4.811061462s to wait for elevateKubeSystemPrivileges
	I1030 19:50:53.010818  446965 kubeadm.go:394] duration metric: took 5m1.251362756s to StartCluster
	I1030 19:50:53.010849  446965 settings.go:142] acquiring lock: {Name:mk3778f95b8c1775512fd8244c173f81fde2f8a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:50:53.010948  446965 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 19:50:53.012997  446965 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/kubeconfig: {Name:mkad9ac9beb9742fc1a420e6d4412db8b65962e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:50:53.013284  446965 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.235 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 19:50:53.013411  446965 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1030 19:50:53.013518  446965 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-042402"
	I1030 19:50:53.013539  446965 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-042402"
	I1030 19:50:53.013539  446965 config.go:182] Loaded profile config "embed-certs-042402": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	W1030 19:50:53.013550  446965 addons.go:243] addon storage-provisioner should already be in state true
	I1030 19:50:53.013600  446965 host.go:66] Checking if "embed-certs-042402" exists ...
	I1030 19:50:53.013546  446965 addons.go:69] Setting default-storageclass=true in profile "embed-certs-042402"
	I1030 19:50:53.013605  446965 addons.go:69] Setting metrics-server=true in profile "embed-certs-042402"
	I1030 19:50:53.013635  446965 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-042402"
	I1030 19:50:53.013642  446965 addons.go:234] Setting addon metrics-server=true in "embed-certs-042402"
	W1030 19:50:53.013650  446965 addons.go:243] addon metrics-server should already be in state true
	I1030 19:50:53.013675  446965 host.go:66] Checking if "embed-certs-042402" exists ...
	I1030 19:50:53.013947  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:50:53.014005  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:50:53.014010  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:50:53.014022  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:50:53.014058  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:50:53.014112  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:50:53.015033  446965 out.go:177] * Verifying Kubernetes components...
	I1030 19:50:53.016527  446965 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:50:53.030033  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33163
	I1030 19:50:53.030290  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40505
	I1030 19:50:53.030618  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:50:53.030733  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:50:53.031192  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:50:53.031209  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:50:53.031342  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:50:53.031356  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:50:53.031577  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:50:53.031773  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetState
	I1030 19:50:53.031801  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:50:53.032289  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43131
	I1030 19:50:53.032910  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:50:53.032953  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:50:53.033170  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:50:53.033684  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:50:53.033699  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:50:53.035082  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:50:53.035104  446965 addons.go:234] Setting addon default-storageclass=true in "embed-certs-042402"
	W1030 19:50:53.035124  446965 addons.go:243] addon default-storageclass should already be in state true
	I1030 19:50:53.035158  446965 host.go:66] Checking if "embed-certs-042402" exists ...
	I1030 19:50:53.035461  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:50:53.035492  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:50:53.036666  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:50:53.036697  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:50:53.054685  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41111
	I1030 19:50:53.055271  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:50:53.055621  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33133
	I1030 19:50:53.055762  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:50:53.055779  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:50:53.056073  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:50:53.056192  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:50:53.056410  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetState
	I1030 19:50:53.056665  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:50:53.056688  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:50:53.057099  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:50:53.057693  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:50:53.057741  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:50:53.058427  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:50:53.058756  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38161
	I1030 19:50:53.059684  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:50:53.060230  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:50:53.060253  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:50:53.060597  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:50:53.060806  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetState
	I1030 19:50:53.060880  446965 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:50:53.062367  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:50:53.062469  446965 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 19:50:53.062506  446965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1030 19:50:53.062526  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:50:53.063955  446965 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1030 19:50:53.065131  446965 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1030 19:50:53.065153  446965 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1030 19:50:53.065173  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:50:53.065987  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:50:53.066607  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:50:53.066640  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:50:53.066723  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:50:53.066956  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:50:53.067102  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:50:53.067254  446965 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa Username:docker}
	I1030 19:50:53.068475  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:50:53.068916  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:50:53.068939  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:50:53.069098  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:50:53.069288  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:50:53.069457  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:50:53.069625  446965 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa Username:docker}
	I1030 19:50:53.075920  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33291
	I1030 19:50:53.076341  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:50:53.076758  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:50:53.076779  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:50:53.077042  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:50:53.077238  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetState
	I1030 19:50:53.078809  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:50:53.079065  446965 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1030 19:50:53.079088  446965 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1030 19:50:53.079105  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:50:53.081873  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:50:53.082309  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:50:53.082339  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:50:53.082515  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:50:53.082705  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:50:53.082863  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:50:53.083061  446965 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa Username:docker}
	I1030 19:50:53.274313  446965 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:50:53.305281  446965 node_ready.go:35] waiting up to 6m0s for node "embed-certs-042402" to be "Ready" ...
	I1030 19:50:53.313184  446965 node_ready.go:49] node "embed-certs-042402" has status "Ready":"True"
	I1030 19:50:53.313217  446965 node_ready.go:38] duration metric: took 7.892097ms for node "embed-certs-042402" to be "Ready" ...
	I1030 19:50:53.313230  446965 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:50:53.321668  446965 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hvg4g" in "kube-system" namespace to be "Ready" ...
	I1030 19:50:53.406960  446965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 19:50:53.427287  446965 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1030 19:50:53.427324  446965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1030 19:50:53.475089  446965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1030 19:50:53.485983  446965 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1030 19:50:53.486013  446965 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1030 19:50:53.570871  446965 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1030 19:50:53.570904  446965 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1030 19:50:53.670898  446965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1030 19:50:54.545328  446965 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.138329529s)
	I1030 19:50:54.545384  446965 main.go:141] libmachine: Making call to close driver server
	I1030 19:50:54.545383  446965 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.070259573s)
	I1030 19:50:54.545399  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Close
	I1030 19:50:54.545426  446965 main.go:141] libmachine: Making call to close driver server
	I1030 19:50:54.545445  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Close
	I1030 19:50:54.545720  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Closing plugin on server side
	I1030 19:50:54.545732  446965 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:50:54.545748  446965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:50:54.545757  446965 main.go:141] libmachine: Making call to close driver server
	I1030 19:50:54.545761  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Closing plugin on server side
	I1030 19:50:54.545765  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Close
	I1030 19:50:54.545787  446965 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:50:54.545794  446965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:50:54.545802  446965 main.go:141] libmachine: Making call to close driver server
	I1030 19:50:54.545808  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Close
	I1030 19:50:54.546139  446965 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:50:54.546162  446965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:50:54.546465  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Closing plugin on server side
	I1030 19:50:54.546468  446965 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:50:54.546507  446965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:50:54.576380  446965 main.go:141] libmachine: Making call to close driver server
	I1030 19:50:54.576408  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Close
	I1030 19:50:54.576738  446965 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:50:54.576787  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Closing plugin on server side
	I1030 19:50:54.576804  446965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:50:54.703670  446965 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.032714873s)
	I1030 19:50:54.703724  446965 main.go:141] libmachine: Making call to close driver server
	I1030 19:50:54.703736  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Close
	I1030 19:50:54.704025  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Closing plugin on server side
	I1030 19:50:54.704059  446965 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:50:54.704076  446965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:50:54.704085  446965 main.go:141] libmachine: Making call to close driver server
	I1030 19:50:54.704104  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Close
	I1030 19:50:54.704350  446965 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:50:54.704362  446965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:50:54.704374  446965 addons.go:475] Verifying addon metrics-server=true in "embed-certs-042402"
	I1030 19:50:54.706330  446965 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1030 19:50:51.833654  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:54.333879  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:54.707723  446965 addons.go:510] duration metric: took 1.694322523s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1030 19:50:55.328470  446965 pod_ready.go:103] pod "coredns-7c65d6cfc9-hvg4g" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:57.828224  446965 pod_ready.go:103] pod "coredns-7c65d6cfc9-hvg4g" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:56.832967  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:58.833284  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:59.828636  446965 pod_ready.go:103] pod "coredns-7c65d6cfc9-hvg4g" in "kube-system" namespace has status "Ready":"False"
	I1030 19:51:01.828151  446965 pod_ready.go:93] pod "coredns-7c65d6cfc9-hvg4g" in "kube-system" namespace has status "Ready":"True"
	I1030 19:51:01.828178  446965 pod_ready.go:82] duration metric: took 8.506481998s for pod "coredns-7c65d6cfc9-hvg4g" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:01.828187  446965 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-pzbpd" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:01.833094  446965 pod_ready.go:93] pod "coredns-7c65d6cfc9-pzbpd" in "kube-system" namespace has status "Ready":"True"
	I1030 19:51:01.833121  446965 pod_ready.go:82] duration metric: took 4.926401ms for pod "coredns-7c65d6cfc9-pzbpd" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:01.833133  446965 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:01.837391  446965 pod_ready.go:93] pod "etcd-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:51:01.837410  446965 pod_ready.go:82] duration metric: took 4.27047ms for pod "etcd-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:01.837419  446965 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:02.344200  446965 pod_ready.go:93] pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:51:02.344224  446965 pod_ready.go:82] duration metric: took 506.798667ms for pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:02.344233  446965 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:02.349020  446965 pod_ready.go:93] pod "kube-controller-manager-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:51:02.349042  446965 pod_ready.go:82] duration metric: took 4.801739ms for pod "kube-controller-manager-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:02.349055  446965 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-m9zwz" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:02.626109  446965 pod_ready.go:93] pod "kube-proxy-m9zwz" in "kube-system" namespace has status "Ready":"True"
	I1030 19:51:02.626137  446965 pod_ready.go:82] duration metric: took 277.074567ms for pod "kube-proxy-m9zwz" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:02.626146  446965 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:03.027456  446965 pod_ready.go:93] pod "kube-scheduler-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:51:03.027482  446965 pod_ready.go:82] duration metric: took 401.329277ms for pod "kube-scheduler-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:03.027493  446965 pod_ready.go:39] duration metric: took 9.714247169s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:51:03.027513  446965 api_server.go:52] waiting for apiserver process to appear ...
	I1030 19:51:03.027579  446965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:51:03.043403  446965 api_server.go:72] duration metric: took 10.030078869s to wait for apiserver process to appear ...
	I1030 19:51:03.043431  446965 api_server.go:88] waiting for apiserver healthz status ...
	I1030 19:51:03.043456  446965 api_server.go:253] Checking apiserver healthz at https://192.168.61.235:8443/healthz ...
	I1030 19:51:03.048722  446965 api_server.go:279] https://192.168.61.235:8443/healthz returned 200:
	ok
	I1030 19:51:03.049572  446965 api_server.go:141] control plane version: v1.31.2
	I1030 19:51:03.049595  446965 api_server.go:131] duration metric: took 6.156928ms to wait for apiserver health ...
	I1030 19:51:03.049603  446965 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 19:51:03.233170  446965 system_pods.go:59] 9 kube-system pods found
	I1030 19:51:03.233205  446965 system_pods.go:61] "coredns-7c65d6cfc9-hvg4g" [f9e7e143-3e12-4c1a-9fb0-6f58a37f8a55] Running
	I1030 19:51:03.233212  446965 system_pods.go:61] "coredns-7c65d6cfc9-pzbpd" [8f486ff4-c665-42ec-9b98-15ea94e0ded8] Running
	I1030 19:51:03.233217  446965 system_pods.go:61] "etcd-embed-certs-042402" [5149c262-698b-46be-9b15-c7b5e93fd3cd] Running
	I1030 19:51:03.233222  446965 system_pods.go:61] "kube-apiserver-embed-certs-042402" [9a6c3f9a-4b89-4e8d-abb2-445399eea3eb] Running
	I1030 19:51:03.233227  446965 system_pods.go:61] "kube-controller-manager-embed-certs-042402" [cce89309-3c5f-4d17-8426-fdb8a02acdb0] Running
	I1030 19:51:03.233231  446965 system_pods.go:61] "kube-proxy-m9zwz" [e7b6fb8b-2287-47c0-b9c8-a3b1c3020894] Running
	I1030 19:51:03.233236  446965 system_pods.go:61] "kube-scheduler-embed-certs-042402" [e408e85c-ac6c-4afb-8391-935b7c579b4f] Running
	I1030 19:51:03.233247  446965 system_pods.go:61] "metrics-server-6867b74b74-6hrq4" [a5bb1778-0a28-4649-a2ac-a5f0e1b810de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:51:03.233255  446965 system_pods.go:61] "storage-provisioner" [729733b2-e703-4e9b-9d05-a2f0fb632149] Running
	I1030 19:51:03.233272  446965 system_pods.go:74] duration metric: took 183.660307ms to wait for pod list to return data ...
	I1030 19:51:03.233287  446965 default_sa.go:34] waiting for default service account to be created ...
	I1030 19:51:03.427520  446965 default_sa.go:45] found service account: "default"
	I1030 19:51:03.427550  446965 default_sa.go:55] duration metric: took 194.254547ms for default service account to be created ...
	I1030 19:51:03.427562  446965 system_pods.go:116] waiting for k8s-apps to be running ...
	I1030 19:51:03.629316  446965 system_pods.go:86] 9 kube-system pods found
	I1030 19:51:03.629351  446965 system_pods.go:89] "coredns-7c65d6cfc9-hvg4g" [f9e7e143-3e12-4c1a-9fb0-6f58a37f8a55] Running
	I1030 19:51:03.629364  446965 system_pods.go:89] "coredns-7c65d6cfc9-pzbpd" [8f486ff4-c665-42ec-9b98-15ea94e0ded8] Running
	I1030 19:51:03.629370  446965 system_pods.go:89] "etcd-embed-certs-042402" [5149c262-698b-46be-9b15-c7b5e93fd3cd] Running
	I1030 19:51:03.629377  446965 system_pods.go:89] "kube-apiserver-embed-certs-042402" [9a6c3f9a-4b89-4e8d-abb2-445399eea3eb] Running
	I1030 19:51:03.629381  446965 system_pods.go:89] "kube-controller-manager-embed-certs-042402" [cce89309-3c5f-4d17-8426-fdb8a02acdb0] Running
	I1030 19:51:03.629386  446965 system_pods.go:89] "kube-proxy-m9zwz" [e7b6fb8b-2287-47c0-b9c8-a3b1c3020894] Running
	I1030 19:51:03.629391  446965 system_pods.go:89] "kube-scheduler-embed-certs-042402" [e408e85c-ac6c-4afb-8391-935b7c579b4f] Running
	I1030 19:51:03.629399  446965 system_pods.go:89] "metrics-server-6867b74b74-6hrq4" [a5bb1778-0a28-4649-a2ac-a5f0e1b810de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:51:03.629405  446965 system_pods.go:89] "storage-provisioner" [729733b2-e703-4e9b-9d05-a2f0fb632149] Running
	I1030 19:51:03.629418  446965 system_pods.go:126] duration metric: took 201.847233ms to wait for k8s-apps to be running ...
	I1030 19:51:03.629432  446965 system_svc.go:44] waiting for kubelet service to be running ....
	I1030 19:51:03.629486  446965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 19:51:03.649120  446965 system_svc.go:56] duration metric: took 19.675022ms WaitForService to wait for kubelet
	I1030 19:51:03.649166  446965 kubeadm.go:582] duration metric: took 10.635844977s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 19:51:03.649192  446965 node_conditions.go:102] verifying NodePressure condition ...
	I1030 19:51:03.826763  446965 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 19:51:03.826790  446965 node_conditions.go:123] node cpu capacity is 2
	I1030 19:51:03.826803  446965 node_conditions.go:105] duration metric: took 177.604616ms to run NodePressure ...
	I1030 19:51:03.826819  446965 start.go:241] waiting for startup goroutines ...
	I1030 19:51:03.826827  446965 start.go:246] waiting for cluster config update ...
	I1030 19:51:03.826841  446965 start.go:255] writing updated cluster config ...
	I1030 19:51:03.827126  446965 ssh_runner.go:195] Run: rm -f paused
	I1030 19:51:03.877974  446965 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1030 19:51:03.880121  446965 out.go:177] * Done! kubectl is now configured to use "embed-certs-042402" cluster and "default" namespace by default
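	Editor's note (not part of the captured log): the api_server.go entries above poll https://<node-ip>:8443/healthz until it answers "200 ok" before the cluster is declared ready. Below is a minimal, hypothetical Go sketch of that kind of probe for readers reproducing the check by hand; the endpoint, timeout, and TLS handling are illustrative assumptions, not minikube's actual implementation.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns
	// HTTP 200 with body "ok", or the deadline expires. TLS verification is
	// skipped purely for illustration; a real client would trust the cluster CA.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
	}

	func main() {
		// Address taken from the log above; adjust for your own cluster.
		if err := waitForHealthz("https://192.168.61.235:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver is healthy")
	}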
	I1030 19:51:00.833673  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:51:03.333042  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:51:05.333431  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:51:07.833229  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:51:09.833772  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:51:10.833131  446736 pod_ready.go:82] duration metric: took 4m0.006526983s for pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace to be "Ready" ...
	E1030 19:51:10.833166  446736 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1030 19:51:10.833178  446736 pod_ready.go:39] duration metric: took 4m7.416690025s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:51:10.833200  446736 api_server.go:52] waiting for apiserver process to appear ...
	I1030 19:51:10.833239  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:51:10.833300  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:51:10.884016  446736 cri.go:89] found id: "990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4"
	I1030 19:51:10.884046  446736 cri.go:89] found id: ""
	I1030 19:51:10.884055  446736 logs.go:282] 1 containers: [990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4]
	I1030 19:51:10.884108  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:10.888789  446736 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:51:10.888857  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:51:10.931994  446736 cri.go:89] found id: "ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67"
	I1030 19:51:10.932037  446736 cri.go:89] found id: ""
	I1030 19:51:10.932047  446736 logs.go:282] 1 containers: [ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67]
	I1030 19:51:10.932097  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:10.937113  446736 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:51:10.937181  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:51:10.977951  446736 cri.go:89] found id: "1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd"
	I1030 19:51:10.977982  446736 cri.go:89] found id: ""
	I1030 19:51:10.977993  446736 logs.go:282] 1 containers: [1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd]
	I1030 19:51:10.978050  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:10.982791  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:51:10.982863  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:51:11.021741  446736 cri.go:89] found id: "2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889"
	I1030 19:51:11.021770  446736 cri.go:89] found id: ""
	I1030 19:51:11.021780  446736 logs.go:282] 1 containers: [2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889]
	I1030 19:51:11.021837  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:11.026590  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:51:11.026653  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:51:11.068839  446736 cri.go:89] found id: "0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9"
	I1030 19:51:11.068873  446736 cri.go:89] found id: ""
	I1030 19:51:11.068885  446736 logs.go:282] 1 containers: [0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9]
	I1030 19:51:11.068946  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:11.073103  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:51:11.073171  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:51:11.108404  446736 cri.go:89] found id: "cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c"
	I1030 19:51:11.108432  446736 cri.go:89] found id: ""
	I1030 19:51:11.108443  446736 logs.go:282] 1 containers: [cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c]
	I1030 19:51:11.108506  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:11.112903  446736 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:51:11.112974  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:51:11.153767  446736 cri.go:89] found id: ""
	I1030 19:51:11.153800  446736 logs.go:282] 0 containers: []
	W1030 19:51:11.153812  446736 logs.go:284] No container was found matching "kindnet"
	I1030 19:51:11.153821  446736 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1030 19:51:11.153892  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1030 19:51:11.194649  446736 cri.go:89] found id: "822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4"
	I1030 19:51:11.194681  446736 cri.go:89] found id: "de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef"
	I1030 19:51:11.194687  446736 cri.go:89] found id: ""
	I1030 19:51:11.194697  446736 logs.go:282] 2 containers: [822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4 de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef]
	I1030 19:51:11.194770  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:11.199037  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:11.202957  446736 logs.go:123] Gathering logs for etcd [ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67] ...
	I1030 19:51:11.202984  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67"
	I1030 19:51:11.246187  446736 logs.go:123] Gathering logs for kube-proxy [0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9] ...
	I1030 19:51:11.246220  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9"
	I1030 19:51:11.286608  446736 logs.go:123] Gathering logs for kube-controller-manager [cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c] ...
	I1030 19:51:11.286643  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c"
	I1030 19:51:11.339119  446736 logs.go:123] Gathering logs for storage-provisioner [822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4] ...
	I1030 19:51:11.339157  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4"
	I1030 19:51:11.376624  446736 logs.go:123] Gathering logs for storage-provisioner [de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef] ...
	I1030 19:51:11.376653  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef"
	I1030 19:51:11.411401  446736 logs.go:123] Gathering logs for kubelet ...
	I1030 19:51:11.411431  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:51:11.481668  446736 logs.go:123] Gathering logs for dmesg ...
	I1030 19:51:11.481710  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:51:11.497767  446736 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:51:11.497799  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:51:11.612001  446736 logs.go:123] Gathering logs for kube-apiserver [990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4] ...
	I1030 19:51:11.612034  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4"
	I1030 19:51:11.656553  446736 logs.go:123] Gathering logs for coredns [1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd] ...
	I1030 19:51:11.656589  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd"
	I1030 19:51:11.695387  446736 logs.go:123] Gathering logs for kube-scheduler [2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889] ...
	I1030 19:51:11.695428  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889"
	I1030 19:51:11.732386  446736 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:51:11.732419  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:51:12.217007  446736 logs.go:123] Gathering logs for container status ...
	I1030 19:51:12.217056  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
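	Editor's note (not part of the captured log): the logs.go pass above first resolves container IDs with "crictl ps -a --quiet --name=<component>" and then tails each one with "crictl logs --tail 400 <id>". A rough, hypothetical Go sketch of that two-step flow follows; it shells out to crictl directly and assumes the binary is on PATH and runnable without sudo.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// gatherComponentLogs mirrors the two-step pattern in the log above:
	// resolve container IDs for a named component, then tail each one's logs.
	func gatherComponentLogs(name string, tail int) (map[string]string, error) {
		out, err := exec.Command("crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, fmt.Errorf("listing %s containers: %w", name, err)
		}
		logs := make(map[string]string)
		for _, id := range strings.Fields(string(out)) {
			b, err := exec.Command("crictl", "logs", "--tail", fmt.Sprint(tail), id).CombinedOutput()
			if err != nil {
				return nil, fmt.Errorf("logs for %s: %w", id, err)
			}
			logs[id] = string(b)
		}
		return logs, nil
	}

	func main() {
		for _, component := range []string{"kube-apiserver", "etcd", "coredns"} {
			logs, err := gatherComponentLogs(component, 400)
			if err != nil {
				fmt.Println(err)
				continue
			}
			fmt.Printf("%s: %d container(s) with logs collected\n", component, len(logs))
		}
	}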
	I1030 19:51:14.769155  446736 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:51:14.787096  446736 api_server.go:72] duration metric: took 4m17.097569041s to wait for apiserver process to appear ...
	I1030 19:51:14.787128  446736 api_server.go:88] waiting for apiserver healthz status ...
	I1030 19:51:14.787176  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:51:14.787235  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:51:14.823506  446736 cri.go:89] found id: "990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4"
	I1030 19:51:14.823533  446736 cri.go:89] found id: ""
	I1030 19:51:14.823541  446736 logs.go:282] 1 containers: [990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4]
	I1030 19:51:14.823595  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:14.828125  446736 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:51:14.828214  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:51:14.867890  446736 cri.go:89] found id: "ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67"
	I1030 19:51:14.867914  446736 cri.go:89] found id: ""
	I1030 19:51:14.867922  446736 logs.go:282] 1 containers: [ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67]
	I1030 19:51:14.867970  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:14.873213  446736 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:51:14.873283  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:51:14.913068  446736 cri.go:89] found id: "1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd"
	I1030 19:51:14.913103  446736 cri.go:89] found id: ""
	I1030 19:51:14.913114  446736 logs.go:282] 1 containers: [1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd]
	I1030 19:51:14.913179  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:14.918380  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:51:14.918459  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:51:14.956150  446736 cri.go:89] found id: "2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889"
	I1030 19:51:14.956177  446736 cri.go:89] found id: ""
	I1030 19:51:14.956187  446736 logs.go:282] 1 containers: [2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889]
	I1030 19:51:14.956294  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:14.960781  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:51:14.960836  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:51:15.001804  446736 cri.go:89] found id: "0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9"
	I1030 19:51:15.001833  446736 cri.go:89] found id: ""
	I1030 19:51:15.001844  446736 logs.go:282] 1 containers: [0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9]
	I1030 19:51:15.001893  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:15.006341  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:51:15.006401  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:51:15.045202  446736 cri.go:89] found id: "cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c"
	I1030 19:51:15.045236  446736 cri.go:89] found id: ""
	I1030 19:51:15.045247  446736 logs.go:282] 1 containers: [cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c]
	I1030 19:51:15.045326  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:15.051967  446736 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:51:15.052031  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:51:15.091569  446736 cri.go:89] found id: ""
	I1030 19:51:15.091596  446736 logs.go:282] 0 containers: []
	W1030 19:51:15.091604  446736 logs.go:284] No container was found matching "kindnet"
	I1030 19:51:15.091611  446736 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1030 19:51:15.091668  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1030 19:51:15.135521  446736 cri.go:89] found id: "822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4"
	I1030 19:51:15.135551  446736 cri.go:89] found id: "de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef"
	I1030 19:51:15.135557  446736 cri.go:89] found id: ""
	I1030 19:51:15.135567  446736 logs.go:282] 2 containers: [822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4 de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef]
	I1030 19:51:15.135633  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:15.140215  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:15.145490  446736 logs.go:123] Gathering logs for etcd [ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67] ...
	I1030 19:51:15.145514  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67"
	I1030 19:51:15.205939  446736 logs.go:123] Gathering logs for coredns [1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd] ...
	I1030 19:51:15.205972  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd"
	I1030 19:51:15.240157  446736 logs.go:123] Gathering logs for kube-proxy [0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9] ...
	I1030 19:51:15.240194  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9"
	I1030 19:51:15.277168  446736 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:51:15.277200  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:51:15.708451  446736 logs.go:123] Gathering logs for container status ...
	I1030 19:51:15.708499  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:51:15.750544  446736 logs.go:123] Gathering logs for kubelet ...
	I1030 19:51:15.750577  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:51:15.820071  446736 logs.go:123] Gathering logs for kube-apiserver [990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4] ...
	I1030 19:51:15.820113  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4"
	I1030 19:51:15.870259  446736 logs.go:123] Gathering logs for kube-scheduler [2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889] ...
	I1030 19:51:15.870293  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889"
	I1030 19:51:15.919968  446736 logs.go:123] Gathering logs for kube-controller-manager [cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c] ...
	I1030 19:51:15.919998  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c"
	I1030 19:51:15.976948  446736 logs.go:123] Gathering logs for storage-provisioner [822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4] ...
	I1030 19:51:15.976992  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4"
	I1030 19:51:16.014451  446736 logs.go:123] Gathering logs for storage-provisioner [de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef] ...
	I1030 19:51:16.014498  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef"
	I1030 19:51:16.047766  446736 logs.go:123] Gathering logs for dmesg ...
	I1030 19:51:16.047806  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:51:16.070539  446736 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:51:16.070567  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:51:18.677834  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:51:18.682862  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 200:
	ok
	I1030 19:51:18.684023  446736 api_server.go:141] control plane version: v1.31.2
	I1030 19:51:18.684046  446736 api_server.go:131] duration metric: took 3.896911154s to wait for apiserver health ...
	I1030 19:51:18.684055  446736 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 19:51:18.684083  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:51:18.684130  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:51:18.724815  446736 cri.go:89] found id: "990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4"
	I1030 19:51:18.724848  446736 cri.go:89] found id: ""
	I1030 19:51:18.724860  446736 logs.go:282] 1 containers: [990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4]
	I1030 19:51:18.724928  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:18.729332  446736 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:51:18.729391  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:51:18.767614  446736 cri.go:89] found id: "ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67"
	I1030 19:51:18.767642  446736 cri.go:89] found id: ""
	I1030 19:51:18.767651  446736 logs.go:282] 1 containers: [ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67]
	I1030 19:51:18.767705  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:18.772420  446736 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:51:18.772525  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:51:18.811459  446736 cri.go:89] found id: "1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd"
	I1030 19:51:18.811489  446736 cri.go:89] found id: ""
	I1030 19:51:18.811501  446736 logs.go:282] 1 containers: [1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd]
	I1030 19:51:18.811563  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:18.816844  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:51:18.816906  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:51:18.853273  446736 cri.go:89] found id: "2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889"
	I1030 19:51:18.853299  446736 cri.go:89] found id: ""
	I1030 19:51:18.853308  446736 logs.go:282] 1 containers: [2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889]
	I1030 19:51:18.853362  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:18.857867  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:51:18.857946  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:51:18.907021  446736 cri.go:89] found id: "0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9"
	I1030 19:51:18.907052  446736 cri.go:89] found id: ""
	I1030 19:51:18.907063  446736 logs.go:282] 1 containers: [0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9]
	I1030 19:51:18.907126  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:18.913432  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:51:18.913506  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:51:18.978047  446736 cri.go:89] found id: "cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c"
	I1030 19:51:18.978072  446736 cri.go:89] found id: ""
	I1030 19:51:18.978083  446736 logs.go:282] 1 containers: [cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c]
	I1030 19:51:18.978150  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:18.983158  446736 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:51:18.983241  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:51:19.018992  446736 cri.go:89] found id: ""
	I1030 19:51:19.019018  446736 logs.go:282] 0 containers: []
	W1030 19:51:19.019026  446736 logs.go:284] No container was found matching "kindnet"
	I1030 19:51:19.019035  446736 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1030 19:51:19.019094  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1030 19:51:19.053821  446736 cri.go:89] found id: "822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4"
	I1030 19:51:19.053850  446736 cri.go:89] found id: "de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef"
	I1030 19:51:19.053855  446736 cri.go:89] found id: ""
	I1030 19:51:19.053862  446736 logs.go:282] 2 containers: [822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4 de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef]
	I1030 19:51:19.053922  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:19.063575  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:19.069254  446736 logs.go:123] Gathering logs for kubelet ...
	I1030 19:51:19.069283  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:51:19.139641  446736 logs.go:123] Gathering logs for kube-apiserver [990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4] ...
	I1030 19:51:19.139700  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4"
	I1030 19:51:19.198020  446736 logs.go:123] Gathering logs for kube-scheduler [2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889] ...
	I1030 19:51:19.198059  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889"
	I1030 19:51:19.239685  446736 logs.go:123] Gathering logs for kube-proxy [0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9] ...
	I1030 19:51:19.239727  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9"
	I1030 19:51:19.281510  446736 logs.go:123] Gathering logs for storage-provisioner [de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef] ...
	I1030 19:51:19.281545  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef"
	I1030 19:51:19.317842  446736 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:51:19.317872  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:51:19.659645  446736 logs.go:123] Gathering logs for dmesg ...
	I1030 19:51:19.659697  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:51:19.678087  446736 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:51:19.678121  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:51:19.778504  446736 logs.go:123] Gathering logs for etcd [ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67] ...
	I1030 19:51:19.778540  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67"
	I1030 19:51:19.826520  446736 logs.go:123] Gathering logs for coredns [1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd] ...
	I1030 19:51:19.826552  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd"
	I1030 19:51:19.863959  446736 logs.go:123] Gathering logs for kube-controller-manager [cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c] ...
	I1030 19:51:19.864011  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c"
	I1030 19:51:19.915777  446736 logs.go:123] Gathering logs for storage-provisioner [822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4] ...
	I1030 19:51:19.915814  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4"
	I1030 19:51:19.953036  446736 logs.go:123] Gathering logs for container status ...
	I1030 19:51:19.953069  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:51:22.502129  446736 system_pods.go:59] 8 kube-system pods found
	I1030 19:51:22.502162  446736 system_pods.go:61] "coredns-7c65d6cfc9-6cdl4" [95a0a01a-b0ea-4e0c-ac44-b7f7a486e4e0] Running
	I1030 19:51:22.502167  446736 system_pods.go:61] "etcd-no-preload-960512" [481cc4bc-fe2e-48fd-a486-574daf879096] Running
	I1030 19:51:22.502172  446736 system_pods.go:61] "kube-apiserver-no-preload-960512" [3662715f-dd2b-4522-aed3-3857e1104b8c] Running
	I1030 19:51:22.502175  446736 system_pods.go:61] "kube-controller-manager-no-preload-960512" [cb6045a8-1703-4693-90bf-81c7c990cb38] Running
	I1030 19:51:22.502179  446736 system_pods.go:61] "kube-proxy-fxqqc" [58db3fab-21e3-41b7-99f9-46ba3081db97] Running
	I1030 19:51:22.502182  446736 system_pods.go:61] "kube-scheduler-no-preload-960512" [6e293c25-ba96-4213-b107-236a1b828918] Running
	I1030 19:51:22.502188  446736 system_pods.go:61] "metrics-server-6867b74b74-72bb5" [7734d879-b974-42fd-9610-7e81ee6cbc13] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:51:22.502193  446736 system_pods.go:61] "storage-provisioner" [d4637a77-26ab-4013-a705-08317c00dd3b] Running
	I1030 19:51:22.502201  446736 system_pods.go:74] duration metric: took 3.818141259s to wait for pod list to return data ...
	I1030 19:51:22.502209  446736 default_sa.go:34] waiting for default service account to be created ...
	I1030 19:51:22.504541  446736 default_sa.go:45] found service account: "default"
	I1030 19:51:22.504562  446736 default_sa.go:55] duration metric: took 2.346763ms for default service account to be created ...
	I1030 19:51:22.504570  446736 system_pods.go:116] waiting for k8s-apps to be running ...
	I1030 19:51:22.509016  446736 system_pods.go:86] 8 kube-system pods found
	I1030 19:51:22.509039  446736 system_pods.go:89] "coredns-7c65d6cfc9-6cdl4" [95a0a01a-b0ea-4e0c-ac44-b7f7a486e4e0] Running
	I1030 19:51:22.509044  446736 system_pods.go:89] "etcd-no-preload-960512" [481cc4bc-fe2e-48fd-a486-574daf879096] Running
	I1030 19:51:22.509048  446736 system_pods.go:89] "kube-apiserver-no-preload-960512" [3662715f-dd2b-4522-aed3-3857e1104b8c] Running
	I1030 19:51:22.509052  446736 system_pods.go:89] "kube-controller-manager-no-preload-960512" [cb6045a8-1703-4693-90bf-81c7c990cb38] Running
	I1030 19:51:22.509055  446736 system_pods.go:89] "kube-proxy-fxqqc" [58db3fab-21e3-41b7-99f9-46ba3081db97] Running
	I1030 19:51:22.509058  446736 system_pods.go:89] "kube-scheduler-no-preload-960512" [6e293c25-ba96-4213-b107-236a1b828918] Running
	I1030 19:51:22.509101  446736 system_pods.go:89] "metrics-server-6867b74b74-72bb5" [7734d879-b974-42fd-9610-7e81ee6cbc13] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:51:22.509112  446736 system_pods.go:89] "storage-provisioner" [d4637a77-26ab-4013-a705-08317c00dd3b] Running
	I1030 19:51:22.509119  446736 system_pods.go:126] duration metric: took 4.544102ms to wait for k8s-apps to be running ...
	I1030 19:51:22.509125  446736 system_svc.go:44] waiting for kubelet service to be running ....
	I1030 19:51:22.509172  446736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 19:51:22.524883  446736 system_svc.go:56] duration metric: took 15.747977ms WaitForService to wait for kubelet
	I1030 19:51:22.524906  446736 kubeadm.go:582] duration metric: took 4m24.835384605s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 19:51:22.524929  446736 node_conditions.go:102] verifying NodePressure condition ...
	I1030 19:51:22.528315  446736 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 19:51:22.528334  446736 node_conditions.go:123] node cpu capacity is 2
	I1030 19:51:22.528345  446736 node_conditions.go:105] duration metric: took 3.411421ms to run NodePressure ...
	I1030 19:51:22.528357  446736 start.go:241] waiting for startup goroutines ...
	I1030 19:51:22.528364  446736 start.go:246] waiting for cluster config update ...
	I1030 19:51:22.528374  446736 start.go:255] writing updated cluster config ...
	I1030 19:51:22.528621  446736 ssh_runner.go:195] Run: rm -f paused
	I1030 19:51:22.577143  446736 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1030 19:51:22.580061  446736 out.go:177] * Done! kubectl is now configured to use "no-preload-960512" cluster and "default" namespace by default
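	(Editor's note: the readiness checks in the run above can be reproduced by hand against the same profile. A minimal sketch, assuming the "no-preload-960512" profile is still running and reachable at the address shown in the log; the curl -k flag is added here only because the apiserver serves a self-signed certificate, and the commands are otherwise the ones minikube itself ran.)

	    # query the apiserver health endpoint the log polls (expects "ok")
	    curl -k https://192.168.72.132:8443/healthz
	    # list the kube-apiserver container the log enumerates via CRI-O
	    minikube -p no-preload-960512 ssh -- sudo crictl ps -a --name=kube-apiserver
	    # repeat the kubelet service check performed at the end of the run
	    minikube -p no-preload-960512 ssh -- sudo systemctl is-active --quiet service kubelet && echo running
	    # the same kube-system pod listing minikube waits on before declaring the cluster ready
	    kubectl --context no-preload-960512 get pods -n kube-system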
	I1030 19:52:15.582907  447486 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1030 19:52:15.583009  447486 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1030 19:52:15.584345  447486 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1030 19:52:15.584419  447486 kubeadm.go:310] [preflight] Running pre-flight checks
	I1030 19:52:15.584522  447486 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1030 19:52:15.584659  447486 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1030 19:52:15.584763  447486 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1030 19:52:15.584827  447486 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1030 19:52:15.586931  447486 out.go:235]   - Generating certificates and keys ...
	I1030 19:52:15.587016  447486 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1030 19:52:15.587074  447486 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1030 19:52:15.587145  447486 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1030 19:52:15.587198  447486 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1030 19:52:15.587271  447486 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1030 19:52:15.587339  447486 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1030 19:52:15.587402  447486 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1030 19:52:15.587455  447486 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1030 19:52:15.587517  447486 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1030 19:52:15.587577  447486 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1030 19:52:15.587608  447486 kubeadm.go:310] [certs] Using the existing "sa" key
	I1030 19:52:15.587682  447486 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1030 19:52:15.587759  447486 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1030 19:52:15.587846  447486 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1030 19:52:15.587924  447486 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1030 19:52:15.587988  447486 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1030 19:52:15.588076  447486 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1030 19:52:15.588148  447486 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1030 19:52:15.588180  447486 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1030 19:52:15.588267  447486 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1030 19:52:15.589722  447486 out.go:235]   - Booting up control plane ...
	I1030 19:52:15.589834  447486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1030 19:52:15.589932  447486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1030 19:52:15.590014  447486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1030 19:52:15.590128  447486 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1030 19:52:15.590285  447486 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1030 19:52:15.590336  447486 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1030 19:52:15.590388  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:52:15.590560  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:52:15.590642  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:52:15.590842  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:52:15.590946  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:52:15.591155  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:52:15.591253  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:52:15.591513  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:52:15.591609  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:52:15.591841  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:52:15.591855  447486 kubeadm.go:310] 
	I1030 19:52:15.591900  447486 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1030 19:52:15.591956  447486 kubeadm.go:310] 		timed out waiting for the condition
	I1030 19:52:15.591966  447486 kubeadm.go:310] 
	I1030 19:52:15.592008  447486 kubeadm.go:310] 	This error is likely caused by:
	I1030 19:52:15.592051  447486 kubeadm.go:310] 		- The kubelet is not running
	I1030 19:52:15.592192  447486 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1030 19:52:15.592204  447486 kubeadm.go:310] 
	I1030 19:52:15.592318  447486 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1030 19:52:15.592360  447486 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1030 19:52:15.592391  447486 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1030 19:52:15.592397  447486 kubeadm.go:310] 
	I1030 19:52:15.592511  447486 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1030 19:52:15.592592  447486 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1030 19:52:15.592600  447486 kubeadm.go:310] 
	I1030 19:52:15.592733  447486 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1030 19:52:15.592850  447486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1030 19:52:15.592959  447486 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1030 19:52:15.593059  447486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1030 19:52:15.593138  447486 kubeadm.go:310] 
	W1030 19:52:15.593236  447486 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1030 19:52:15.593289  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1030 19:52:16.049810  447486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 19:52:16.065820  447486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:52:16.076166  447486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:52:16.076192  447486 kubeadm.go:157] found existing configuration files:
	
	I1030 19:52:16.076241  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 19:52:16.085309  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:52:16.085380  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:52:16.094868  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 19:52:16.104343  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:52:16.104395  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:52:16.113939  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 19:52:16.122836  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:52:16.122885  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:52:16.132083  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 19:52:16.141441  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:52:16.141487  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1030 19:52:16.150710  447486 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1030 19:52:16.222070  447486 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1030 19:52:16.222183  447486 kubeadm.go:310] [preflight] Running pre-flight checks
	I1030 19:52:16.366061  447486 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1030 19:52:16.366194  447486 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1030 19:52:16.366352  447486 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1030 19:52:16.541086  447486 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1030 19:52:16.543200  447486 out.go:235]   - Generating certificates and keys ...
	I1030 19:52:16.543303  447486 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1030 19:52:16.543398  447486 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1030 19:52:16.543523  447486 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1030 19:52:16.543625  447486 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1030 19:52:16.543749  447486 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1030 19:52:16.543848  447486 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1030 19:52:16.543942  447486 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1030 19:52:16.544020  447486 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1030 19:52:16.544096  447486 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1030 19:52:16.544193  447486 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1030 19:52:16.544252  447486 kubeadm.go:310] [certs] Using the existing "sa" key
	I1030 19:52:16.544343  447486 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1030 19:52:16.637454  447486 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1030 19:52:16.829430  447486 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1030 19:52:16.985259  447486 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1030 19:52:17.072312  447486 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1030 19:52:17.092511  447486 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1030 19:52:17.093595  447486 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1030 19:52:17.093654  447486 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1030 19:52:17.228039  447486 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1030 19:52:17.229647  447486 out.go:235]   - Booting up control plane ...
	I1030 19:52:17.229766  447486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1030 19:52:17.237333  447486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1030 19:52:17.239644  447486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1030 19:52:17.239774  447486 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1030 19:52:17.241037  447486 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1030 19:52:57.243167  447486 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1030 19:52:57.243769  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:52:57.244072  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:53:02.244240  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:53:02.244563  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:53:12.244991  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:53:12.245293  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:53:32.246428  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:53:32.246697  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:54:12.247834  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:54:12.248150  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:54:12.248173  447486 kubeadm.go:310] 
	I1030 19:54:12.248226  447486 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1030 19:54:12.248308  447486 kubeadm.go:310] 		timed out waiting for the condition
	I1030 19:54:12.248336  447486 kubeadm.go:310] 
	I1030 19:54:12.248386  447486 kubeadm.go:310] 	This error is likely caused by:
	I1030 19:54:12.248449  447486 kubeadm.go:310] 		- The kubelet is not running
	I1030 19:54:12.248598  447486 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1030 19:54:12.248609  447486 kubeadm.go:310] 
	I1030 19:54:12.248747  447486 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1030 19:54:12.248811  447486 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1030 19:54:12.248867  447486 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1030 19:54:12.248876  447486 kubeadm.go:310] 
	I1030 19:54:12.249013  447486 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1030 19:54:12.249111  447486 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1030 19:54:12.249129  447486 kubeadm.go:310] 
	I1030 19:54:12.249280  447486 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1030 19:54:12.249447  447486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1030 19:54:12.249564  447486 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1030 19:54:12.249662  447486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1030 19:54:12.249708  447486 kubeadm.go:310] 
	I1030 19:54:12.249878  447486 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1030 19:54:12.250015  447486 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1030 19:54:12.250208  447486 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1030 19:54:12.250221  447486 kubeadm.go:394] duration metric: took 7m57.874179721s to StartCluster
	I1030 19:54:12.250311  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:54:12.250399  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:54:12.292692  447486 cri.go:89] found id: ""
	I1030 19:54:12.292749  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.292760  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:54:12.292770  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:54:12.292840  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:54:12.329792  447486 cri.go:89] found id: ""
	I1030 19:54:12.329825  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.329835  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:54:12.329843  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:54:12.329905  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:54:12.364661  447486 cri.go:89] found id: ""
	I1030 19:54:12.364693  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.364702  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:54:12.364709  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:54:12.364764  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:54:12.400842  447486 cri.go:89] found id: ""
	I1030 19:54:12.400870  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.400878  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:54:12.400885  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:54:12.400943  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:54:12.440135  447486 cri.go:89] found id: ""
	I1030 19:54:12.440164  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.440172  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:54:12.440178  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:54:12.440228  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:54:12.476365  447486 cri.go:89] found id: ""
	I1030 19:54:12.476403  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.476416  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:54:12.476425  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:54:12.476503  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:54:12.519669  447486 cri.go:89] found id: ""
	I1030 19:54:12.519702  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.519715  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:54:12.519724  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:54:12.519791  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:54:12.554180  447486 cri.go:89] found id: ""
	I1030 19:54:12.554218  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.554230  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:54:12.554244  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:54:12.554261  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:54:12.669617  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:54:12.669660  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:54:12.708361  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:54:12.708392  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:54:12.763103  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:54:12.763145  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:54:12.778676  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:54:12.778712  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:54:12.865694  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1030 19:54:12.865732  447486 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1030 19:54:12.865797  447486 out.go:270] * 
	W1030 19:54:12.865908  447486 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1030 19:54:12.865929  447486 out.go:270] * 
	W1030 19:54:12.867124  447486 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 19:54:12.871111  447486 out.go:201] 
	W1030 19:54:12.872534  447486 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1030 19:54:12.872591  447486 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1030 19:54:12.872616  447486 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1030 19:54:12.874145  447486 out.go:201] 
	
	
	==> CRI-O <==
	Oct 30 19:54:14 old-k8s-version-516975 crio[630]: time="2024-10-30 19:54:14.737353517Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318054737334541,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ca6f1f84-e209-47d8-b8d0-401019dbc4a4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 19:54:14 old-k8s-version-516975 crio[630]: time="2024-10-30 19:54:14.737865375Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aa668ef9-8a15-4739-9150-549f728a5696 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 19:54:14 old-k8s-version-516975 crio[630]: time="2024-10-30 19:54:14.738015243Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aa668ef9-8a15-4739-9150-549f728a5696 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 19:54:14 old-k8s-version-516975 crio[630]: time="2024-10-30 19:54:14.738104742Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=aa668ef9-8a15-4739-9150-549f728a5696 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 19:54:14 old-k8s-version-516975 crio[630]: time="2024-10-30 19:54:14.773637439Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7779dba7-d1ce-4d89-92a3-e2e89eb7df4c name=/runtime.v1.RuntimeService/Version
	Oct 30 19:54:14 old-k8s-version-516975 crio[630]: time="2024-10-30 19:54:14.773738908Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7779dba7-d1ce-4d89-92a3-e2e89eb7df4c name=/runtime.v1.RuntimeService/Version
	Oct 30 19:54:14 old-k8s-version-516975 crio[630]: time="2024-10-30 19:54:14.775313423Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ca7b2cea-8f31-4b35-95ee-944bad43037a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 19:54:14 old-k8s-version-516975 crio[630]: time="2024-10-30 19:54:14.775917962Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318054775876147,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ca7b2cea-8f31-4b35-95ee-944bad43037a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 19:54:14 old-k8s-version-516975 crio[630]: time="2024-10-30 19:54:14.776459344Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=874900f9-7c50-4e5c-bbb4-9f2668e52555 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 19:54:14 old-k8s-version-516975 crio[630]: time="2024-10-30 19:54:14.776546426Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=874900f9-7c50-4e5c-bbb4-9f2668e52555 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 19:54:14 old-k8s-version-516975 crio[630]: time="2024-10-30 19:54:14.776593836Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=874900f9-7c50-4e5c-bbb4-9f2668e52555 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 19:54:14 old-k8s-version-516975 crio[630]: time="2024-10-30 19:54:14.809983986Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3a6768f9-c061-4939-83dd-0cad0317d6a3 name=/runtime.v1.RuntimeService/Version
	Oct 30 19:54:14 old-k8s-version-516975 crio[630]: time="2024-10-30 19:54:14.810057211Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3a6768f9-c061-4939-83dd-0cad0317d6a3 name=/runtime.v1.RuntimeService/Version
	Oct 30 19:54:14 old-k8s-version-516975 crio[630]: time="2024-10-30 19:54:14.811462369Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8eef21fa-15ad-4a06-9564-d28e3b4ee803 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 19:54:14 old-k8s-version-516975 crio[630]: time="2024-10-30 19:54:14.811895621Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318054811872653,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8eef21fa-15ad-4a06-9564-d28e3b4ee803 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 19:54:14 old-k8s-version-516975 crio[630]: time="2024-10-30 19:54:14.812507493Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=966c819f-5a08-4ee0-b807-a3a0be817a27 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 19:54:14 old-k8s-version-516975 crio[630]: time="2024-10-30 19:54:14.812555358Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=966c819f-5a08-4ee0-b807-a3a0be817a27 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 19:54:14 old-k8s-version-516975 crio[630]: time="2024-10-30 19:54:14.812584753Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=966c819f-5a08-4ee0-b807-a3a0be817a27 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 19:54:14 old-k8s-version-516975 crio[630]: time="2024-10-30 19:54:14.846691511Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e1d761fa-5468-44c1-8378-8c5df2d85a3c name=/runtime.v1.RuntimeService/Version
	Oct 30 19:54:14 old-k8s-version-516975 crio[630]: time="2024-10-30 19:54:14.846822454Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e1d761fa-5468-44c1-8378-8c5df2d85a3c name=/runtime.v1.RuntimeService/Version
	Oct 30 19:54:14 old-k8s-version-516975 crio[630]: time="2024-10-30 19:54:14.848162667Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2bf12cdc-a65e-4227-9aac-29c83f2e6d3d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 19:54:14 old-k8s-version-516975 crio[630]: time="2024-10-30 19:54:14.848514530Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318054848484657,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2bf12cdc-a65e-4227-9aac-29c83f2e6d3d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 19:54:14 old-k8s-version-516975 crio[630]: time="2024-10-30 19:54:14.849069981Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6cbc7aea-cda1-413e-a61c-d72e32d13607 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 19:54:14 old-k8s-version-516975 crio[630]: time="2024-10-30 19:54:14.849119844Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6cbc7aea-cda1-413e-a61c-d72e32d13607 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 19:54:14 old-k8s-version-516975 crio[630]: time="2024-10-30 19:54:14.849152029Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6cbc7aea-cda1-413e-a61c-d72e32d13607 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct30 19:45] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055573] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039872] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.137495] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.588302] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.607660] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct30 19:46] systemd-fstab-generator[558]: Ignoring "noauto" option for root device
	[  +0.060505] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061237] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.181319] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.145340] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.258638] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +7.609500] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.068837] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.029529] systemd-fstab-generator[1006]: Ignoring "noauto" option for root device
	[ +12.374948] kauditd_printk_skb: 46 callbacks suppressed
	[Oct30 19:50] systemd-fstab-generator[5076]: Ignoring "noauto" option for root device
	[Oct30 19:52] systemd-fstab-generator[5354]: Ignoring "noauto" option for root device
	[  +0.064946] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:54:15 up 8 min,  0 users,  load average: 0.16, 0.10, 0.04
	Linux old-k8s-version-516975 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Oct 30 19:54:12 old-k8s-version-516975 kubelet[5535]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001000c0, 0xc000bffa70)
	Oct 30 19:54:12 old-k8s-version-516975 kubelet[5535]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Oct 30 19:54:12 old-k8s-version-516975 kubelet[5535]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Oct 30 19:54:12 old-k8s-version-516975 kubelet[5535]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Oct 30 19:54:12 old-k8s-version-516975 kubelet[5535]: goroutine 160 [select]:
	Oct 30 19:54:12 old-k8s-version-516975 kubelet[5535]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000c67ef0, 0x4f0ac20, 0xc000c0f090, 0x1, 0xc0001000c0)
	Oct 30 19:54:12 old-k8s-version-516975 kubelet[5535]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Oct 30 19:54:12 old-k8s-version-516975 kubelet[5535]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000bbc2a0, 0xc0001000c0)
	Oct 30 19:54:12 old-k8s-version-516975 kubelet[5535]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Oct 30 19:54:12 old-k8s-version-516975 kubelet[5535]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Oct 30 19:54:12 old-k8s-version-516975 kubelet[5535]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Oct 30 19:54:12 old-k8s-version-516975 kubelet[5535]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000be4800, 0xc000bfd920)
	Oct 30 19:54:12 old-k8s-version-516975 kubelet[5535]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Oct 30 19:54:12 old-k8s-version-516975 kubelet[5535]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Oct 30 19:54:12 old-k8s-version-516975 kubelet[5535]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Oct 30 19:54:12 old-k8s-version-516975 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Oct 30 19:54:12 old-k8s-version-516975 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Oct 30 19:54:12 old-k8s-version-516975 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Oct 30 19:54:12 old-k8s-version-516975 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 30 19:54:12 old-k8s-version-516975 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 30 19:54:12 old-k8s-version-516975 kubelet[5600]: I1030 19:54:12.954167    5600 server.go:416] Version: v1.20.0
	Oct 30 19:54:12 old-k8s-version-516975 kubelet[5600]: I1030 19:54:12.954503    5600 server.go:837] Client rotation is on, will bootstrap in background
	Oct 30 19:54:12 old-k8s-version-516975 kubelet[5600]: I1030 19:54:12.957066    5600 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Oct 30 19:54:12 old-k8s-version-516975 kubelet[5600]: W1030 19:54:12.958385    5600 manager.go:159] Cannot detect current cgroup on cgroup v2
	Oct 30 19:54:12 old-k8s-version-516975 kubelet[5600]: I1030 19:54:12.958491    5600 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
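The empty "container status" table in the dump above is consistent with the kubelet never creating any control-plane containers. The kubeadm output in the same dump suggests inspecting the runtime directly with crictl; a manual spot-check against this profile could look like the following, where the ssh step is an assumption (it was not executed for this report) and the crictl line is copied verbatim from the kubeadm hint:

	out/minikube-linux-amd64 ssh -p old-k8s-version-516975
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause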
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-516975 -n old-k8s-version-516975
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-516975 -n old-k8s-version-516975: exit status 2 (238.646288ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-516975" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (724.85s)
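The failure output above ends with minikube's own suggestion: check 'journalctl -xeu kubelet' and retry the start with an explicit kubelet cgroup driver. A sketch of that retry for this profile is shown below; the profile name, driver, container runtime, Kubernetes version and the --extra-config value are all taken from the log above, but the invocation is illustrative and was not run as part of this report:

	out/minikube-linux-amd64 start -p old-k8s-version-516975 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd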

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1030 19:50:18.708870  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:50:21.856145  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/enable-default-cni-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:50:43.959005  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/flannel-534248/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-768989 -n default-k8s-diff-port-768989
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-10-30 19:59:07.485399895 +0000 UTC m=+5904.232583432
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
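The wait that timed out above targets pods labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace, as logged at the start of this test. A manual equivalent of that check, assuming a kubeconfig context named after the profile (not verified here), would be roughly:

	kubectl --context default-k8s-diff-port-768989 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context default-k8s-diff-port-768989 -n kubernetes-dashboard describe pods -l k8s-app=kubernetes-dashboard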
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-768989 -n default-k8s-diff-port-768989
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-768989 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-768989 logs -n 25: (2.062760661s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-534248 sudo cat                              | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-534248 sudo                                  | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-534248 sudo                                  | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-534248 sudo                                  | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-534248 sudo find                             | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-534248 sudo crio                             | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-534248                                       | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	| delete  | -p                                                     | disable-driver-mounts-113740 | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | disable-driver-mounts-113740                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-768989 | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:37 UTC |
	|         | default-k8s-diff-port-768989                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-960512             | no-preload-960512            | jenkins | v1.34.0 | 30 Oct 24 19:37 UTC | 30 Oct 24 19:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-960512                                   | no-preload-960512            | jenkins | v1.34.0 | 30 Oct 24 19:37 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-768989  | default-k8s-diff-port-768989 | jenkins | v1.34.0 | 30 Oct 24 19:38 UTC | 30 Oct 24 19:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-768989 | jenkins | v1.34.0 | 30 Oct 24 19:38 UTC |                     |
	|         | default-k8s-diff-port-768989                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-042402            | embed-certs-042402           | jenkins | v1.34.0 | 30 Oct 24 19:38 UTC | 30 Oct 24 19:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-042402                                  | embed-certs-042402           | jenkins | v1.34.0 | 30 Oct 24 19:38 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-516975        | old-k8s-version-516975       | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-960512                  | no-preload-960512            | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-768989       | default-k8s-diff-port-768989 | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-960512                                   | no-preload-960512            | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC | 30 Oct 24 19:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-042402                 | embed-certs-042402           | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-768989 | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC | 30 Oct 24 19:50 UTC |
	|         | default-k8s-diff-port-768989                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-042402                                  | embed-certs-042402           | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC | 30 Oct 24 19:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-516975                              | old-k8s-version-516975       | jenkins | v1.34.0 | 30 Oct 24 19:42 UTC | 30 Oct 24 19:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-516975             | old-k8s-version-516975       | jenkins | v1.34.0 | 30 Oct 24 19:42 UTC | 30 Oct 24 19:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-516975                              | old-k8s-version-516975       | jenkins | v1.34.0 | 30 Oct 24 19:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/30 19:42:11
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1030 19:42:11.799298  447486 out.go:345] Setting OutFile to fd 1 ...
	I1030 19:42:11.799434  447486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 19:42:11.799444  447486 out.go:358] Setting ErrFile to fd 2...
	I1030 19:42:11.799448  447486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 19:42:11.799628  447486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19883-381834/.minikube/bin
	I1030 19:42:11.800193  447486 out.go:352] Setting JSON to false
	I1030 19:42:11.801205  447486 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":12275,"bootTime":1730305057,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1030 19:42:11.801318  447486 start.go:139] virtualization: kvm guest
	I1030 19:42:11.803677  447486 out.go:177] * [old-k8s-version-516975] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1030 19:42:11.805274  447486 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 19:42:11.805300  447486 notify.go:220] Checking for updates...
	I1030 19:42:11.808043  447486 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 19:42:11.809440  447486 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 19:42:11.810604  447486 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 19:42:11.811774  447486 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1030 19:42:11.812958  447486 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 19:42:11.814552  447486 config.go:182] Loaded profile config "old-k8s-version-516975": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1030 19:42:11.814994  447486 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:42:11.815077  447486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:42:11.830315  447486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42249
	I1030 19:42:11.830795  447486 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:42:11.831345  447486 main.go:141] libmachine: Using API Version  1
	I1030 19:42:11.831365  447486 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:42:11.831692  447486 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:42:11.831869  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:42:11.833718  447486 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1030 19:42:11.835019  447486 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 19:42:11.835371  447486 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:42:11.835416  447486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:42:11.850097  447486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33199
	I1030 19:42:11.850532  447486 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:42:11.850964  447486 main.go:141] libmachine: Using API Version  1
	I1030 19:42:11.850978  447486 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:42:11.851321  447486 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:42:11.851541  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:42:11.886920  447486 out.go:177] * Using the kvm2 driver based on existing profile
	I1030 19:42:11.888376  447486 start.go:297] selected driver: kvm2
	I1030 19:42:11.888392  447486 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-516975 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-516975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:42:11.888538  447486 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 19:42:11.889472  447486 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 19:42:11.889560  447486 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19883-381834/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1030 19:42:11.904007  447486 install.go:137] /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1030 19:42:11.904405  447486 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 19:42:11.904443  447486 cni.go:84] Creating CNI manager for ""
	I1030 19:42:11.904494  447486 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:42:11.904549  447486 start.go:340] cluster config:
	{Name:old-k8s-version-516975 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-516975 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:42:11.904661  447486 iso.go:125] acquiring lock: {Name:mk4a5115b605a7a5e9069193daa664d721189792 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 19:42:11.907302  447486 out.go:177] * Starting "old-k8s-version-516975" primary control-plane node in "old-k8s-version-516975" cluster
	I1030 19:42:10.622770  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:11.908430  447486 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1030 19:42:11.908474  447486 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1030 19:42:11.908485  447486 cache.go:56] Caching tarball of preloaded images
	I1030 19:42:11.908564  447486 preload.go:172] Found /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1030 19:42:11.908575  447486 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1030 19:42:11.908666  447486 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/config.json ...
	I1030 19:42:11.908832  447486 start.go:360] acquireMachinesLock for old-k8s-version-516975: {Name:mk35e25f53fa8cfadb39ca0ecdccfc2b3fbe845b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 19:42:16.702732  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:19.774825  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:25.854777  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:28.926846  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:35.006934  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:38.078752  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:44.158848  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:47.230843  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:53.310763  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:56.382772  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:02.462818  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:05.534754  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:11.614801  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:14.686762  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:20.766767  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:23.838853  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:29.918782  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:32.990752  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:39.070771  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:42.142716  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:48.222814  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:51.294775  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:57.374780  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:00.446825  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:06.526810  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:09.598813  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:15.678770  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:18.750751  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:24.830814  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:27.902810  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:33.982759  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:37.054791  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:43.134706  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:46.206802  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:52.286830  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:55.358809  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:45:01.438753  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:45:04.510854  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:45:07.515699  446887 start.go:364] duration metric: took 4m29.000646378s to acquireMachinesLock for "default-k8s-diff-port-768989"
	I1030 19:45:07.515764  446887 start.go:96] Skipping create...Using existing machine configuration
	I1030 19:45:07.515773  446887 fix.go:54] fixHost starting: 
	I1030 19:45:07.516191  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:07.516238  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:07.532374  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35177
	I1030 19:45:07.532907  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:07.533433  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:07.533459  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:07.533790  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:07.534016  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:07.534220  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetState
	I1030 19:45:07.535802  446887 fix.go:112] recreateIfNeeded on default-k8s-diff-port-768989: state=Stopped err=<nil>
	I1030 19:45:07.535842  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	W1030 19:45:07.536016  446887 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 19:45:07.537809  446887 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-768989" ...
	I1030 19:45:07.539184  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Start
	I1030 19:45:07.539361  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Ensuring networks are active...
	I1030 19:45:07.540025  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Ensuring network default is active
	I1030 19:45:07.540408  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Ensuring network mk-default-k8s-diff-port-768989 is active
	I1030 19:45:07.540867  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Getting domain xml...
	I1030 19:45:07.541489  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Creating domain...
	I1030 19:45:07.512810  446736 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 19:45:07.512848  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetMachineName
	I1030 19:45:07.513191  446736 buildroot.go:166] provisioning hostname "no-preload-960512"
	I1030 19:45:07.513223  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetMachineName
	I1030 19:45:07.513458  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:45:07.515538  446736 machine.go:96] duration metric: took 4m37.420773403s to provisionDockerMachine
	I1030 19:45:07.515594  446736 fix.go:56] duration metric: took 4m37.443968478s for fixHost
	I1030 19:45:07.515600  446736 start.go:83] releasing machines lock for "no-preload-960512", held for 4m37.443992524s
	W1030 19:45:07.515625  446736 start.go:714] error starting host: provision: host is not running
	W1030 19:45:07.515753  446736 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1030 19:45:07.515763  446736 start.go:729] Will try again in 5 seconds ...
	I1030 19:45:08.756310  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting to get IP...
	I1030 19:45:08.757242  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:08.757624  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:08.757747  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:08.757629  448092 retry.go:31] will retry after 202.103853ms: waiting for machine to come up
	I1030 19:45:08.961147  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:08.961660  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:08.961685  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:08.961606  448092 retry.go:31] will retry after 243.456761ms: waiting for machine to come up
	I1030 19:45:09.207134  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:09.207539  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:09.207582  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:09.207493  448092 retry.go:31] will retry after 375.017051ms: waiting for machine to come up
	I1030 19:45:09.584058  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:09.584428  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:09.584462  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:09.584373  448092 retry.go:31] will retry after 552.476692ms: waiting for machine to come up
	I1030 19:45:10.137989  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:10.138421  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:10.138449  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:10.138358  448092 retry.go:31] will retry after 560.865483ms: waiting for machine to come up
	I1030 19:45:10.700603  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:10.700968  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:10.700996  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:10.700920  448092 retry.go:31] will retry after 680.400693ms: waiting for machine to come up
	I1030 19:45:11.382861  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:11.383336  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:11.383362  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:11.383274  448092 retry.go:31] will retry after 787.136113ms: waiting for machine to come up
	I1030 19:45:12.171550  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:12.171910  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:12.171938  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:12.171853  448092 retry.go:31] will retry after 1.176474969s: waiting for machine to come up
	I1030 19:45:13.349617  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:13.350080  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:13.350114  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:13.350042  448092 retry.go:31] will retry after 1.211573437s: waiting for machine to come up
	I1030 19:45:12.517265  446736 start.go:360] acquireMachinesLock for no-preload-960512: {Name:mk35e25f53fa8cfadb39ca0ecdccfc2b3fbe845b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 19:45:14.563397  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:14.563782  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:14.563805  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:14.563749  448092 retry.go:31] will retry after 1.625938777s: waiting for machine to come up
	I1030 19:45:16.191798  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:16.192226  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:16.192255  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:16.192188  448092 retry.go:31] will retry after 2.442949682s: waiting for machine to come up
	I1030 19:45:18.636342  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:18.636768  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:18.636812  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:18.636748  448092 retry.go:31] will retry after 2.48415211s: waiting for machine to come up
	I1030 19:45:21.124407  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:21.124892  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:21.124919  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:21.124843  448092 retry.go:31] will retry after 3.392637796s: waiting for machine to come up
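The repeated "will retry after ...: waiting for machine to come up" lines are a polling loop that looks up the domain's DHCP lease and sleeps a growing interval between attempts. A minimal sketch of that pattern, assuming a placeholder lookup function rather than minikube's actual retry.go/libvirt code:

	package sketch

	import (
		"fmt"
		"log"
		"time"
	)

	// waitForIP polls lookup until it returns an address or the timeout expires,
	// widening the sleep between attempts roughly like the delays in the log.
	func waitForIP(lookup func() (string, bool), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		backoff := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, ok := lookup(); ok {
				return ip, nil
			}
			log.Printf("will retry after %v: waiting for machine to come up", backoff)
			time.Sleep(backoff)
			backoff = time.Duration(float64(backoff) * 1.5) // grow the wait each round
		}
		return "", fmt.Errorf("timed out waiting for machine to come up after %v", timeout)
	}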
	I1030 19:45:25.815539  446965 start.go:364] duration metric: took 4m42.694254153s to acquireMachinesLock for "embed-certs-042402"
	I1030 19:45:25.815623  446965 start.go:96] Skipping create...Using existing machine configuration
	I1030 19:45:25.815635  446965 fix.go:54] fixHost starting: 
	I1030 19:45:25.816068  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:25.816232  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:25.833218  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37241
	I1030 19:45:25.833610  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:25.834159  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:45:25.834191  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:25.834567  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:25.834777  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:45:25.834920  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetState
	I1030 19:45:25.836507  446965 fix.go:112] recreateIfNeeded on embed-certs-042402: state=Stopped err=<nil>
	I1030 19:45:25.836532  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	W1030 19:45:25.836711  446965 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 19:45:25.839078  446965 out.go:177] * Restarting existing kvm2 VM for "embed-certs-042402" ...
	I1030 19:45:24.519725  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.520072  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Found IP for machine: 192.168.39.92
	I1030 19:45:24.520091  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Reserving static IP address...
	I1030 19:45:24.520113  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has current primary IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.520507  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-768989", mac: "52:54:00:98:b1:55", ip: "192.168.39.92"} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:24.520521  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Reserved static IP address: 192.168.39.92
	I1030 19:45:24.520535  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | skip adding static IP to network mk-default-k8s-diff-port-768989 - found existing host DHCP lease matching {name: "default-k8s-diff-port-768989", mac: "52:54:00:98:b1:55", ip: "192.168.39.92"}
	I1030 19:45:24.520545  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for SSH to be available...
	I1030 19:45:24.520560  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | Getting to WaitForSSH function...
	I1030 19:45:24.522776  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.523095  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:24.523127  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.523209  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | Using SSH client type: external
	I1030 19:45:24.523229  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa (-rw-------)
	I1030 19:45:24.523262  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.92 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 19:45:24.523283  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | About to run SSH command:
	I1030 19:45:24.523298  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | exit 0
	I1030 19:45:24.646297  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | SSH cmd err, output: <nil>: 
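The "Using SSH client type: external" block above shells out to /usr/bin/ssh with the flags printed in the log and runs "exit 0" to confirm the guest is reachable with the machine's key. A hedged sketch of that probe using os/exec; the function name is made up, and only flags shown in the log are reproduced:

	package sketch

	import "os/exec"

	// probeSSH runs a no-op command over ssh, the same "exit 0" check the
	// WaitForSSH log lines describe, using the key and options from the log.
	func probeSSH(ip, keyPath string) error {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "PasswordAuthentication=no",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			"docker@" + ip,
			"exit 0", // success means sshd is up and the key is accepted
		}
		return exec.Command("/usr/bin/ssh", args...).Run()
	}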
	I1030 19:45:24.646826  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetConfigRaw
	I1030 19:45:24.647589  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetIP
	I1030 19:45:24.650093  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.650532  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:24.650564  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.650790  446887 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/config.json ...
	I1030 19:45:24.650984  446887 machine.go:93] provisionDockerMachine start ...
	I1030 19:45:24.651005  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:24.651232  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:24.653396  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.653751  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:24.653781  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.653889  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:24.654084  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:24.654263  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:24.654449  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:24.654677  446887 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:24.654922  446887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1030 19:45:24.654935  446887 main.go:141] libmachine: About to run SSH command:
	hostname
	I1030 19:45:24.762586  446887 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1030 19:45:24.762621  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetMachineName
	I1030 19:45:24.762898  446887 buildroot.go:166] provisioning hostname "default-k8s-diff-port-768989"
	I1030 19:45:24.762936  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetMachineName
	I1030 19:45:24.763250  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:24.765937  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.766265  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:24.766289  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.766398  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:24.766599  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:24.766762  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:24.766920  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:24.767087  446887 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:24.767257  446887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1030 19:45:24.767269  446887 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-768989 && echo "default-k8s-diff-port-768989" | sudo tee /etc/hostname
	I1030 19:45:24.888742  446887 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-768989
	
	I1030 19:45:24.888771  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:24.891326  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.891638  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:24.891691  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.891804  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:24.892018  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:24.892154  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:24.892281  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:24.892498  446887 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:24.892692  446887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1030 19:45:24.892716  446887 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-768989' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-768989/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-768989' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 19:45:25.012173  446887 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 19:45:25.012214  446887 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 19:45:25.012240  446887 buildroot.go:174] setting up certificates
	I1030 19:45:25.012250  446887 provision.go:84] configureAuth start
	I1030 19:45:25.012280  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetMachineName
	I1030 19:45:25.012598  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetIP
	I1030 19:45:25.015106  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.015430  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.015458  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.015629  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:25.017810  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.018099  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.018136  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.018230  446887 provision.go:143] copyHostCerts
	I1030 19:45:25.018322  446887 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem, removing ...
	I1030 19:45:25.018334  446887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 19:45:25.018401  446887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 19:45:25.018553  446887 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem, removing ...
	I1030 19:45:25.018566  446887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 19:45:25.018634  446887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 19:45:25.018716  446887 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem, removing ...
	I1030 19:45:25.018724  446887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 19:45:25.018748  446887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 19:45:25.018798  446887 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-768989 san=[127.0.0.1 192.168.39.92 default-k8s-diff-port-768989 localhost minikube]
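The "generating server cert ... san=[...]" step issues a server certificate signed by the local CA with exactly those subject alternative names. A compact crypto/x509 sketch of the idea, assuming RSA keys and the 26280h CertExpiration from the profile config; this is illustrative, not minikube's provision code:

	package sketch

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// newServerCert signs a server certificate for the machine with the SANs
	// listed in the log: 127.0.0.1, the VM IP, the machine name, localhost, minikube.
	func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, name string, ip net.IP) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{CommonName: name},
			DNSNames:     []string{name, "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), ip},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		return der, key, err
	}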
	I1030 19:45:25.188186  446887 provision.go:177] copyRemoteCerts
	I1030 19:45:25.188246  446887 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 19:45:25.188285  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:25.190995  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.191320  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.191344  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.191525  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:25.191718  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.191875  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:25.191991  446887 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa Username:docker}
	I1030 19:45:25.277273  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1030 19:45:25.300302  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1030 19:45:25.322919  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 19:45:25.347214  446887 provision.go:87] duration metric: took 334.947897ms to configureAuth
	I1030 19:45:25.347246  446887 buildroot.go:189] setting minikube options for container-runtime
	I1030 19:45:25.347432  446887 config.go:182] Loaded profile config "default-k8s-diff-port-768989": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:45:25.347510  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:25.349988  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.350294  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.350324  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.350500  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:25.350704  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.350836  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.351015  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:25.351210  446887 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:25.351421  446887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1030 19:45:25.351436  446887 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 19:45:25.576481  446887 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 19:45:25.576509  446887 machine.go:96] duration metric: took 925.509257ms to provisionDockerMachine
	I1030 19:45:25.576525  446887 start.go:293] postStartSetup for "default-k8s-diff-port-768989" (driver="kvm2")
	I1030 19:45:25.576562  446887 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 19:45:25.576589  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:25.576923  446887 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 19:45:25.576951  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:25.579498  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.579825  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.579841  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.579980  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:25.580151  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.580320  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:25.580453  446887 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa Username:docker}
	I1030 19:45:25.665032  446887 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 19:45:25.669402  446887 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 19:45:25.669430  446887 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 19:45:25.669500  446887 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 19:45:25.669573  446887 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 19:45:25.669665  446887 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 19:45:25.679070  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:45:25.703131  446887 start.go:296] duration metric: took 126.586543ms for postStartSetup
	I1030 19:45:25.703194  446887 fix.go:56] duration metric: took 18.187420989s for fixHost
	I1030 19:45:25.703217  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:25.705911  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.706365  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.706396  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.706609  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:25.706800  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.706944  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.707052  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:25.707188  446887 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:25.707428  446887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1030 19:45:25.707443  446887 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 19:45:25.815370  446887 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730317525.786848764
	
	I1030 19:45:25.815406  446887 fix.go:216] guest clock: 1730317525.786848764
	I1030 19:45:25.815414  446887 fix.go:229] Guest: 2024-10-30 19:45:25.786848764 +0000 UTC Remote: 2024-10-30 19:45:25.703198163 +0000 UTC m=+287.327380555 (delta=83.650601ms)
	I1030 19:45:25.815439  446887 fix.go:200] guest clock delta is within tolerance: 83.650601ms
	I1030 19:45:25.815445  446887 start.go:83] releasing machines lock for "default-k8s-diff-port-768989", held for 18.299702226s
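The guest clock check a few lines up runs "date +%s.%N" inside the VM and compares the result with the host's clock; the 83.650601ms delta is inside tolerance, so no time resync is forced. A small sketch of that comparison, with the SSH runner abstracted away:

	package sketch

	import (
		"strconv"
		"strings"
		"time"
	)

	// clockDelta parses the output of `date +%s.%N` from the guest and returns
	// the absolute difference from the local clock, plus whether it is within
	// the given tolerance (the log accepts a delta of roughly 83ms).
	func clockDelta(dateOutput string, tolerance time.Duration) (time.Duration, bool, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(dateOutput), 64)
		if err != nil {
			return 0, false, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance, nil
	}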
	I1030 19:45:25.815467  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:25.815737  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetIP
	I1030 19:45:25.818508  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.818851  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.818889  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.818987  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:25.819477  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:25.819671  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:25.819808  446887 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 19:45:25.819862  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:25.819900  446887 ssh_runner.go:195] Run: cat /version.json
	I1030 19:45:25.819930  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:25.822372  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.822725  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.822754  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.822774  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.822887  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:25.823109  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.823145  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.823168  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.823330  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:25.823429  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:25.823506  446887 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa Username:docker}
	I1030 19:45:25.823605  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.823758  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:25.823880  446887 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa Username:docker}
	I1030 19:45:25.903488  446887 ssh_runner.go:195] Run: systemctl --version
	I1030 19:45:25.931046  446887 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 19:45:26.077178  446887 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 19:45:26.084282  446887 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 19:45:26.084358  446887 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 19:45:26.100869  446887 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 19:45:26.100893  446887 start.go:495] detecting cgroup driver to use...
	I1030 19:45:26.100984  446887 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 19:45:26.117006  446887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 19:45:26.130102  446887 docker.go:217] disabling cri-docker service (if available) ...
	I1030 19:45:26.130184  446887 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 19:45:26.148540  446887 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 19:45:26.163003  446887 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 19:45:26.286433  446887 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 19:45:26.444862  446887 docker.go:233] disabling docker service ...
	I1030 19:45:26.444931  446887 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 19:45:26.460606  446887 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 19:45:26.477159  446887 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 19:45:26.600212  446887 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 19:45:26.725587  446887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 19:45:26.741934  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 19:45:26.761815  446887 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1030 19:45:26.761872  446887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:26.772368  446887 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 19:45:26.772422  446887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:26.784279  446887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:26.795403  446887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:26.806323  446887 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 19:45:26.821929  446887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:26.836574  446887 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:26.857305  446887 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:26.868135  446887 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 19:45:26.878058  446887 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 19:45:26.878138  446887 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 19:45:26.891979  446887 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 19:45:26.902181  446887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:45:27.021858  446887 ssh_runner.go:195] Run: sudo systemctl restart crio
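The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, pod-scoped conmon cgroup, the unprivileged-port sysctl) before crio is restarted. Roughly what the drop-in contains afterwards, captured here as a Go string constant; the values are the ones substituted in the log, while the section layout is illustrative and may differ on the VM:

	package sketch

	// crioDropIn approximates /etc/crio/crio.conf.d/02-crio.conf after the
	// sed edits above.
	const crioDropIn = `
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	`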
	I1030 19:45:27.118890  446887 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 19:45:27.118985  446887 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 19:45:27.125407  446887 start.go:563] Will wait 60s for crictl version
	I1030 19:45:27.125472  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:45:27.129507  446887 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 19:45:27.176630  446887 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1030 19:45:27.176739  446887 ssh_runner.go:195] Run: crio --version
	I1030 19:45:27.205818  446887 ssh_runner.go:195] Run: crio --version
	I1030 19:45:27.236431  446887 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1030 19:45:25.840689  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Start
	I1030 19:45:25.840860  446965 main.go:141] libmachine: (embed-certs-042402) Ensuring networks are active...
	I1030 19:45:25.841604  446965 main.go:141] libmachine: (embed-certs-042402) Ensuring network default is active
	I1030 19:45:25.841928  446965 main.go:141] libmachine: (embed-certs-042402) Ensuring network mk-embed-certs-042402 is active
	I1030 19:45:25.842443  446965 main.go:141] libmachine: (embed-certs-042402) Getting domain xml...
	I1030 19:45:25.843267  446965 main.go:141] libmachine: (embed-certs-042402) Creating domain...
	I1030 19:45:27.094878  446965 main.go:141] libmachine: (embed-certs-042402) Waiting to get IP...
	I1030 19:45:27.095705  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:27.096101  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:27.096166  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:27.096079  448226 retry.go:31] will retry after 190.217394ms: waiting for machine to come up
	I1030 19:45:27.287473  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:27.287940  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:27.287966  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:27.287899  448226 retry.go:31] will retry after 365.943545ms: waiting for machine to come up
	I1030 19:45:27.655952  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:27.656374  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:27.656425  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:27.656343  448226 retry.go:31] will retry after 345.369581ms: waiting for machine to come up
	I1030 19:45:28.003856  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:28.004367  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:28.004398  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:28.004319  448226 retry.go:31] will retry after 609.6218ms: waiting for machine to come up
	I1030 19:45:27.237629  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetIP
	I1030 19:45:27.240387  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:27.240733  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:27.240779  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:27.240995  446887 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1030 19:45:27.245263  446887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:45:27.261305  446887 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-768989 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:default-k8s-diff-port-768989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1030 19:45:27.261440  446887 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 19:45:27.261489  446887 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:45:27.301593  446887 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1030 19:45:27.301650  446887 ssh_runner.go:195] Run: which lz4
	I1030 19:45:27.305829  446887 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1030 19:45:27.310384  446887 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1030 19:45:27.310413  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1030 19:45:28.615219  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:28.615769  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:28.615795  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:28.615716  448226 retry.go:31] will retry after 672.090411ms: waiting for machine to come up
	I1030 19:45:29.289646  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:29.290179  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:29.290216  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:29.290105  448226 retry.go:31] will retry after 865.239242ms: waiting for machine to come up
	I1030 19:45:30.157223  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:30.157650  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:30.157679  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:30.157616  448226 retry.go:31] will retry after 833.557181ms: waiting for machine to come up
	I1030 19:45:30.993139  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:30.993663  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:30.993720  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:30.993625  448226 retry.go:31] will retry after 989.333841ms: waiting for machine to come up
	I1030 19:45:31.983978  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:31.984498  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:31.984546  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:31.984443  448226 retry.go:31] will retry after 1.534311856s: waiting for machine to come up
	I1030 19:45:28.730765  446887 crio.go:462] duration metric: took 1.424975563s to copy over tarball
	I1030 19:45:28.730868  446887 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1030 19:45:30.907494  446887 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.1765829s)
	I1030 19:45:30.907536  446887 crio.go:469] duration metric: took 2.176738354s to extract the tarball
	I1030 19:45:30.907546  446887 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1030 19:45:30.944242  446887 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:45:30.986812  446887 crio.go:514] all images are preloaded for cri-o runtime.
	I1030 19:45:30.986839  446887 cache_images.go:84] Images are preloaded, skipping loading
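The preload flow above boils down to: list images via crictl, and if the expected kube-apiserver image for the target version is missing, copy the preloaded tarball over and unpack it into /var with lz4, then check again. A hedged sketch of that decision, with the remote runner abstracted as placeholder functions:

	package sketch

	// ensurePreload mirrors the check in the log: if the target kube-apiserver
	// image is already present, skip loading; otherwise copy and extract the
	// preload tarball (tar --xattrs -I lz4 -C /var -xf /preloaded.tar.lz4).
	func ensurePreload(listImages func() ([]string, error), copyAndExtract func() error, k8sVersion string) error {
		imgs, err := listImages() // e.g. parsed from `sudo crictl images --output json`
		if err != nil {
			return err
		}
		want := "registry.k8s.io/kube-apiserver:" + k8sVersion
		for _, img := range imgs {
			if img == want {
				return nil // all images are preloaded for the cri-o runtime
			}
		}
		return copyAndExtract()
	}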
	I1030 19:45:30.986872  446887 kubeadm.go:934] updating node { 192.168.39.92 8444 v1.31.2 crio true true} ...
	I1030 19:45:30.987042  446887 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-768989 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.92
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-768989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1030 19:45:30.987145  446887 ssh_runner.go:195] Run: crio config
	I1030 19:45:31.037466  446887 cni.go:84] Creating CNI manager for ""
	I1030 19:45:31.037496  446887 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:45:31.037511  446887 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1030 19:45:31.037544  446887 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.92 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-768989 NodeName:default-k8s-diff-port-768989 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.92"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.92 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1030 19:45:31.037735  446887 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.92
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-768989"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.92"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.92"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1030 19:45:31.037815  446887 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1030 19:45:31.047808  446887 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 19:45:31.047885  446887 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1030 19:45:31.057074  446887 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1030 19:45:31.073022  446887 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 19:45:31.088919  446887 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I1030 19:45:31.105357  446887 ssh_runner.go:195] Run: grep 192.168.39.92	control-plane.minikube.internal$ /etc/hosts
	I1030 19:45:31.109207  446887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.92	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:45:31.121329  446887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:45:31.234078  446887 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:45:31.251028  446887 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989 for IP: 192.168.39.92
	I1030 19:45:31.251057  446887 certs.go:194] generating shared ca certs ...
	I1030 19:45:31.251080  446887 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:45:31.251287  446887 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 19:45:31.251342  446887 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 19:45:31.251354  446887 certs.go:256] generating profile certs ...
	I1030 19:45:31.251480  446887 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/client.key
	I1030 19:45:31.251567  446887 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/apiserver.key.eeeafde8
	I1030 19:45:31.251620  446887 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/proxy-client.key
	I1030 19:45:31.251788  446887 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem (1338 bytes)
	W1030 19:45:31.251834  446887 certs.go:480] ignoring /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144_empty.pem, impossibly tiny 0 bytes
	I1030 19:45:31.251848  446887 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 19:45:31.251888  446887 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 19:45:31.251931  446887 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 19:45:31.251963  446887 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 19:45:31.252024  446887 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:45:31.253127  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 19:45:31.293822  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 19:45:31.334804  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 19:45:31.366955  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 19:45:31.396042  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1030 19:45:31.428748  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I1030 19:45:31.452866  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 19:45:31.476407  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1030 19:45:31.500375  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem --> /usr/share/ca-certificates/389144.pem (1338 bytes)
	I1030 19:45:31.523909  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /usr/share/ca-certificates/3891442.pem (1708 bytes)
	I1030 19:45:31.547532  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 19:45:31.571163  446887 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1030 19:45:31.587969  446887 ssh_runner.go:195] Run: openssl version
	I1030 19:45:31.593866  446887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 19:45:31.604538  446887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:45:31.609348  446887 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:45:31.609419  446887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:45:31.615446  446887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 19:45:31.626640  446887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/389144.pem && ln -fs /usr/share/ca-certificates/389144.pem /etc/ssl/certs/389144.pem"
	I1030 19:45:31.640948  446887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/389144.pem
	I1030 19:45:31.646702  446887 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 19:45:31.646751  446887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/389144.pem
	I1030 19:45:31.654365  446887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/389144.pem /etc/ssl/certs/51391683.0"
	I1030 19:45:31.668538  446887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3891442.pem && ln -fs /usr/share/ca-certificates/3891442.pem /etc/ssl/certs/3891442.pem"
	I1030 19:45:31.679201  446887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3891442.pem
	I1030 19:45:31.683631  446887 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 19:45:31.683693  446887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3891442.pem
	I1030 19:45:31.689362  446887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3891442.pem /etc/ssl/certs/3ec20f2e.0"
	I1030 19:45:31.699804  446887 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 19:45:31.704445  446887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1030 19:45:31.710558  446887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1030 19:45:31.718563  446887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1030 19:45:31.724745  446887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1030 19:45:31.731125  446887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1030 19:45:31.736828  446887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1030 19:45:31.742434  446887 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-768989 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-768989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:45:31.742604  446887 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1030 19:45:31.742654  446887 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:45:31.779319  446887 cri.go:89] found id: ""
	I1030 19:45:31.779416  446887 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1030 19:45:31.789556  446887 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1030 19:45:31.789576  446887 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1030 19:45:31.789622  446887 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1030 19:45:31.799817  446887 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1030 19:45:31.800824  446887 kubeconfig.go:125] found "default-k8s-diff-port-768989" server: "https://192.168.39.92:8444"
	I1030 19:45:31.803207  446887 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1030 19:45:31.812876  446887 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.92
	I1030 19:45:31.812909  446887 kubeadm.go:1160] stopping kube-system containers ...
	I1030 19:45:31.812924  446887 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1030 19:45:31.812984  446887 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:45:31.858070  446887 cri.go:89] found id: ""
	I1030 19:45:31.858174  446887 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1030 19:45:31.874923  446887 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:45:31.885243  446887 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:45:31.885275  446887 kubeadm.go:157] found existing configuration files:
	
	I1030 19:45:31.885321  446887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1030 19:45:31.894394  446887 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:45:31.894453  446887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:45:31.903760  446887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1030 19:45:31.912344  446887 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:45:31.912410  446887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:45:31.921458  446887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1030 19:45:31.930426  446887 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:45:31.930499  446887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:45:31.940008  446887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1030 19:45:31.949578  446887 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:45:31.949645  446887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1030 19:45:31.959022  446887 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 19:45:31.968457  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:32.069017  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:32.985574  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:33.191887  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:33.273266  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:33.400584  446887 api_server.go:52] waiting for apiserver process to appear ...
	I1030 19:45:33.400686  446887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:33.520596  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:33.521020  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:33.521041  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:33.520992  448226 retry.go:31] will retry after 1.787777673s: waiting for machine to come up
	I1030 19:45:35.310399  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:35.310878  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:35.310906  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:35.310833  448226 retry.go:31] will retry after 2.264310439s: waiting for machine to come up
	I1030 19:45:37.577787  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:37.578276  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:37.578310  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:37.578214  448226 retry.go:31] will retry after 2.384410161s: waiting for machine to come up
	I1030 19:45:33.901397  446887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:34.400978  446887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:34.901476  446887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:35.401772  446887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:35.420824  446887 api_server.go:72] duration metric: took 2.020238714s to wait for apiserver process to appear ...
	I1030 19:45:35.420862  446887 api_server.go:88] waiting for apiserver healthz status ...
	I1030 19:45:35.420889  446887 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8444/healthz ...
	I1030 19:45:37.795897  446887 api_server.go:279] https://192.168.39.92:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1030 19:45:37.795931  446887 api_server.go:103] status: https://192.168.39.92:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1030 19:45:37.795948  446887 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8444/healthz ...
	I1030 19:45:37.848032  446887 api_server.go:279] https://192.168.39.92:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1030 19:45:37.848069  446887 api_server.go:103] status: https://192.168.39.92:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1030 19:45:37.921286  446887 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8444/healthz ...
	I1030 19:45:37.930778  446887 api_server.go:279] https://192.168.39.92:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:45:37.930822  446887 api_server.go:103] status: https://192.168.39.92:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:45:38.421866  446887 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8444/healthz ...
	I1030 19:45:38.429247  446887 api_server.go:279] https://192.168.39.92:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:45:38.429291  446887 api_server.go:103] status: https://192.168.39.92:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:45:38.921655  446887 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8444/healthz ...
	I1030 19:45:38.928650  446887 api_server.go:279] https://192.168.39.92:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:45:38.928680  446887 api_server.go:103] status: https://192.168.39.92:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:45:39.421195  446887 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8444/healthz ...
	I1030 19:45:39.425565  446887 api_server.go:279] https://192.168.39.92:8444/healthz returned 200:
	ok
	I1030 19:45:39.433509  446887 api_server.go:141] control plane version: v1.31.2
	I1030 19:45:39.433543  446887 api_server.go:131] duration metric: took 4.01267362s to wait for apiserver health ...
	I1030 19:45:39.433555  446887 cni.go:84] Creating CNI manager for ""
	I1030 19:45:39.433564  446887 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:45:39.435645  446887 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1030 19:45:39.437042  446887 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1030 19:45:39.456091  446887 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1030 19:45:39.477617  446887 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 19:45:39.485998  446887 system_pods.go:59] 8 kube-system pods found
	I1030 19:45:39.486041  446887 system_pods.go:61] "coredns-7c65d6cfc9-9w8m8" [d285a845-e17e-4b87-837b-167dbd29f090] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1030 19:45:39.486051  446887 system_pods.go:61] "etcd-default-k8s-diff-port-768989" [400eedbb-ea1d-47a7-b342-ad29c8851552] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1030 19:45:39.486061  446887 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-768989" [40389878-99e8-4ae1-899b-a61fc3b7100b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1030 19:45:39.486071  446887 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-768989" [4d4ca5b4-d36d-4f69-9712-b7544c4c9a43] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1030 19:45:39.486082  446887 system_pods.go:61] "kube-proxy-tsr5q" [60ad5830-1638-4209-a2b3-ef78b1df3d34] Running
	I1030 19:45:39.486087  446887 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-768989" [af4c921c-9ea2-4bf3-84c2-8748c3c210bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1030 19:45:39.486092  446887 system_pods.go:61] "metrics-server-6867b74b74-t85rd" [8e162c99-2a94-4340-abe9-f1b312980444] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:45:39.486095  446887 system_pods.go:61] "storage-provisioner" [76805df9-1fbf-468d-a909-3b7c4f889e11] Running
	I1030 19:45:39.486101  446887 system_pods.go:74] duration metric: took 8.467537ms to wait for pod list to return data ...
	I1030 19:45:39.486110  446887 node_conditions.go:102] verifying NodePressure condition ...
	I1030 19:45:39.490771  446887 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 19:45:39.490793  446887 node_conditions.go:123] node cpu capacity is 2
	I1030 19:45:39.490805  446887 node_conditions.go:105] duration metric: took 4.690594ms to run NodePressure ...
	I1030 19:45:39.490821  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:39.752369  446887 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1030 19:45:39.757080  446887 kubeadm.go:739] kubelet initialised
	I1030 19:45:39.757105  446887 kubeadm.go:740] duration metric: took 4.707251ms waiting for restarted kubelet to initialise ...
	I1030 19:45:39.757114  446887 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:45:39.762374  446887 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-9w8m8" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:39.766904  446887 pod_ready.go:98] node "default-k8s-diff-port-768989" hosting pod "coredns-7c65d6cfc9-9w8m8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.766934  446887 pod_ready.go:82] duration metric: took 4.529466ms for pod "coredns-7c65d6cfc9-9w8m8" in "kube-system" namespace to be "Ready" ...
	E1030 19:45:39.766948  446887 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-768989" hosting pod "coredns-7c65d6cfc9-9w8m8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.766958  446887 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:39.771681  446887 pod_ready.go:98] node "default-k8s-diff-port-768989" hosting pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.771705  446887 pod_ready.go:82] duration metric: took 4.73772ms for pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	E1030 19:45:39.771715  446887 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-768989" hosting pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.771722  446887 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:39.776170  446887 pod_ready.go:98] node "default-k8s-diff-port-768989" hosting pod "kube-apiserver-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.776199  446887 pod_ready.go:82] duration metric: took 4.470353ms for pod "kube-apiserver-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	E1030 19:45:39.776211  446887 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-768989" hosting pod "kube-apiserver-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.776220  446887 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:39.881949  446887 pod_ready.go:98] node "default-k8s-diff-port-768989" hosting pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.881988  446887 pod_ready.go:82] duration metric: took 105.756203ms for pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	E1030 19:45:39.882027  446887 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-768989" hosting pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.882042  446887 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-tsr5q" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:40.281665  446887 pod_ready.go:98] node "default-k8s-diff-port-768989" hosting pod "kube-proxy-tsr5q" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:40.281703  446887 pod_ready.go:82] duration metric: took 399.651747ms for pod "kube-proxy-tsr5q" in "kube-system" namespace to be "Ready" ...
	E1030 19:45:40.281716  446887 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-768989" hosting pod "kube-proxy-tsr5q" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:40.281725  446887 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:40.680827  446887 pod_ready.go:98] node "default-k8s-diff-port-768989" hosting pod "kube-scheduler-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:40.680861  446887 pod_ready.go:82] duration metric: took 399.128654ms for pod "kube-scheduler-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	E1030 19:45:40.680873  446887 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-768989" hosting pod "kube-scheduler-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:40.680883  446887 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:41.086176  446887 pod_ready.go:98] node "default-k8s-diff-port-768989" hosting pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:41.086203  446887 pod_ready.go:82] duration metric: took 405.311117ms for pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace to be "Ready" ...
	E1030 19:45:41.086216  446887 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-768989" hosting pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:41.086225  446887 pod_ready.go:39] duration metric: took 1.32910228s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:45:41.086246  446887 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1030 19:45:41.100836  446887 ops.go:34] apiserver oom_adj: -16
	I1030 19:45:41.100871  446887 kubeadm.go:597] duration metric: took 9.31128777s to restartPrimaryControlPlane
	I1030 19:45:41.100887  446887 kubeadm.go:394] duration metric: took 9.358460424s to StartCluster
	I1030 19:45:41.100915  446887 settings.go:142] acquiring lock: {Name:mk3778f95b8c1775512fd8244c173f81fde2f8a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:45:41.101046  446887 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 19:45:41.103578  446887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/kubeconfig: {Name:mkad9ac9beb9742fc1a420e6d4412db8b65962e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:45:41.103910  446887 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.92 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 19:45:41.103995  446887 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1030 19:45:41.104111  446887 config.go:182] Loaded profile config "default-k8s-diff-port-768989": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:45:41.104131  446887 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-768989"
	I1030 19:45:41.104151  446887 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-768989"
	W1030 19:45:41.104159  446887 addons.go:243] addon storage-provisioner should already be in state true
	I1030 19:45:41.104175  446887 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-768989"
	I1030 19:45:41.104198  446887 host.go:66] Checking if "default-k8s-diff-port-768989" exists ...
	I1030 19:45:41.104207  446887 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-768989"
	W1030 19:45:41.104218  446887 addons.go:243] addon metrics-server should already be in state true
	I1030 19:45:41.104153  446887 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-768989"
	I1030 19:45:41.104255  446887 host.go:66] Checking if "default-k8s-diff-port-768989" exists ...
	I1030 19:45:41.104258  446887 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-768989"
	I1030 19:45:41.104672  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:41.104683  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:41.104694  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:41.104718  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:41.104728  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:41.104730  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:41.105606  446887 out.go:177] * Verifying Kubernetes components...
	I1030 19:45:41.107136  446887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:45:41.121415  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45903
	I1030 19:45:41.122053  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:41.122694  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:41.122721  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:41.123073  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:41.123682  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:41.123733  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:41.125497  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36933
	I1030 19:45:41.125546  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43659
	I1030 19:45:41.125878  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:41.125962  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:41.126425  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:41.126445  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:41.126465  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:41.126507  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:41.126840  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:41.126897  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:41.127362  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:41.127392  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:41.127590  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetState
	I1030 19:45:41.131397  446887 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-768989"
	W1030 19:45:41.131424  446887 addons.go:243] addon default-storageclass should already be in state true
	I1030 19:45:41.131457  446887 host.go:66] Checking if "default-k8s-diff-port-768989" exists ...
	I1030 19:45:41.131834  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:41.131877  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:41.143183  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34953
	I1030 19:45:41.143221  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35029
	I1030 19:45:41.143628  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:41.143765  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:41.144231  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:41.144249  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:41.144369  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:41.144392  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:41.144657  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:41.144766  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:41.144879  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetState
	I1030 19:45:41.144926  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetState
	I1030 19:45:41.146739  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:41.146913  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:41.148740  446887 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1030 19:45:41.148794  446887 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:45:41.149853  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33425
	I1030 19:45:41.150250  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:41.150397  446887 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1030 19:45:41.150435  446887 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1030 19:45:41.150462  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:41.150525  446887 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 19:45:41.150545  446887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1030 19:45:41.150562  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:41.150763  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:41.150781  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:41.151168  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:41.152135  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:41.152184  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:41.154133  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:41.154425  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:41.154625  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:41.154654  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:41.154811  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:41.154996  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:41.155033  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:41.155059  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:41.155145  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:41.155214  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:41.155310  446887 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa Username:docker}
	I1030 19:45:41.155345  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:41.155464  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:41.155548  446887 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa Username:docker}
	I1030 19:45:41.168971  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43017
	I1030 19:45:41.169445  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:41.169946  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:41.169969  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:41.170335  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:41.170508  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetState
	I1030 19:45:41.172162  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:41.172378  446887 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1030 19:45:41.172394  446887 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1030 19:45:41.172410  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:41.175214  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:41.175617  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:41.175643  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:41.175795  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:41.175978  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:41.176133  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:41.176301  446887 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa Username:docker}
	I1030 19:45:41.324093  446887 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:45:41.381986  446887 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-768989" to be "Ready" ...
	I1030 19:45:41.439497  446887 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1030 19:45:41.439522  446887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1030 19:45:41.448751  446887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1030 19:45:41.486707  446887 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1030 19:45:41.486736  446887 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1030 19:45:41.514478  446887 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1030 19:45:41.514513  446887 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1030 19:45:41.546821  446887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1030 19:45:41.590509  446887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 19:45:41.879189  446887 main.go:141] libmachine: Making call to close driver server
	I1030 19:45:41.879224  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Close
	I1030 19:45:41.879548  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | Closing plugin on server side
	I1030 19:45:41.879597  446887 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:45:41.879608  446887 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:45:41.879622  446887 main.go:141] libmachine: Making call to close driver server
	I1030 19:45:41.879632  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Close
	I1030 19:45:41.879868  446887 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:45:41.879886  446887 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:45:41.889008  446887 main.go:141] libmachine: Making call to close driver server
	I1030 19:45:41.889024  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Close
	I1030 19:45:41.889273  446887 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:45:41.889290  446887 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:45:42.499223  446887 main.go:141] libmachine: Making call to close driver server
	I1030 19:45:42.499250  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Close
	I1030 19:45:42.499597  446887 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:45:42.499621  446887 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:45:42.499632  446887 main.go:141] libmachine: Making call to close driver server
	I1030 19:45:42.499689  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Close
	I1030 19:45:42.499969  446887 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:45:42.499984  446887 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:45:42.499996  446887 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-768989"
	I1030 19:45:42.598713  446887 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.008157275s)
	I1030 19:45:42.598770  446887 main.go:141] libmachine: Making call to close driver server
	I1030 19:45:42.598782  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Close
	I1030 19:45:42.599088  446887 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:45:42.599109  446887 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:45:42.599117  446887 main.go:141] libmachine: Making call to close driver server
	I1030 19:45:42.599143  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | Closing plugin on server side
	I1030 19:45:42.599201  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Close
	I1030 19:45:42.599447  446887 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:45:42.599461  446887 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:45:42.601840  446887 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I1030 19:45:39.963885  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:39.964308  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:39.964346  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:39.964250  448226 retry.go:31] will retry after 4.32150593s: waiting for machine to come up
	I1030 19:45:42.603197  446887 addons.go:510] duration metric: took 1.499214294s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I1030 19:45:43.386074  446887 node_ready.go:53] node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:45.631177  447486 start.go:364] duration metric: took 3m33.722307877s to acquireMachinesLock for "old-k8s-version-516975"
	I1030 19:45:45.631272  447486 start.go:96] Skipping create...Using existing machine configuration
	I1030 19:45:45.631284  447486 fix.go:54] fixHost starting: 
	I1030 19:45:45.631708  447486 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:45.631767  447486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:45.648654  447486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40707
	I1030 19:45:45.649098  447486 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:45.649552  447486 main.go:141] libmachine: Using API Version  1
	I1030 19:45:45.649574  447486 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:45.649848  447486 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:45.650005  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:45:45.650153  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetState
	I1030 19:45:45.651624  447486 fix.go:112] recreateIfNeeded on old-k8s-version-516975: state=Stopped err=<nil>
	I1030 19:45:45.651661  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	W1030 19:45:45.651805  447486 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 19:45:45.654065  447486 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-516975" ...
	I1030 19:45:45.655382  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .Start
	I1030 19:45:45.655554  447486 main.go:141] libmachine: (old-k8s-version-516975) Ensuring networks are active...
	I1030 19:45:45.656134  447486 main.go:141] libmachine: (old-k8s-version-516975) Ensuring network default is active
	I1030 19:45:45.656518  447486 main.go:141] libmachine: (old-k8s-version-516975) Ensuring network mk-old-k8s-version-516975 is active
	I1030 19:45:45.656885  447486 main.go:141] libmachine: (old-k8s-version-516975) Getting domain xml...
	I1030 19:45:45.657501  447486 main.go:141] libmachine: (old-k8s-version-516975) Creating domain...
	I1030 19:45:44.289530  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.289944  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has current primary IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.289965  446965 main.go:141] libmachine: (embed-certs-042402) Found IP for machine: 192.168.61.235
	I1030 19:45:44.289978  446965 main.go:141] libmachine: (embed-certs-042402) Reserving static IP address...
	I1030 19:45:44.290419  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "embed-certs-042402", mac: "52:54:00:61:aa:58", ip: "192.168.61.235"} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.290450  446965 main.go:141] libmachine: (embed-certs-042402) Reserved static IP address: 192.168.61.235
	I1030 19:45:44.290469  446965 main.go:141] libmachine: (embed-certs-042402) DBG | skip adding static IP to network mk-embed-certs-042402 - found existing host DHCP lease matching {name: "embed-certs-042402", mac: "52:54:00:61:aa:58", ip: "192.168.61.235"}
	I1030 19:45:44.290502  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Getting to WaitForSSH function...
	I1030 19:45:44.290519  446965 main.go:141] libmachine: (embed-certs-042402) Waiting for SSH to be available...
	I1030 19:45:44.292418  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.292684  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.292727  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.292750  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Using SSH client type: external
	I1030 19:45:44.292785  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa (-rw-------)
	I1030 19:45:44.292839  446965 main.go:141] libmachine: (embed-certs-042402) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.235 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 19:45:44.292856  446965 main.go:141] libmachine: (embed-certs-042402) DBG | About to run SSH command:
	I1030 19:45:44.292873  446965 main.go:141] libmachine: (embed-certs-042402) DBG | exit 0
	I1030 19:45:44.414810  446965 main.go:141] libmachine: (embed-certs-042402) DBG | SSH cmd err, output: <nil>: 
	I1030 19:45:44.415211  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetConfigRaw
	I1030 19:45:44.416039  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetIP
	I1030 19:45:44.418830  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.419269  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.419303  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.419529  446965 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/config.json ...
	I1030 19:45:44.419832  446965 machine.go:93] provisionDockerMachine start ...
	I1030 19:45:44.419859  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:45:44.420102  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:44.422359  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.422704  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.422729  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.422878  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:44.423072  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:44.423217  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:44.423355  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:44.423493  446965 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:44.423677  446965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.235 22 <nil> <nil>}
	I1030 19:45:44.423685  446965 main.go:141] libmachine: About to run SSH command:
	hostname
	I1030 19:45:44.527214  446965 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1030 19:45:44.527248  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetMachineName
	I1030 19:45:44.527526  446965 buildroot.go:166] provisioning hostname "embed-certs-042402"
	I1030 19:45:44.527562  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetMachineName
	I1030 19:45:44.527793  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:44.530474  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.530830  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.530856  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.531041  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:44.531243  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:44.531432  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:44.531563  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:44.531736  446965 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:44.531958  446965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.235 22 <nil> <nil>}
	I1030 19:45:44.531979  446965 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-042402 && echo "embed-certs-042402" | sudo tee /etc/hostname
	I1030 19:45:44.656963  446965 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-042402
	
	I1030 19:45:44.656996  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:44.659958  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.660361  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.660397  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.660643  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:44.660842  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:44.661025  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:44.661122  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:44.661295  446965 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:44.661469  446965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.235 22 <nil> <nil>}
	I1030 19:45:44.661484  446965 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-042402' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-042402/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-042402' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 19:45:44.771688  446965 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 19:45:44.771728  446965 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 19:45:44.771755  446965 buildroot.go:174] setting up certificates
	I1030 19:45:44.771766  446965 provision.go:84] configureAuth start
	I1030 19:45:44.771780  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetMachineName
	I1030 19:45:44.772120  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetIP
	I1030 19:45:44.774838  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.775271  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.775298  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.775424  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:44.777432  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.777765  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.777793  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.777910  446965 provision.go:143] copyHostCerts
	I1030 19:45:44.777990  446965 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem, removing ...
	I1030 19:45:44.778006  446965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 19:45:44.778057  446965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 19:45:44.778147  446965 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem, removing ...
	I1030 19:45:44.778155  446965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 19:45:44.778174  446965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 19:45:44.778229  446965 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem, removing ...
	I1030 19:45:44.778237  446965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 19:45:44.778253  446965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 19:45:44.778360  446965 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.embed-certs-042402 san=[127.0.0.1 192.168.61.235 embed-certs-042402 localhost minikube]
	I1030 19:45:45.019172  446965 provision.go:177] copyRemoteCerts
	I1030 19:45:45.019234  446965 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 19:45:45.019265  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:45.022052  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.022402  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:45.022435  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.022590  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:45.022788  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.022969  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:45.023123  446965 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa Username:docker}
	I1030 19:45:45.104733  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 19:45:45.128256  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1030 19:45:45.150758  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1030 19:45:45.173233  446965 provision.go:87] duration metric: took 401.450922ms to configureAuth
	I1030 19:45:45.173268  446965 buildroot.go:189] setting minikube options for container-runtime
	I1030 19:45:45.173465  446965 config.go:182] Loaded profile config "embed-certs-042402": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:45:45.173562  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:45.176259  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.176663  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:45.176698  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.176826  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:45.177025  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.177190  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.177364  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:45.177554  446965 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:45.177724  446965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.235 22 <nil> <nil>}
	I1030 19:45:45.177737  446965 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 19:45:45.396562  446965 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 19:45:45.396593  446965 machine.go:96] duration metric: took 976.740759ms to provisionDockerMachine
	I1030 19:45:45.396606  446965 start.go:293] postStartSetup for "embed-certs-042402" (driver="kvm2")
	I1030 19:45:45.396616  446965 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 19:45:45.396644  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:45:45.397007  446965 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 19:45:45.397048  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:45.399581  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.399930  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:45.399955  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.400045  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:45.400219  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.400373  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:45.400483  446965 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa Username:docker}
	I1030 19:45:45.481722  446965 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 19:45:45.487207  446965 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 19:45:45.487231  446965 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 19:45:45.487304  446965 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 19:45:45.487398  446965 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 19:45:45.487516  446965 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 19:45:45.500340  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:45:45.524930  446965 start.go:296] duration metric: took 128.310254ms for postStartSetup
	I1030 19:45:45.524972  446965 fix.go:56] duration metric: took 19.709339085s for fixHost
	I1030 19:45:45.524993  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:45.527426  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.527751  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:45.527775  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.527931  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:45.528145  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.528326  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.528450  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:45.528591  446965 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:45.528804  446965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.235 22 <nil> <nil>}
	I1030 19:45:45.528815  446965 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 19:45:45.630961  446965 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730317545.604586107
	
	I1030 19:45:45.630997  446965 fix.go:216] guest clock: 1730317545.604586107
	I1030 19:45:45.631020  446965 fix.go:229] Guest: 2024-10-30 19:45:45.604586107 +0000 UTC Remote: 2024-10-30 19:45:45.524975841 +0000 UTC m=+302.540999350 (delta=79.610266ms)
	I1030 19:45:45.631054  446965 fix.go:200] guest clock delta is within tolerance: 79.610266ms
	I1030 19:45:45.631062  446965 start.go:83] releasing machines lock for "embed-certs-042402", held for 19.81546348s
	I1030 19:45:45.631109  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:45:45.631396  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetIP
	I1030 19:45:45.634114  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.634524  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:45.634558  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.634739  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:45:45.635353  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:45:45.635532  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:45:45.635646  446965 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 19:45:45.635692  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:45.635746  446965 ssh_runner.go:195] Run: cat /version.json
	I1030 19:45:45.635775  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:45.638260  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.638639  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:45.638694  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.638718  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.638931  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:45.639108  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:45.639128  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.639160  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.639260  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:45.639371  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:45.639440  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.639509  446965 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa Username:docker}
	I1030 19:45:45.639581  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:45.639723  446965 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa Username:docker}
	I1030 19:45:45.747515  446965 ssh_runner.go:195] Run: systemctl --version
	I1030 19:45:45.754851  446965 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 19:45:45.904471  446965 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 19:45:45.911348  446965 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 19:45:45.911428  446965 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 19:45:45.928273  446965 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 19:45:45.928299  446965 start.go:495] detecting cgroup driver to use...
	I1030 19:45:45.928381  446965 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 19:45:45.949100  446965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 19:45:45.963284  446965 docker.go:217] disabling cri-docker service (if available) ...
	I1030 19:45:45.963362  446965 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 19:45:45.976952  446965 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 19:45:45.991367  446965 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 19:45:46.104670  446965 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 19:45:46.254049  446965 docker.go:233] disabling docker service ...
	I1030 19:45:46.254130  446965 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 19:45:46.273226  446965 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 19:45:46.290211  446965 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 19:45:46.491658  446965 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 19:45:46.637447  446965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 19:45:46.654517  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 19:45:46.679786  446965 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1030 19:45:46.679879  446965 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:46.695487  446965 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 19:45:46.695570  446965 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:46.708974  446965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:46.724847  446965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:46.736912  446965 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 19:45:46.749015  446965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:46.761190  446965 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:46.780198  446965 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:46.790865  446965 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 19:45:46.800950  446965 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 19:45:46.801029  446965 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 19:45:46.814792  446965 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 19:45:46.825490  446965 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:45:46.952367  446965 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1030 19:45:47.054874  446965 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 19:45:47.054962  446965 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 19:45:47.061036  446965 start.go:563] Will wait 60s for crictl version
	I1030 19:45:47.061105  446965 ssh_runner.go:195] Run: which crictl
	I1030 19:45:47.064917  446965 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 19:45:47.101690  446965 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1030 19:45:47.101796  446965 ssh_runner.go:195] Run: crio --version
	I1030 19:45:47.131286  446965 ssh_runner.go:195] Run: crio --version
	I1030 19:45:47.166314  446965 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1030 19:45:47.167861  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetIP
	I1030 19:45:47.171097  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:47.171438  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:47.171466  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:47.171737  446965 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1030 19:45:47.177796  446965 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:45:47.191930  446965 kubeadm.go:883] updating cluster {Name:embed-certs-042402 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-042402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.235 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1030 19:45:47.192090  446965 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 19:45:47.192149  446965 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:45:47.231586  446965 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1030 19:45:47.231672  446965 ssh_runner.go:195] Run: which lz4
	I1030 19:45:47.236190  446965 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1030 19:45:47.240803  446965 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1030 19:45:47.240888  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1030 19:45:45.386683  446887 node_ready.go:53] node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:47.386771  446887 node_ready.go:53] node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:48.387313  446887 node_ready.go:49] node "default-k8s-diff-port-768989" has status "Ready":"True"
	I1030 19:45:48.387344  446887 node_ready.go:38] duration metric: took 7.005318984s for node "default-k8s-diff-port-768989" to be "Ready" ...
	I1030 19:45:48.387359  446887 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:45:48.395198  446887 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9w8m8" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:48.401276  446887 pod_ready.go:93] pod "coredns-7c65d6cfc9-9w8m8" in "kube-system" namespace has status "Ready":"True"
	I1030 19:45:48.401306  446887 pod_ready.go:82] duration metric: took 6.071305ms for pod "coredns-7c65d6cfc9-9w8m8" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:48.401321  446887 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:47.003397  447486 main.go:141] libmachine: (old-k8s-version-516975) Waiting to get IP...
	I1030 19:45:47.004281  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:47.004710  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:47.004787  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:47.004695  448432 retry.go:31] will retry after 234.659459ms: waiting for machine to come up
	I1030 19:45:47.241308  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:47.241838  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:47.241863  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:47.241802  448432 retry.go:31] will retry after 350.804975ms: waiting for machine to come up
	I1030 19:45:47.594533  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:47.595106  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:47.595139  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:47.595044  448432 retry.go:31] will retry after 448.637889ms: waiting for machine to come up
	I1030 19:45:48.045858  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:48.046358  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:48.046386  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:48.046315  448432 retry.go:31] will retry after 543.947609ms: waiting for machine to come up
	I1030 19:45:48.592474  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:48.592908  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:48.592937  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:48.592875  448432 retry.go:31] will retry after 744.106735ms: waiting for machine to come up
	I1030 19:45:49.338345  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:49.338833  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:49.338857  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:49.338795  448432 retry.go:31] will retry after 927.743369ms: waiting for machine to come up
	I1030 19:45:50.267844  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:50.268359  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:50.268390  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:50.268324  448432 retry.go:31] will retry after 829.540351ms: waiting for machine to come up
	I1030 19:45:51.099379  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:51.099863  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:51.099893  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:51.099820  448432 retry.go:31] will retry after 898.768304ms: waiting for machine to come up
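The retry.go lines above show libmachine repeatedly probing the DHCP leases for the new VM's IP, with the wait growing from roughly 235 ms toward a second between attempts. The following standard-library sketch reproduces that retry-with-growing-jittered-delay loop; lookupIP is a placeholder, not the real lease lookup.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP stands in for the real DHCP-lease query; it errors until a lease appears.
    func lookupIP() (string, error) {
    	return "", errors.New("no lease yet")
    }

    func waitForIP(timeout time.Duration) (string, error) {
    	delay := 200 * time.Millisecond
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if ip, err := lookupIP(); err == nil {
    			return ip, nil
    		}
    		// Grow the delay and add jitter, roughly matching the 234ms, 350ms, 448ms... progression.
    		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
    		time.Sleep(delay + jitter)
    		delay = delay * 3 / 2
    	}
    	return "", fmt.Errorf("machine did not get an IP within %s", timeout)
    }

    func main() {
    	if ip, err := waitForIP(2 * time.Minute); err != nil {
    		fmt.Println(err)
    	} else {
    		fmt.Println("IP:", ip)
    	}
    }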
	I1030 19:45:48.672337  446965 crio.go:462] duration metric: took 1.436158626s to copy over tarball
	I1030 19:45:48.672439  446965 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1030 19:45:50.859055  446965 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.186572123s)
	I1030 19:45:50.859101  446965 crio.go:469] duration metric: took 2.186725028s to extract the tarball
	I1030 19:45:50.859113  446965 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1030 19:45:50.896570  446965 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:45:50.946526  446965 crio.go:514] all images are preloaded for cri-o runtime.
	I1030 19:45:50.946558  446965 cache_images.go:84] Images are preloaded, skipping loading
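Once copied, the tarball is unpacked under /var with tar -I lz4 (preserving security xattrs), and crictl is asked for its image list so the preload can be verified and image loading skipped. A sketch of the same two commands driven from Go follows; the paths and flags are copied from the log, but in the test they run over SSH rather than locally.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Extract the lz4-compressed preload into /var, keeping security xattrs.
    	untar := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    	if err := untar.Run(); err != nil {
    		fmt.Println("extract failed:", err)
    		return
    	}
    	// Ask the CRI runtime (via crictl) for its image list; the caller compares this
    	// against the expected set before deciding that loading can be skipped.
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		fmt.Println("crictl failed:", err)
    		return
    	}
    	fmt.Printf("crictl returned %d bytes of image metadata\n", len(out))
    }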
	I1030 19:45:50.946567  446965 kubeadm.go:934] updating node { 192.168.61.235 8443 v1.31.2 crio true true} ...
	I1030 19:45:50.946668  446965 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-042402 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.235
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-042402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1030 19:45:50.946748  446965 ssh_runner.go:195] Run: crio config
	I1030 19:45:50.992305  446965 cni.go:84] Creating CNI manager for ""
	I1030 19:45:50.992337  446965 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:45:50.992348  446965 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1030 19:45:50.992374  446965 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.235 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-042402 NodeName:embed-certs-042402 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.235"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.235 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1030 19:45:50.992530  446965 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.235
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-042402"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.235"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.235"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1030 19:45:50.992616  446965 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1030 19:45:51.002586  446965 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 19:45:51.002668  446965 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1030 19:45:51.012058  446965 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1030 19:45:51.028645  446965 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 19:45:51.044912  446965 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
	I1030 19:45:51.060991  446965 ssh_runner.go:195] Run: grep 192.168.61.235	control-plane.minikube.internal$ /etc/hosts
	I1030 19:45:51.064808  446965 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.235	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
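The grep plus the bash one-liner above make sure /etc/hosts maps control-plane.minikube.internal to the node IP exactly once: any stale entry is stripped, the fresh one appended, and the result copied back. The same idempotent edit in Go, as a sketch; the address and alias are taken from the log, error handling is minimal.

    package main

    import (
    	"os"
    	"strings"
    )

    func main() {
    	const entry = "192.168.61.235\tcontrol-plane.minikube.internal"
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		// Drop any stale line for the control-plane alias before re-adding it.
    		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") || line == "" {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, entry)
    	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		panic(err)
    	}
    }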
	I1030 19:45:51.076790  446965 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:45:51.205861  446965 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:45:51.224763  446965 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402 for IP: 192.168.61.235
	I1030 19:45:51.224791  446965 certs.go:194] generating shared ca certs ...
	I1030 19:45:51.224812  446965 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:45:51.224986  446965 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 19:45:51.225046  446965 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 19:45:51.225059  446965 certs.go:256] generating profile certs ...
	I1030 19:45:51.225175  446965 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/client.key
	I1030 19:45:51.225256  446965 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/apiserver.key.f6f7691e
	I1030 19:45:51.225314  446965 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/proxy-client.key
	I1030 19:45:51.225469  446965 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem (1338 bytes)
	W1030 19:45:51.225518  446965 certs.go:480] ignoring /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144_empty.pem, impossibly tiny 0 bytes
	I1030 19:45:51.225540  446965 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 19:45:51.225574  446965 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 19:45:51.225612  446965 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 19:45:51.225651  446965 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 19:45:51.225714  446965 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:45:51.226718  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 19:45:51.278345  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 19:45:51.308707  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 19:45:51.349986  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 19:45:51.382176  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1030 19:45:51.426538  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1030 19:45:51.457131  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 19:45:51.481165  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1030 19:45:51.505285  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /usr/share/ca-certificates/3891442.pem (1708 bytes)
	I1030 19:45:51.533986  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 19:45:51.562660  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem --> /usr/share/ca-certificates/389144.pem (1338 bytes)
	I1030 19:45:51.586002  446965 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1030 19:45:51.602544  446965 ssh_runner.go:195] Run: openssl version
	I1030 19:45:51.608479  446965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 19:45:51.620650  446965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:45:51.625243  446965 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:45:51.625294  446965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:45:51.631138  446965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 19:45:51.643167  446965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/389144.pem && ln -fs /usr/share/ca-certificates/389144.pem /etc/ssl/certs/389144.pem"
	I1030 19:45:51.655128  446965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/389144.pem
	I1030 19:45:51.659528  446965 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 19:45:51.659600  446965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/389144.pem
	I1030 19:45:51.665370  446965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/389144.pem /etc/ssl/certs/51391683.0"
	I1030 19:45:51.676314  446965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3891442.pem && ln -fs /usr/share/ca-certificates/3891442.pem /etc/ssl/certs/3891442.pem"
	I1030 19:45:51.687386  446965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3891442.pem
	I1030 19:45:51.692170  446965 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 19:45:51.692228  446965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3891442.pem
	I1030 19:45:51.697897  446965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3891442.pem /etc/ssl/certs/3ec20f2e.0"
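Each CA above is installed by dropping the PEM into /usr/share/ca-certificates and linking it into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL-based clients locate trust anchors. A sketch of that hash-and-link step, shelling out to openssl just as the log does; the PEM path is one of the files above.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
    	// `openssl x509 -hash -noout` prints the subject hash used for the symlink name.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	// Recreate the link unconditionally; the log's `test -L || ln -fs` is the shell form.
    	_ = os.Remove(link)
    	if err := os.Symlink(pemPath, link); err != nil {
    		panic(err)
    	}
    	fmt.Println("linked", pemPath, "->", link)
    }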
	I1030 19:45:51.709561  446965 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 19:45:51.715357  446965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1030 19:45:51.723291  446965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1030 19:45:51.731362  446965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1030 19:45:51.739724  446965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1030 19:45:51.747383  446965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1030 19:45:51.753472  446965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
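The -checkend 86400 calls above ask whether each control-plane certificate will still be valid 24 hours from now; a cert that fails the check would be regenerated. The same test in pure Go with crypto/x509, as a sketch that reads one certificate passed on the command line:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile(os.Args[1]) // e.g. /var/lib/minikube/certs/apiserver-kubelet-client.crt
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// Equivalent of `openssl x509 -checkend 86400`: still valid one day from now?
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("certificate expires within 24h:", cert.NotAfter)
    		os.Exit(1)
    	}
    	fmt.Println("certificate valid until", cert.NotAfter)
    }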
	I1030 19:45:51.759462  446965 kubeadm.go:392] StartCluster: {Name:embed-certs-042402 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-042402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.235 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:45:51.759605  446965 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1030 19:45:51.759702  446965 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:45:51.806863  446965 cri.go:89] found id: ""
	I1030 19:45:51.806956  446965 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1030 19:45:51.818195  446965 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1030 19:45:51.818218  446965 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1030 19:45:51.818274  446965 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1030 19:45:51.828762  446965 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1030 19:45:51.830149  446965 kubeconfig.go:125] found "embed-certs-042402" server: "https://192.168.61.235:8443"
	I1030 19:45:51.832269  446965 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1030 19:45:51.842769  446965 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.235
	I1030 19:45:51.842808  446965 kubeadm.go:1160] stopping kube-system containers ...
	I1030 19:45:51.842823  446965 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1030 19:45:51.842889  446965 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:45:51.887128  446965 cri.go:89] found id: ""
	I1030 19:45:51.887209  446965 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1030 19:45:51.911918  446965 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:45:51.922685  446965 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:45:51.922714  446965 kubeadm.go:157] found existing configuration files:
	
	I1030 19:45:51.922770  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 19:45:51.935548  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:45:51.935620  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:45:51.948635  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 19:45:51.961647  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:45:51.961745  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:45:51.975880  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 19:45:51.986852  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:45:51.986922  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:45:52.001290  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 19:45:52.015249  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:45:52.015333  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
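The sequence above is the stale-config cleanup: for each kubeconfig (admin, kubelet, controller-manager, scheduler) the runner greps for the expected control-plane endpoint and removes the file when the endpoint is absent (or, as here, when the file does not exist), so kubeadm will regenerate it. A sketch of that grep-then-remove loop; the endpoint and paths are copied from the log.

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	confs := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, path := range confs {
    		data, err := os.ReadFile(path)
    		// A missing file or a file without the endpoint both mean "let kubeadm regenerate it".
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			_ = os.Remove(path)
    			fmt.Println("removed stale config:", path)
    		}
    	}
    }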
	I1030 19:45:52.026657  446965 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 19:45:52.038560  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:52.167697  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:50.408274  446887 pod_ready.go:103] pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace has status "Ready":"False"
	I1030 19:45:51.407818  446887 pod_ready.go:93] pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace has status "Ready":"True"
	I1030 19:45:51.407850  446887 pod_ready.go:82] duration metric: took 3.006520689s for pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:51.407865  446887 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:51.413452  446887 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-768989" in "kube-system" namespace has status "Ready":"True"
	I1030 19:45:51.413481  446887 pod_ready.go:82] duration metric: took 5.607077ms for pod "kube-apiserver-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:51.413495  446887 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:52.000678  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:52.001196  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:52.001235  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:52.001148  448432 retry.go:31] will retry after 1.750749509s: waiting for machine to come up
	I1030 19:45:53.753607  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:53.754013  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:53.754038  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:53.753950  448432 retry.go:31] will retry after 1.537350682s: waiting for machine to come up
	I1030 19:45:55.293910  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:55.294396  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:55.294427  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:55.294336  448432 retry.go:31] will retry after 2.151521323s: waiting for machine to come up
	I1030 19:45:53.477258  446965 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.309509141s)
	I1030 19:45:53.477309  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:53.696850  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:53.768419  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:53.863913  446965 api_server.go:52] waiting for apiserver process to appear ...
	I1030 19:45:53.864018  446965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:54.364235  446965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:54.864820  446965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:54.887333  446965 api_server.go:72] duration metric: took 1.023419155s to wait for apiserver process to appear ...
	I1030 19:45:54.887363  446965 api_server.go:88] waiting for apiserver healthz status ...
	I1030 19:45:54.887399  446965 api_server.go:253] Checking apiserver healthz at https://192.168.61.235:8443/healthz ...
	I1030 19:45:54.887929  446965 api_server.go:269] stopped: https://192.168.61.235:8443/healthz: Get "https://192.168.61.235:8443/healthz": dial tcp 192.168.61.235:8443: connect: connection refused
	I1030 19:45:55.388396  446965 api_server.go:253] Checking apiserver healthz at https://192.168.61.235:8443/healthz ...
	I1030 19:45:57.610916  446965 api_server.go:279] https://192.168.61.235:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1030 19:45:57.610951  446965 api_server.go:103] status: https://192.168.61.235:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1030 19:45:57.610972  446965 api_server.go:253] Checking apiserver healthz at https://192.168.61.235:8443/healthz ...
	I1030 19:45:57.745722  446965 api_server.go:279] https://192.168.61.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:45:57.745782  446965 api_server.go:103] status: https://192.168.61.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:45:57.887887  446965 api_server.go:253] Checking apiserver healthz at https://192.168.61.235:8443/healthz ...
	I1030 19:45:57.895296  446965 api_server.go:279] https://192.168.61.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:45:57.895352  446965 api_server.go:103] status: https://192.168.61.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:45:54.167893  446887 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace has status "Ready":"False"
	I1030 19:45:54.920921  446887 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace has status "Ready":"True"
	I1030 19:45:54.920954  446887 pod_ready.go:82] duration metric: took 3.507449937s for pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:54.920974  446887 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tsr5q" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:54.927123  446887 pod_ready.go:93] pod "kube-proxy-tsr5q" in "kube-system" namespace has status "Ready":"True"
	I1030 19:45:54.927150  446887 pod_ready.go:82] duration metric: took 6.167749ms for pod "kube-proxy-tsr5q" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:54.927164  446887 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:54.932513  446887 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-768989" in "kube-system" namespace has status "Ready":"True"
	I1030 19:45:54.932540  446887 pod_ready.go:82] duration metric: took 5.367579ms for pod "kube-scheduler-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:54.932557  446887 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:56.939174  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
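The pod_ready.go lines above walk the system-critical pods and, for each, wait until its Ready condition is True; at this point in the log metrics-server-6867b74b74-t85rd is still reporting False. A minimal client-go sketch of that per-pod condition check (hypothetical helpers, assuming an existing clientset):

    package podwait

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    // isReady fetches a kube-system pod by name and applies the check.
    func isReady(cs kubernetes.Interface, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	return podReady(pod), nil
    }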
	I1030 19:45:58.388076  446965 api_server.go:253] Checking apiserver healthz at https://192.168.61.235:8443/healthz ...
	I1030 19:45:58.393192  446965 api_server.go:279] https://192.168.61.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:45:58.393235  446965 api_server.go:103] status: https://192.168.61.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:45:58.887710  446965 api_server.go:253] Checking apiserver healthz at https://192.168.61.235:8443/healthz ...
	I1030 19:45:58.891923  446965 api_server.go:279] https://192.168.61.235:8443/healthz returned 200:
	ok
	I1030 19:45:58.897783  446965 api_server.go:141] control plane version: v1.31.2
	I1030 19:45:58.897816  446965 api_server.go:131] duration metric: took 4.010443495s to wait for apiserver health ...
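The /healthz exchange above is the expected restart progression: connection refused while the apiserver process comes up, 403 for the anonymous probe, 500 while post-start hooks (rbac/bootstrap-roles and friends) finish, and finally 200 "ok". A net/http sketch of that polling loop follows; the URL is the one in the log, and certificate verification is skipped only because this is an illustration against a self-signed test endpoint.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// The test apiserver uses a cluster-internal CA, so this sketch skips verification.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	url := "https://192.168.61.235:8443/healthz"
    	for deadline := time.Now().Add(4 * time.Minute); time.Now().Before(deadline); {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("apiserver healthy")
    				return
    			}
    			// 403 and 500 responses are treated as "not ready yet", exactly as in the log.
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for /healthz")
    }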
	I1030 19:45:58.897836  446965 cni.go:84] Creating CNI manager for ""
	I1030 19:45:58.897844  446965 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:45:58.899669  446965 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1030 19:45:57.447894  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:57.448365  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:57.448392  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:57.448320  448432 retry.go:31] will retry after 2.439938206s: waiting for machine to come up
	I1030 19:45:59.889685  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:59.890166  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:59.890205  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:59.890113  448432 retry.go:31] will retry after 3.836080386s: waiting for machine to come up
	I1030 19:45:58.901122  446965 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1030 19:45:58.924765  446965 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1030 19:45:58.946342  446965 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 19:45:58.956378  446965 system_pods.go:59] 8 kube-system pods found
	I1030 19:45:58.956412  446965 system_pods.go:61] "coredns-7c65d6cfc9-tv6kc" [d752975e-e126-4d22-9b35-b9f57d1170b1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1030 19:45:58.956419  446965 system_pods.go:61] "etcd-embed-certs-042402" [fa9b90f6-82b2-448a-ad86-9cbba45a4c2d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1030 19:45:58.956427  446965 system_pods.go:61] "kube-apiserver-embed-certs-042402" [48af3136-74d9-4062-bb9a-e48dafd311a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1030 19:45:58.956436  446965 system_pods.go:61] "kube-controller-manager-embed-certs-042402" [0ae60724-6634-464a-af2f-e08148fb3eaf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1030 19:45:58.956445  446965 system_pods.go:61] "kube-proxy-qwjr9" [309ee447-8d52-49e7-a805-2b7c0af2a3bc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1030 19:45:58.956450  446965 system_pods.go:61] "kube-scheduler-embed-certs-042402" [f82ff11e-8305-4d05-b370-fd89693e5ad1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1030 19:45:58.956454  446965 system_pods.go:61] "metrics-server-6867b74b74-4x9t6" [1160789d-9462-4d1d-9f84-5ded8394bd4e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:45:58.956459  446965 system_pods.go:61] "storage-provisioner" [d1559440-b14a-4c2a-a52e-ba39afb01f94] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1030 19:45:58.956465  446965 system_pods.go:74] duration metric: took 10.103898ms to wait for pod list to return data ...
	I1030 19:45:58.956473  446965 node_conditions.go:102] verifying NodePressure condition ...
	I1030 19:45:58.960150  446965 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 19:45:58.960182  446965 node_conditions.go:123] node cpu capacity is 2
	I1030 19:45:58.960195  446965 node_conditions.go:105] duration metric: took 3.712942ms to run NodePressure ...
	I1030 19:45:58.960219  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:59.284558  446965 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1030 19:45:59.289073  446965 kubeadm.go:739] kubelet initialised
	I1030 19:45:59.289095  446965 kubeadm.go:740] duration metric: took 4.508144ms waiting for restarted kubelet to initialise ...
	I1030 19:45:59.289104  446965 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:45:59.293538  446965 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-tv6kc" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:01.298780  446965 pod_ready.go:103] pod "coredns-7c65d6cfc9-tv6kc" in "kube-system" namespace has status "Ready":"False"
	I1030 19:45:58.940597  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:01.439118  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:05.011617  446736 start.go:364] duration metric: took 52.494265895s to acquireMachinesLock for "no-preload-960512"
	I1030 19:46:05.011674  446736 start.go:96] Skipping create...Using existing machine configuration
	I1030 19:46:05.011683  446736 fix.go:54] fixHost starting: 
	I1030 19:46:05.012022  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:05.012087  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:05.029067  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46027
	I1030 19:46:05.029484  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:05.030010  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:05.030039  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:05.030461  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:05.030690  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:05.030854  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetState
	I1030 19:46:05.032380  446736 fix.go:112] recreateIfNeeded on no-preload-960512: state=Stopped err=<nil>
	I1030 19:46:05.032408  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	W1030 19:46:05.032566  446736 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 19:46:05.035693  446736 out.go:177] * Restarting existing kvm2 VM for "no-preload-960512" ...
	I1030 19:46:03.727617  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.728028  447486 main.go:141] libmachine: (old-k8s-version-516975) Found IP for machine: 192.168.50.250
	I1030 19:46:03.728046  447486 main.go:141] libmachine: (old-k8s-version-516975) Reserving static IP address...
	I1030 19:46:03.728062  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has current primary IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.728565  447486 main.go:141] libmachine: (old-k8s-version-516975) Reserved static IP address: 192.168.50.250
	I1030 19:46:03.728600  447486 main.go:141] libmachine: (old-k8s-version-516975) Waiting for SSH to be available...
	I1030 19:46:03.728616  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "old-k8s-version-516975", mac: "52:54:00:46:32:46", ip: "192.168.50.250"} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:03.728639  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | skip adding static IP to network mk-old-k8s-version-516975 - found existing host DHCP lease matching {name: "old-k8s-version-516975", mac: "52:54:00:46:32:46", ip: "192.168.50.250"}
	I1030 19:46:03.728657  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | Getting to WaitForSSH function...
	I1030 19:46:03.730754  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.731085  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:03.731121  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.731145  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | Using SSH client type: external
	I1030 19:46:03.731212  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa (-rw-------)
	I1030 19:46:03.731252  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.250 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 19:46:03.731275  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | About to run SSH command:
	I1030 19:46:03.731289  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | exit 0
	I1030 19:46:03.862423  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | SSH cmd err, output: <nil>: 
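The WaitForSSH exchange above builds an external ssh invocation with host-key checking disabled and runs `exit 0` against the guest until the command succeeds, which is the signal that sshd inside the VM is reachable. A sketch of that availability probe with os/exec; the options mirror the log, and the host and key path are the test's own values, shown only as an example.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // sshAvailable runs a no-op command over ssh; success means the guest's sshd answers.
    func sshAvailable(host, key string) bool {
    	args := []string{
    		"-o", "ConnectTimeout=10",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "PasswordAuthentication=no",
    		"-i", key, host, "exit 0",
    	}
    	return exec.Command("ssh", args...).Run() == nil
    }

    func main() {
    	host := "docker@192.168.50.250"
    	key := "/home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa"
    	for !sshAvailable(host, key) {
    		// Keep probing until the VM accepts the connection, as WaitForSSH does above.
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("SSH is available")
    }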
	I1030 19:46:03.862832  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetConfigRaw
	I1030 19:46:03.863519  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetIP
	I1030 19:46:03.865977  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.866262  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:03.866297  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.866512  447486 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/config.json ...
	I1030 19:46:03.866755  447486 machine.go:93] provisionDockerMachine start ...
	I1030 19:46:03.866783  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:46:03.866994  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:03.869079  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.869384  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:03.869410  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.869603  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:03.869787  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:03.869949  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:03.870102  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:03.870243  447486 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:03.870468  447486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I1030 19:46:03.870481  447486 main.go:141] libmachine: About to run SSH command:
	hostname
	I1030 19:46:03.982986  447486 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1030 19:46:03.983018  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetMachineName
	I1030 19:46:03.983285  447486 buildroot.go:166] provisioning hostname "old-k8s-version-516975"
	I1030 19:46:03.983319  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetMachineName
	I1030 19:46:03.983502  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:03.986203  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.986576  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:03.986615  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.986765  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:03.986983  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:03.987126  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:03.987258  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:03.987419  447486 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:03.987696  447486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I1030 19:46:03.987719  447486 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-516975 && echo "old-k8s-version-516975" | sudo tee /etc/hostname
	I1030 19:46:04.112692  447486 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-516975
	
	I1030 19:46:04.112719  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:04.115948  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.116283  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.116309  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.116482  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:04.116667  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.116842  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.116966  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:04.117104  447486 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:04.117275  447486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I1030 19:46:04.117290  447486 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-516975' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-516975/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-516975' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 19:46:04.235988  447486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 19:46:04.236032  447486 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 19:46:04.236098  447486 buildroot.go:174] setting up certificates
	I1030 19:46:04.236111  447486 provision.go:84] configureAuth start
	I1030 19:46:04.236124  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetMachineName
	I1030 19:46:04.236500  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetIP
	I1030 19:46:04.239328  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.239707  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.239739  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.240009  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:04.242118  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.242440  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.242505  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.242683  447486 provision.go:143] copyHostCerts
	I1030 19:46:04.242766  447486 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem, removing ...
	I1030 19:46:04.242787  447486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 19:46:04.242847  447486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 19:46:04.242972  447486 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem, removing ...
	I1030 19:46:04.242986  447486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 19:46:04.243011  447486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 19:46:04.243072  447486 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem, removing ...
	I1030 19:46:04.243079  447486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 19:46:04.243095  447486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 19:46:04.243153  447486 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-516975 san=[127.0.0.1 192.168.50.250 localhost minikube old-k8s-version-516975]
	I1030 19:46:04.355003  447486 provision.go:177] copyRemoteCerts
	I1030 19:46:04.355061  447486 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 19:46:04.355092  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:04.357788  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.358153  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.358191  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.358397  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:04.358630  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.358809  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:04.358970  447486 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa Username:docker}
	I1030 19:46:04.446614  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 19:46:04.473708  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1030 19:46:04.497721  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1030 19:46:04.521806  447486 provision.go:87] duration metric: took 285.682041ms to configureAuth
	I1030 19:46:04.521836  447486 buildroot.go:189] setting minikube options for container-runtime
	I1030 19:46:04.521999  447486 config.go:182] Loaded profile config "old-k8s-version-516975": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1030 19:46:04.522072  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:04.524616  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.525034  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.525065  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.525282  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:04.525452  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.525616  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.525745  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:04.525916  447486 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:04.526129  447486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I1030 19:46:04.526145  447486 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 19:46:04.766663  447486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 19:46:04.766697  447486 machine.go:96] duration metric: took 899.924211ms to provisionDockerMachine
	I1030 19:46:04.766709  447486 start.go:293] postStartSetup for "old-k8s-version-516975" (driver="kvm2")
	I1030 19:46:04.766720  447486 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 19:46:04.766745  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:46:04.767081  447486 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 19:46:04.767114  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:04.769995  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.770401  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.770428  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.770580  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:04.770762  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.770973  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:04.771132  447486 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa Username:docker}
	I1030 19:46:04.858006  447486 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 19:46:04.862295  447486 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 19:46:04.862324  447486 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 19:46:04.862387  447486 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 19:46:04.862475  447486 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 19:46:04.862612  447486 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 19:46:04.872541  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:46:04.896306  447486 start.go:296] duration metric: took 129.577956ms for postStartSetup
	I1030 19:46:04.896360  447486 fix.go:56] duration metric: took 19.265077419s for fixHost
	I1030 19:46:04.896383  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:04.899009  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.899397  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.899429  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.899538  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:04.899739  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.899906  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.900101  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:04.900271  447486 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:04.900510  447486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I1030 19:46:04.900525  447486 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 19:46:05.011439  447486 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730317564.967936408
	
	I1030 19:46:05.011464  447486 fix.go:216] guest clock: 1730317564.967936408
	I1030 19:46:05.011472  447486 fix.go:229] Guest: 2024-10-30 19:46:04.967936408 +0000 UTC Remote: 2024-10-30 19:46:04.896364572 +0000 UTC m=+233.135558535 (delta=71.571836ms)
	I1030 19:46:05.011516  447486 fix.go:200] guest clock delta is within tolerance: 71.571836ms
	I1030 19:46:05.011525  447486 start.go:83] releasing machines lock for "old-k8s-version-516975", held for 19.380292064s
	I1030 19:46:05.011552  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:46:05.011853  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetIP
	I1030 19:46:05.014722  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:05.015072  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:05.015100  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:05.015225  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:46:05.015808  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:46:05.016002  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:46:05.016107  447486 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 19:46:05.016155  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:05.016265  447486 ssh_runner.go:195] Run: cat /version.json
	I1030 19:46:05.016296  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:05.018976  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:05.019189  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:05.019326  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:05.019370  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:05.019517  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:05.019604  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:05.019632  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:05.019708  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:05.019830  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:05.019918  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:05.019995  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:05.020077  447486 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa Username:docker}
	I1030 19:46:05.020157  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:05.020295  447486 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa Username:docker}
	I1030 19:46:05.100852  447486 ssh_runner.go:195] Run: systemctl --version
	I1030 19:46:05.127673  447486 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 19:46:05.279889  447486 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 19:46:05.285900  447486 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 19:46:05.285976  447486 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 19:46:05.304763  447486 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 19:46:05.304791  447486 start.go:495] detecting cgroup driver to use...
	I1030 19:46:05.304862  447486 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 19:46:05.325729  447486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 19:46:05.343047  447486 docker.go:217] disabling cri-docker service (if available) ...
	I1030 19:46:05.343128  447486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 19:46:05.358748  447486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 19:46:05.374769  447486 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 19:46:05.492589  447486 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 19:46:05.639943  447486 docker.go:233] disabling docker service ...
	I1030 19:46:05.640039  447486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 19:46:05.655449  447486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 19:46:05.669688  447486 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 19:46:05.814658  447486 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 19:46:05.957944  447486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 19:46:05.972122  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 19:46:05.990577  447486 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1030 19:46:05.990653  447486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:06.000834  447486 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 19:46:06.000907  447486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:06.011678  447486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:06.022051  447486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:06.032515  447486 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 19:46:06.043296  447486 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 19:46:06.053123  447486 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 19:46:06.053170  447486 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 19:46:06.067625  447486 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 19:46:06.081306  447486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:46:06.221181  447486 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1030 19:46:06.321848  447486 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 19:46:06.321926  447486 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 19:46:06.329697  447486 start.go:563] Will wait 60s for crictl version
	I1030 19:46:06.329757  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:06.333980  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 19:46:06.381198  447486 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1030 19:46:06.381290  447486 ssh_runner.go:195] Run: crio --version
	I1030 19:46:06.410365  447486 ssh_runner.go:195] Run: crio --version
	I1030 19:46:06.442329  447486 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1030 19:46:06.443471  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetIP
	I1030 19:46:06.446233  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:06.446621  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:06.446653  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:06.446822  447486 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1030 19:46:06.451216  447486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:46:06.464477  447486 kubeadm.go:883] updating cluster {Name:old-k8s-version-516975 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-516975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1030 19:46:06.464607  447486 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1030 19:46:06.464668  447486 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:46:06.513123  447486 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1030 19:46:06.513205  447486 ssh_runner.go:195] Run: which lz4
	I1030 19:46:06.517252  447486 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1030 19:46:06.521358  447486 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1030 19:46:06.521384  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1030 19:46:03.300213  446965 pod_ready.go:103] pod "coredns-7c65d6cfc9-tv6kc" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:05.301139  446965 pod_ready.go:103] pod "coredns-7c65d6cfc9-tv6kc" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:07.303015  446965 pod_ready.go:103] pod "coredns-7c65d6cfc9-tv6kc" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:03.939240  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:05.940212  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:07.942062  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:05.037179  446736 main.go:141] libmachine: (no-preload-960512) Calling .Start
	I1030 19:46:05.037388  446736 main.go:141] libmachine: (no-preload-960512) Ensuring networks are active...
	I1030 19:46:05.038384  446736 main.go:141] libmachine: (no-preload-960512) Ensuring network default is active
	I1030 19:46:05.038793  446736 main.go:141] libmachine: (no-preload-960512) Ensuring network mk-no-preload-960512 is active
	I1030 19:46:05.039208  446736 main.go:141] libmachine: (no-preload-960512) Getting domain xml...
	I1030 19:46:05.040083  446736 main.go:141] libmachine: (no-preload-960512) Creating domain...
	I1030 19:46:06.366674  446736 main.go:141] libmachine: (no-preload-960512) Waiting to get IP...
	I1030 19:46:06.367568  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:06.368016  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:06.368083  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:06.367984  448568 retry.go:31] will retry after 216.900908ms: waiting for machine to come up
	I1030 19:46:06.586638  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:06.587182  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:06.587213  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:06.587121  448568 retry.go:31] will retry after 319.082011ms: waiting for machine to come up
	I1030 19:46:06.907974  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:06.908650  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:06.908683  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:06.908581  448568 retry.go:31] will retry after 418.339306ms: waiting for machine to come up
	I1030 19:46:07.328241  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:07.329035  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:07.329065  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:07.328988  448568 retry.go:31] will retry after 523.624135ms: waiting for machine to come up
	I1030 19:46:07.855234  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:07.855944  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:07.855970  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:07.855849  448568 retry.go:31] will retry after 556.06146ms: waiting for machine to come up
	I1030 19:46:08.413474  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:08.414059  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:08.414098  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:08.413947  448568 retry.go:31] will retry after 713.043389ms: waiting for machine to come up
	I1030 19:46:09.128274  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:09.128737  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:09.128762  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:09.128689  448568 retry.go:31] will retry after 1.096111238s: waiting for machine to come up
	I1030 19:46:08.144772  447486 crio.go:462] duration metric: took 1.627547543s to copy over tarball
	I1030 19:46:08.144845  447486 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1030 19:46:11.104192  447486 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.959302647s)
	I1030 19:46:11.104228  447486 crio.go:469] duration metric: took 2.959426051s to extract the tarball
	I1030 19:46:11.104240  447486 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1030 19:46:11.146584  447486 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:46:11.183766  447486 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1030 19:46:11.183797  447486 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1030 19:46:11.183889  447486 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:11.183917  447486 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.183932  447486 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:11.183968  447486 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.184087  447486 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.183972  447486 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1030 19:46:11.183969  447486 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.183928  447486 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:11.185976  447486 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:11.186001  447486 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1030 19:46:11.186043  447486 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.186053  447486 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.186046  447486 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:11.185977  447486 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:11.186108  447486 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.186150  447486 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.348134  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.391191  447486 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1030 19:46:11.391327  447486 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.391399  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.396693  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.400062  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.406656  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1030 19:46:11.410534  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.410590  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.441896  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:11.460400  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:11.482465  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.554431  447486 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1030 19:46:11.554480  447486 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.554549  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.610376  447486 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1030 19:46:11.610424  447486 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1030 19:46:11.610471  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.616060  447486 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1030 19:46:11.616104  447486 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.616153  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.616177  447486 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1030 19:46:11.616217  447486 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.616282  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.617473  447486 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1030 19:46:11.617502  447486 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:11.617535  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.652124  447486 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1030 19:46:11.652185  447486 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:11.652228  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.652233  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.652237  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.652331  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1030 19:46:11.652376  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.652433  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.652483  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:11.798844  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1030 19:46:11.798859  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1030 19:46:11.798873  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.798949  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:11.799075  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.799179  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.799182  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:08.303450  446965 pod_ready.go:93] pod "coredns-7c65d6cfc9-tv6kc" in "kube-system" namespace has status "Ready":"True"
	I1030 19:46:08.303482  446965 pod_ready.go:82] duration metric: took 9.009918893s for pod "coredns-7c65d6cfc9-tv6kc" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:08.303498  446965 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:08.312186  446965 pod_ready.go:93] pod "etcd-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:46:08.312213  446965 pod_ready.go:82] duration metric: took 8.706192ms for pod "etcd-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:08.312228  446965 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:10.320161  446965 pod_ready.go:103] pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:10.439107  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:12.439663  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:10.226842  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:10.227315  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:10.227346  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:10.227261  448568 retry.go:31] will retry after 1.165335625s: waiting for machine to come up
	I1030 19:46:11.394231  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:11.394817  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:11.394851  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:11.394763  448568 retry.go:31] will retry after 1.292571083s: waiting for machine to come up
	I1030 19:46:12.688486  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:12.688919  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:12.688965  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:12.688862  448568 retry.go:31] will retry after 1.97645889s: waiting for machine to come up
	I1030 19:46:14.667783  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:14.668245  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:14.668278  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:14.668200  448568 retry.go:31] will retry after 2.020488863s: waiting for machine to come up
	I1030 19:46:11.942258  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:11.942265  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1030 19:46:11.942365  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.942352  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.942421  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.946933  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:12.064951  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1030 19:46:12.067930  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:12.067990  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1030 19:46:12.068057  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1030 19:46:12.068078  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1030 19:46:12.083122  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1030 19:46:12.107265  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1030 19:46:13.402970  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:13.551979  447486 cache_images.go:92] duration metric: took 2.368158873s to LoadCachedImages
	W1030 19:46:13.552080  447486 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I1030 19:46:13.552096  447486 kubeadm.go:934] updating node { 192.168.50.250 8443 v1.20.0 crio true true} ...
	I1030 19:46:13.552211  447486 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-516975 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-516975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1030 19:46:13.552276  447486 ssh_runner.go:195] Run: crio config
	I1030 19:46:13.605982  447486 cni.go:84] Creating CNI manager for ""
	I1030 19:46:13.606008  447486 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:46:13.606020  447486 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1030 19:46:13.606049  447486 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.250 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-516975 NodeName:old-k8s-version-516975 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.250"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.250 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1030 19:46:13.606223  447486 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.250
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-516975"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.250
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.250"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1030 19:46:13.606299  447486 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1030 19:46:13.616954  447486 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 19:46:13.617034  447486 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1030 19:46:13.627440  447486 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1030 19:46:13.644821  447486 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 19:46:13.662070  447486 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1030 19:46:13.679198  447486 ssh_runner.go:195] Run: grep 192.168.50.250	control-plane.minikube.internal$ /etc/hosts
	I1030 19:46:13.682992  447486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.250	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:46:13.697879  447486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:46:13.819975  447486 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:46:13.838669  447486 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975 for IP: 192.168.50.250
	I1030 19:46:13.838695  447486 certs.go:194] generating shared ca certs ...
	I1030 19:46:13.838716  447486 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:46:13.838888  447486 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 19:46:13.838946  447486 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 19:46:13.838962  447486 certs.go:256] generating profile certs ...
	I1030 19:46:13.839064  447486 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/client.key
	I1030 19:46:13.839149  447486 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/apiserver.key.685bdf3e
	I1030 19:46:13.839208  447486 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/proxy-client.key
	I1030 19:46:13.839375  447486 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem (1338 bytes)
	W1030 19:46:13.839429  447486 certs.go:480] ignoring /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144_empty.pem, impossibly tiny 0 bytes
	I1030 19:46:13.839442  447486 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 19:46:13.839476  447486 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 19:46:13.839509  447486 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 19:46:13.839545  447486 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 19:46:13.839609  447486 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:46:13.840381  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 19:46:13.868947  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 19:46:13.923848  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 19:46:13.973167  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 19:46:14.009333  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1030 19:46:14.042397  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1030 19:46:14.073927  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 19:46:14.109209  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1030 19:46:14.135708  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /usr/share/ca-certificates/3891442.pem (1708 bytes)
	I1030 19:46:14.162145  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 19:46:14.186176  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem --> /usr/share/ca-certificates/389144.pem (1338 bytes)
	I1030 19:46:14.210362  447486 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1030 19:46:14.228727  447486 ssh_runner.go:195] Run: openssl version
	I1030 19:46:14.234436  447486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/389144.pem && ln -fs /usr/share/ca-certificates/389144.pem /etc/ssl/certs/389144.pem"
	I1030 19:46:14.245497  447486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/389144.pem
	I1030 19:46:14.250026  447486 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 19:46:14.250077  447486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/389144.pem
	I1030 19:46:14.255727  447486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/389144.pem /etc/ssl/certs/51391683.0"
	I1030 19:46:14.266674  447486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3891442.pem && ln -fs /usr/share/ca-certificates/3891442.pem /etc/ssl/certs/3891442.pem"
	I1030 19:46:14.277813  447486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3891442.pem
	I1030 19:46:14.282378  447486 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 19:46:14.282435  447486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3891442.pem
	I1030 19:46:14.288338  447486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3891442.pem /etc/ssl/certs/3ec20f2e.0"
	I1030 19:46:14.300057  447486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 19:46:14.312295  447486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:46:14.317488  447486 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:46:14.317555  447486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:46:14.323518  447486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 19:46:14.335182  447486 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 19:46:14.339998  447486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1030 19:46:14.346145  447486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1030 19:46:14.352474  447486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1030 19:46:14.358687  447486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1030 19:46:14.364275  447486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1030 19:46:14.370038  447486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
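
The cert handling above uses two stock OpenSSL idioms: trusting a CA by symlinking it under its subject hash inside /etc/ssl/certs, and failing fast if a certificate expires within 24 hours via -checkend. A minimal sketch of both, reusing paths from this run (the hash value is whatever -hash prints, e.g. b5213941 for minikubeCA above):

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
	# non-zero exit if the cert expires within the next 86400 seconds
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
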
	I1030 19:46:14.376051  447486 kubeadm.go:392] StartCluster: {Name:old-k8s-version-516975 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-516975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:46:14.376144  447486 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1030 19:46:14.376187  447486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:46:14.423395  447486 cri.go:89] found id: ""
	I1030 19:46:14.423477  447486 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1030 19:46:14.435404  447486 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1030 19:46:14.435485  447486 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1030 19:46:14.435558  447486 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1030 19:46:14.448035  447486 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1030 19:46:14.448911  447486 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-516975" does not appear in /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 19:46:14.449557  447486 kubeconfig.go:62] /home/jenkins/minikube-integration/19883-381834/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-516975" cluster setting kubeconfig missing "old-k8s-version-516975" context setting]
	I1030 19:46:14.450419  447486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/kubeconfig: {Name:mkad9ac9beb9742fc1a420e6d4412db8b65962e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
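
kubeconfig.go repairs the missing cluster and context entries itself here; a rough manual equivalent would look like the following hedged sketch (the user entry name is an assumption, the server address and CA path come from this run's logs):

	kubectl config set-cluster old-k8s-version-516975 \
	  --server=https://192.168.50.250:8443 \
	  --certificate-authority=/home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt
	kubectl config set-context old-k8s-version-516975 \
	  --cluster=old-k8s-version-516975 --user=old-k8s-version-516975
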
	I1030 19:46:14.452252  447486 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1030 19:46:14.462634  447486 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.250
	I1030 19:46:14.462676  447486 kubeadm.go:1160] stopping kube-system containers ...
	I1030 19:46:14.462693  447486 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1030 19:46:14.462750  447486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:46:14.508286  447486 cri.go:89] found id: ""
	I1030 19:46:14.508380  447486 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1030 19:46:14.527996  447486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:46:14.539011  447486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:46:14.539037  447486 kubeadm.go:157] found existing configuration files:
	
	I1030 19:46:14.539094  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 19:46:14.550159  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:46:14.550243  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:46:14.561350  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 19:46:14.571353  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:46:14.571430  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:46:14.584480  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 19:46:14.598307  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:46:14.598400  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:46:14.611632  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 19:46:14.621644  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:46:14.621705  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1030 19:46:14.632161  447486 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 19:46:14.642295  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:14.783130  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:15.694839  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:15.923329  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:16.052124  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:16.143607  447486 api_server.go:52] waiting for apiserver process to appear ...
	I1030 19:46:16.143710  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:16.643943  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:13.245727  446965 pod_ready.go:103] pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:13.702440  446965 pod_ready.go:93] pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:46:13.702472  446965 pod_ready.go:82] duration metric: took 5.390235543s for pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:13.702497  446965 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:13.948519  446965 pod_ready.go:93] pod "kube-controller-manager-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:46:13.948549  446965 pod_ready.go:82] duration metric: took 246.042214ms for pod "kube-controller-manager-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:13.948565  446965 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-qwjr9" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:13.958077  446965 pod_ready.go:93] pod "kube-proxy-qwjr9" in "kube-system" namespace has status "Ready":"True"
	I1030 19:46:13.958108  446965 pod_ready.go:82] duration metric: took 9.534813ms for pod "kube-proxy-qwjr9" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:13.958122  446965 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:13.974906  446965 pod_ready.go:93] pod "kube-scheduler-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:46:13.974931  446965 pod_ready.go:82] duration metric: took 16.800547ms for pod "kube-scheduler-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:13.974944  446965 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:15.982433  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:17.983261  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:14.440176  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:16.939769  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:16.690435  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:16.690908  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:16.690997  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:16.690904  448568 retry.go:31] will retry after 2.729556206s: waiting for machine to come up
	I1030 19:46:19.423740  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:19.424246  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:19.424271  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:19.424195  448568 retry.go:31] will retry after 2.822049517s: waiting for machine to come up
	I1030 19:46:17.144678  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:17.644772  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:18.144037  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:18.644437  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:19.144273  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:19.643801  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:20.144200  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:20.644764  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:21.143898  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:21.643960  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:20.481213  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:22.981619  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:19.438946  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:21.938706  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:22.247395  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:22.247840  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:22.247869  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:22.247813  448568 retry.go:31] will retry after 5.243633747s: waiting for machine to come up
	I1030 19:46:22.144625  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:22.644446  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:23.144207  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:23.644001  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:24.143787  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:24.644166  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:25.144397  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:25.644654  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:26.144214  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:26.644275  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
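
The repeated pgrep calls above are api_server.go's wait loop for the apiserver process, polled roughly every 500ms. A hedged shell equivalent of that wait:

	# poll until a kube-apiserver process launched for this minikube profile appears
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 0.5
	done
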
	I1030 19:46:25.482032  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:27.981111  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:23.940402  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:26.439369  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:27.494630  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.495107  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has current primary IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.495146  446736 main.go:141] libmachine: (no-preload-960512) Found IP for machine: 192.168.72.132
	I1030 19:46:27.495159  446736 main.go:141] libmachine: (no-preload-960512) Reserving static IP address...
	I1030 19:46:27.495588  446736 main.go:141] libmachine: (no-preload-960512) Reserved static IP address: 192.168.72.132
	I1030 19:46:27.495612  446736 main.go:141] libmachine: (no-preload-960512) Waiting for SSH to be available...
	I1030 19:46:27.495634  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "no-preload-960512", mac: "52:54:00:71:5b:b2", ip: "192.168.72.132"} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:27.495664  446736 main.go:141] libmachine: (no-preload-960512) DBG | skip adding static IP to network mk-no-preload-960512 - found existing host DHCP lease matching {name: "no-preload-960512", mac: "52:54:00:71:5b:b2", ip: "192.168.72.132"}
	I1030 19:46:27.495678  446736 main.go:141] libmachine: (no-preload-960512) DBG | Getting to WaitForSSH function...
	I1030 19:46:27.497679  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.498051  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:27.498083  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.498231  446736 main.go:141] libmachine: (no-preload-960512) DBG | Using SSH client type: external
	I1030 19:46:27.498273  446736 main.go:141] libmachine: (no-preload-960512) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa (-rw-------)
	I1030 19:46:27.498316  446736 main.go:141] libmachine: (no-preload-960512) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.132 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 19:46:27.498344  446736 main.go:141] libmachine: (no-preload-960512) DBG | About to run SSH command:
	I1030 19:46:27.498355  446736 main.go:141] libmachine: (no-preload-960512) DBG | exit 0
	I1030 19:46:27.626476  446736 main.go:141] libmachine: (no-preload-960512) DBG | SSH cmd err, output: <nil>: 
	I1030 19:46:27.626850  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetConfigRaw
	I1030 19:46:27.627519  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetIP
	I1030 19:46:27.629913  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.630288  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:27.630326  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.630561  446736 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/config.json ...
	I1030 19:46:27.630778  446736 machine.go:93] provisionDockerMachine start ...
	I1030 19:46:27.630801  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:27.631021  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:27.633457  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.633849  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:27.633880  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.634032  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:27.634200  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:27.634393  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:27.634564  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:27.634741  446736 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:27.634940  446736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I1030 19:46:27.634952  446736 main.go:141] libmachine: About to run SSH command:
	hostname
	I1030 19:46:27.743135  446736 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1030 19:46:27.743167  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetMachineName
	I1030 19:46:27.743475  446736 buildroot.go:166] provisioning hostname "no-preload-960512"
	I1030 19:46:27.743516  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetMachineName
	I1030 19:46:27.743717  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:27.746369  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.746726  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:27.746758  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.746928  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:27.747114  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:27.747266  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:27.747380  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:27.747509  446736 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:27.747740  446736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I1030 19:46:27.747759  446736 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-960512 && echo "no-preload-960512" | sudo tee /etc/hostname
	I1030 19:46:27.872871  446736 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-960512
	
	I1030 19:46:27.872899  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:27.875533  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.875867  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:27.875908  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.876072  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:27.876274  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:27.876546  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:27.876690  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:27.876851  446736 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:27.877082  446736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I1030 19:46:27.877099  446736 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-960512' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-960512/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-960512' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 19:46:27.999551  446736 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 19:46:27.999617  446736 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 19:46:27.999654  446736 buildroot.go:174] setting up certificates
	I1030 19:46:27.999667  446736 provision.go:84] configureAuth start
	I1030 19:46:27.999689  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetMachineName
	I1030 19:46:27.999998  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetIP
	I1030 19:46:28.002874  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.003285  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.003317  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.003474  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:28.005987  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.006376  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.006418  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.006545  446736 provision.go:143] copyHostCerts
	I1030 19:46:28.006620  446736 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem, removing ...
	I1030 19:46:28.006639  446736 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 19:46:28.006707  446736 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 19:46:28.006846  446736 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem, removing ...
	I1030 19:46:28.006859  446736 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 19:46:28.006898  446736 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 19:46:28.006983  446736 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem, removing ...
	I1030 19:46:28.006993  446736 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 19:46:28.007023  446736 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 19:46:28.007102  446736 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.no-preload-960512 san=[127.0.0.1 192.168.72.132 localhost minikube no-preload-960512]
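
provision.go signs the machine's server certificate with the shared ca.pem/ca-key.pem and the SAN list shown above. A hand-rolled OpenSSL sketch of the same flow (the validity period and the san.ext helper file are assumptions; names and SANs come from the log):

	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	  -out server.csr -subj "/O=jenkins.no-preload-960512"
	printf 'subjectAltName=IP:127.0.0.1,IP:192.168.72.132,DNS:localhost,DNS:minikube,DNS:no-preload-960512\n' > san.ext
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
	  -CAcreateserial -days 365 -extfile san.ext -out server.pem
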
	I1030 19:46:28.317424  446736 provision.go:177] copyRemoteCerts
	I1030 19:46:28.317502  446736 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 19:46:28.317537  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:28.320089  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.320387  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.320419  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.320564  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:28.320776  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.320963  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:28.321116  446736 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa Username:docker}
	I1030 19:46:28.409344  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1030 19:46:28.434874  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 19:46:28.459903  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1030 19:46:28.486949  446736 provision.go:87] duration metric: took 487.261556ms to configureAuth
	I1030 19:46:28.486981  446736 buildroot.go:189] setting minikube options for container-runtime
	I1030 19:46:28.487219  446736 config.go:182] Loaded profile config "no-preload-960512": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:46:28.487322  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:28.489873  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.490180  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.490223  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.490349  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:28.490561  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.490719  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.490827  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:28.491003  446736 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:28.491199  446736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I1030 19:46:28.491216  446736 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 19:46:28.727045  446736 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 19:46:28.727081  446736 machine.go:96] duration metric: took 1.096287528s to provisionDockerMachine
	I1030 19:46:28.727095  446736 start.go:293] postStartSetup for "no-preload-960512" (driver="kvm2")
	I1030 19:46:28.727106  446736 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 19:46:28.727125  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:28.727460  446736 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 19:46:28.727490  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:28.730071  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.730445  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.730479  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.730652  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:28.730858  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.731010  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:28.731197  446736 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa Username:docker}
	I1030 19:46:28.817529  446736 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 19:46:28.822263  446736 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 19:46:28.822299  446736 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 19:46:28.822394  446736 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 19:46:28.822517  446736 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 19:46:28.822647  446736 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 19:46:28.832488  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:46:28.858165  446736 start.go:296] duration metric: took 131.055053ms for postStartSetup
	I1030 19:46:28.858211  446736 fix.go:56] duration metric: took 23.84652817s for fixHost
	I1030 19:46:28.858235  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:28.861136  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.861480  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.861513  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.861819  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:28.862059  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.862224  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.862373  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:28.862582  446736 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:28.862786  446736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I1030 19:46:28.862797  446736 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 19:46:28.975448  446736 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730317588.951806388
	
	I1030 19:46:28.975479  446736 fix.go:216] guest clock: 1730317588.951806388
	I1030 19:46:28.975489  446736 fix.go:229] Guest: 2024-10-30 19:46:28.951806388 +0000 UTC Remote: 2024-10-30 19:46:28.858215114 +0000 UTC m=+358.930371017 (delta=93.591274ms)
	I1030 19:46:28.975521  446736 fix.go:200] guest clock delta is within tolerance: 93.591274ms
	I1030 19:46:28.975529  446736 start.go:83] releasing machines lock for "no-preload-960512", held for 23.963879546s
	I1030 19:46:28.975555  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:28.975849  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetIP
	I1030 19:46:28.978813  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.979310  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.979341  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.979608  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:28.980197  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:28.980429  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:28.980522  446736 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 19:46:28.980567  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:28.980682  446736 ssh_runner.go:195] Run: cat /version.json
	I1030 19:46:28.980710  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:28.984058  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.984208  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.984410  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.984435  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.984582  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:28.984613  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.984636  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.984782  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.984798  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:28.984966  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:28.984974  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.985121  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:28.985187  446736 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa Username:docker}
	I1030 19:46:28.985260  446736 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa Username:docker}
	I1030 19:46:29.063734  446736 ssh_runner.go:195] Run: systemctl --version
	I1030 19:46:29.087821  446736 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 19:46:29.236289  446736 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 19:46:29.242997  446736 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 19:46:29.243088  446736 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 19:46:29.260802  446736 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 19:46:29.260836  446736 start.go:495] detecting cgroup driver to use...
	I1030 19:46:29.260930  446736 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 19:46:29.279572  446736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 19:46:29.293359  446736 docker.go:217] disabling cri-docker service (if available) ...
	I1030 19:46:29.293423  446736 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 19:46:29.306417  446736 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 19:46:29.319617  446736 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 19:46:29.440023  446736 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 19:46:29.585541  446736 docker.go:233] disabling docker service ...
	I1030 19:46:29.585630  446736 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 19:46:29.600459  446736 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 19:46:29.613611  446736 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 19:46:29.752666  446736 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 19:46:29.880152  446736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 19:46:29.893912  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 19:46:29.913099  446736 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1030 19:46:29.913160  446736 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:29.923800  446736 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 19:46:29.923882  446736 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:29.934880  446736 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:29.946088  446736 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:29.956644  446736 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 19:46:29.967199  446736 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:29.978863  446736 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:29.996225  446736 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:30.006604  446736 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 19:46:30.015954  446736 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 19:46:30.016017  446736 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 19:46:30.029194  446736 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 19:46:30.041316  446736 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:46:30.161438  446736 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1030 19:46:30.257137  446736 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 19:46:30.257209  446736 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 19:46:30.261981  446736 start.go:563] Will wait 60s for crictl version
	I1030 19:46:30.262052  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:30.266275  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 19:46:30.305128  446736 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1030 19:46:30.305228  446736 ssh_runner.go:195] Run: crio --version
	I1030 19:46:30.335445  446736 ssh_runner.go:195] Run: crio --version
	I1030 19:46:30.367026  446736 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
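	[Editor's note] The `ssh_runner.go:195] Run:` lines throughout this log are single commands executed on the node over SSH using the client set up in the earlier `sshutil.go:53` lines. Below is a minimal sketch of that pattern (not minikube's actual implementation) using golang.org/x/crypto/ssh; the address, user and key path are taken from the log above, everything else is an assumption for illustration.

```go
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote opens an SSH session to the node and runs one command,
// returning combined stdout/stderr -- a sketch of the ssh_runner pattern.
func runRemote(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM; host key not pinned in this sketch
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runRemote("192.168.72.132:22", "docker",
		"/home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa",
		"crio --version")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(out)
}
```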
	I1030 19:46:27.143768  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:27.644294  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:28.143819  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:28.643783  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:29.144405  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:29.643941  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:30.144680  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:30.644787  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:31.143873  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:31.643857  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:29.982162  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:32.480878  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:28.939126  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:30.939780  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:30.368355  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetIP
	I1030 19:46:30.371260  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:30.371651  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:30.371679  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:30.371922  446736 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1030 19:46:30.376282  446736 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:46:30.389078  446736 kubeadm.go:883] updating cluster {Name:no-preload-960512 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:no-preload-960512 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.132 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1030 19:46:30.389193  446736 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 19:46:30.389228  446736 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:46:30.423375  446736 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1030 19:46:30.423402  446736 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1030 19:46:30.423508  446736 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:30.423511  446736 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1030 19:46:30.423562  446736 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1030 19:46:30.423578  446736 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1030 19:46:30.423595  446736 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1030 19:46:30.423536  446736 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1030 19:46:30.423511  446736 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1030 19:46:30.423634  446736 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1030 19:46:30.424979  446736 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1030 19:46:30.424988  446736 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1030 19:46:30.424996  446736 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1030 19:46:30.424987  446736 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:30.425021  446736 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1030 19:46:30.425036  446736 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1030 19:46:30.425029  446736 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1030 19:46:30.425061  446736 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1030 19:46:30.612665  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1030 19:46:30.618602  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1030 19:46:30.636563  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1030 19:46:30.680808  446736 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1030 19:46:30.680858  446736 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1030 19:46:30.680911  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:30.749318  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1030 19:46:30.750405  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1030 19:46:30.751514  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1030 19:46:30.752746  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1030 19:46:30.768614  446736 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1030 19:46:30.768663  446736 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1030 19:46:30.768714  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:30.768723  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1030 19:46:30.881778  446736 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1030 19:46:30.881811  446736 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1030 19:46:30.881821  446736 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1030 19:46:30.881844  446736 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1030 19:46:30.881862  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:30.881883  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:30.884827  446736 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1030 19:46:30.884861  446736 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1030 19:46:30.884901  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:30.891812  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1030 19:46:30.891882  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1030 19:46:30.891907  446736 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1030 19:46:30.891940  446736 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1030 19:46:30.891981  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:30.891986  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1030 19:46:30.892142  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1030 19:46:30.893781  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1030 19:46:30.992346  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1030 19:46:30.992372  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1030 19:46:30.992404  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1030 19:46:30.995602  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1030 19:46:30.995730  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1030 19:46:30.995786  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1030 19:46:31.123892  446736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1030 19:46:31.123996  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1030 19:46:31.124018  446736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1030 19:46:31.132177  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1030 19:46:31.132209  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1030 19:46:31.132311  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1030 19:46:31.132335  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1030 19:46:31.220011  446736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1030 19:46:31.220043  446736 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1030 19:46:31.220100  446736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1030 19:46:31.220224  446736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1030 19:46:31.220329  446736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1030 19:46:31.262583  446736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1030 19:46:31.262685  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1030 19:46:31.262698  446736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1030 19:46:31.269015  446736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1030 19:46:31.269117  446736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1030 19:46:31.269710  446736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1030 19:46:31.269793  446736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1030 19:46:32.667341  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:33.216743  446736 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (1.99661544s)
	I1030 19:46:33.216787  446736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1030 19:46:33.216787  446736 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.996433716s)
	I1030 19:46:33.216820  446736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1030 19:46:33.216829  446736 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1030 19:46:33.216840  446736 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (1.95412356s)
	I1030 19:46:33.216872  446736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1030 19:46:33.216884  446736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1030 19:46:33.216925  446736 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2: (1.954216284s)
	I1030 19:46:33.216964  446736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1030 19:46:33.216989  446736 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (1.947854262s)
	I1030 19:46:33.217014  446736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1030 19:46:33.217027  446736 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2: (1.947220506s)
	I1030 19:46:33.217040  446736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1030 19:46:33.217059  446736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1030 19:46:33.217140  446736 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1030 19:46:33.217178  446736 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:33.217222  446736 ssh_runner.go:195] Run: which crictl
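	[Editor's note] The `cache_images.go` lines above follow one decision per image: inspect the runtime for the expected image ID, and if it is missing ("needs transfer"), remove the stale tag with crictl and load the cached tarball with podman. A sketch of that decision, with the image ID and tarball path copied from the log and everything else assumed:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// ensureImage mirrors the flow visible above (sketch, not minikube's code):
// if the runtime does not already hold the expected image ID, remove the
// stale tag and load the cached tarball.
func ensureImage(name, wantID, tarball string) error {
	out, _ := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", name).Output()
	if strings.TrimSpace(string(out)) == wantID {
		return nil // image already present with the right ID; nothing to transfer
	}
	// Stale or missing: drop the tag (ignore "not found" errors) and reload.
	_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", name).Run()
	if err := exec.Command("sudo", "podman", "load", "-i", tarball).Run(); err != nil {
		return fmt.Errorf("load %s: %w", tarball, err)
	}
	return nil
}

func main() {
	err := ensureImage("registry.k8s.io/kube-proxy:v1.31.2",
		"505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
		"/var/lib/minikube/images/kube-proxy_v1.31.2")
	fmt.Println(err)
}
```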
	I1030 19:46:32.144229  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:32.644079  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:33.144028  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:33.643950  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:34.143888  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:34.643861  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:35.144210  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:35.644677  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:36.144712  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:36.644549  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
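	[Editor's note] The interleaved 447486 lines are a wait loop: roughly every 500ms it runs `pgrep -xnf kube-apiserver.*minikube.*` until the apiserver process exists. A minimal sketch of that polling pattern, under the assumption the command runs on the node itself:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess sketches the ~500ms pgrep loop seen in the log:
// keep probing until kube-apiserver shows up or the deadline passes.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only if a matching process exists.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	fmt.Println(waitForAPIServerProcess(60 * time.Second))
}
```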
	I1030 19:46:34.481488  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:36.982429  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:33.438659  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:35.439072  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:37.440028  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:35.577178  446736 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (2.360267806s)
	I1030 19:46:35.577219  446736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1030 19:46:35.577227  446736 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2: (2.360144583s)
	I1030 19:46:35.577243  446736 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1030 19:46:35.577252  446736 ssh_runner.go:235] Completed: which crictl: (2.360017291s)
	I1030 19:46:35.577259  446736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1030 19:46:35.577305  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:35.577309  446736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1030 19:46:35.615490  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:39.492071  446736 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.914649003s)
	I1030 19:46:39.492116  446736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1030 19:46:39.492142  446736 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.876615301s)
	I1030 19:46:39.492211  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:39.492148  446736 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1030 19:46:39.492295  446736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1030 19:46:39.535258  446736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1030 19:46:39.535417  446736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1030 19:46:37.144681  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:37.643833  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:38.143783  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:38.644359  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:39.144745  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:39.644625  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:40.144535  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:40.643881  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:41.144754  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:41.644070  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:39.302627  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:41.480981  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:39.940272  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:42.439827  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:41.566095  446736 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.073767908s)
	I1030 19:46:41.566140  446736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1030 19:46:41.566167  446736 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1030 19:46:41.566169  446736 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.030723752s)
	I1030 19:46:41.566210  446736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1030 19:46:41.566224  446736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1030 19:46:43.628473  446736 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.06223599s)
	I1030 19:46:43.628500  446736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1030 19:46:43.628525  446736 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1030 19:46:43.628570  446736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1030 19:46:42.144672  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:42.644533  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:43.144320  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:43.644574  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:44.144465  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:44.644428  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:45.143785  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:45.643767  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:46.144467  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:46.644496  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:43.481495  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:45.481844  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:47.982318  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:44.940061  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:47.439131  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
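	[Editor's note] The 446965/446887 `pod_ready.go:103` lines keep re-evaluating whether the metrics-server pod has reached condition Ready=True. A hedged sketch of that predicate with client-go; the pod name is from the log, the kubeconfig path is hypothetical:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named pod has condition Ready=True --
// the same check the pod_ready lines above repeat until it flips or times out.
func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ok, err := podReady(cs, "kube-system", "metrics-server-6867b74b74-4x9t6")
	fmt.Println(ok, err)
}
```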
	I1030 19:46:45.079808  446736 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.451207821s)
	I1030 19:46:45.079843  446736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1030 19:46:45.079870  446736 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1030 19:46:45.079918  446736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1030 19:46:46.026472  446736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1030 19:46:46.026538  446736 cache_images.go:123] Successfully loaded all cached images
	I1030 19:46:46.026547  446736 cache_images.go:92] duration metric: took 15.603128567s to LoadCachedImages
	I1030 19:46:46.026562  446736 kubeadm.go:934] updating node { 192.168.72.132 8443 v1.31.2 crio true true} ...
	I1030 19:46:46.026722  446736 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-960512 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.132
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-960512 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1030 19:46:46.026819  446736 ssh_runner.go:195] Run: crio config
	I1030 19:46:46.080342  446736 cni.go:84] Creating CNI manager for ""
	I1030 19:46:46.080367  446736 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:46:46.080376  446736 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1030 19:46:46.080399  446736 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.132 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-960512 NodeName:no-preload-960512 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.132"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.132 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1030 19:46:46.080574  446736 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.132
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-960512"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.132"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.132"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1030 19:46:46.080645  446736 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1030 19:46:46.091323  446736 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 19:46:46.091400  446736 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1030 19:46:46.100320  446736 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1030 19:46:46.117369  446736 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 19:46:46.133667  446736 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1030 19:46:46.157251  446736 ssh_runner.go:195] Run: grep 192.168.72.132	control-plane.minikube.internal$ /etc/hosts
	I1030 19:46:46.161543  446736 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.132	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:46:46.173451  446736 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:46:46.303532  446736 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:46:46.321855  446736 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512 for IP: 192.168.72.132
	I1030 19:46:46.321883  446736 certs.go:194] generating shared ca certs ...
	I1030 19:46:46.321905  446736 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:46:46.322108  446736 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 19:46:46.322171  446736 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 19:46:46.322189  446736 certs.go:256] generating profile certs ...
	I1030 19:46:46.322294  446736 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/client.key
	I1030 19:46:46.322381  446736 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/apiserver.key.378d6029
	I1030 19:46:46.322436  446736 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/proxy-client.key
	I1030 19:46:46.322609  446736 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem (1338 bytes)
	W1030 19:46:46.322649  446736 certs.go:480] ignoring /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144_empty.pem, impossibly tiny 0 bytes
	I1030 19:46:46.322661  446736 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 19:46:46.322692  446736 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 19:46:46.322727  446736 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 19:46:46.322756  446736 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 19:46:46.322812  446736 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:46:46.323679  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 19:46:46.362339  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 19:46:46.396270  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 19:46:46.443482  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 19:46:46.468142  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1030 19:46:46.507418  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1030 19:46:46.534091  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 19:46:46.557105  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1030 19:46:46.579880  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 19:46:46.602665  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem --> /usr/share/ca-certificates/389144.pem (1338 bytes)
	I1030 19:46:46.625853  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /usr/share/ca-certificates/3891442.pem (1708 bytes)
	I1030 19:46:46.651685  446736 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1030 19:46:46.670898  446736 ssh_runner.go:195] Run: openssl version
	I1030 19:46:46.677083  446736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3891442.pem && ln -fs /usr/share/ca-certificates/3891442.pem /etc/ssl/certs/3891442.pem"
	I1030 19:46:46.688814  446736 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3891442.pem
	I1030 19:46:46.693349  446736 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 19:46:46.693399  446736 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3891442.pem
	I1030 19:46:46.699221  446736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3891442.pem /etc/ssl/certs/3ec20f2e.0"
	I1030 19:46:46.710200  446736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 19:46:46.721001  446736 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:46:46.725283  446736 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:46:46.725343  446736 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:46:46.730798  446736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 19:46:46.741915  446736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/389144.pem && ln -fs /usr/share/ca-certificates/389144.pem /etc/ssl/certs/389144.pem"
	I1030 19:46:46.752767  446736 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/389144.pem
	I1030 19:46:46.757109  446736 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 19:46:46.757150  446736 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/389144.pem
	I1030 19:46:46.762844  446736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/389144.pem /etc/ssl/certs/51391683.0"
	I1030 19:46:46.773796  446736 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 19:46:46.778156  446736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1030 19:46:46.784099  446736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1030 19:46:46.789960  446736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1030 19:46:46.796056  446736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1030 19:46:46.801880  446736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1030 19:46:46.807680  446736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
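	[Editor's note] The `openssl x509 -noout -in <cert> -checkend 86400` runs above verify that each control-plane certificate will still be valid 24 hours from now. A small sketch of the equivalent check in Go with crypto/x509 (assumes it runs on the node where the cert files live):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin mirrors `openssl x509 -checkend`: does the certificate at
// path expire within the given window?
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
```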
	I1030 19:46:46.813574  446736 kubeadm.go:392] StartCluster: {Name:no-preload-960512 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.2 ClusterName:no-preload-960512 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.132 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:46:46.813694  446736 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1030 19:46:46.813735  446736 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:46:46.856225  446736 cri.go:89] found id: ""
	I1030 19:46:46.856309  446736 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1030 19:46:46.866696  446736 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1030 19:46:46.866721  446736 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1030 19:46:46.866774  446736 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1030 19:46:46.876622  446736 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1030 19:46:46.877777  446736 kubeconfig.go:125] found "no-preload-960512" server: "https://192.168.72.132:8443"
	I1030 19:46:46.880116  446736 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1030 19:46:46.889710  446736 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.132
	I1030 19:46:46.889743  446736 kubeadm.go:1160] stopping kube-system containers ...
	I1030 19:46:46.889761  446736 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1030 19:46:46.889837  446736 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:46:46.927109  446736 cri.go:89] found id: ""
	I1030 19:46:46.927177  446736 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1030 19:46:46.944519  446736 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:46:46.954607  446736 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:46:46.954626  446736 kubeadm.go:157] found existing configuration files:
	
	I1030 19:46:46.954669  446736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 19:46:46.963987  446736 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:46:46.964086  446736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:46:46.973787  446736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 19:46:46.983447  446736 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:46:46.983496  446736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:46:46.993101  446736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 19:46:47.003713  446736 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:46:47.003773  446736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:46:47.013162  446736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 19:46:47.022411  446736 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:46:47.022479  446736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1030 19:46:47.031878  446736 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 19:46:47.041616  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:47.156846  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:48.637250  446736 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.480364831s)
	I1030 19:46:48.637284  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:48.836676  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:48.908664  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:48.987298  446736 api_server.go:52] waiting for apiserver process to appear ...
	I1030 19:46:48.987411  446736 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:49.488330  446736 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:47.143932  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:47.644228  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:48.144124  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:48.643923  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:49.144466  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:49.643968  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:50.144811  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:50.643785  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:51.144372  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:51.644019  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:49.983127  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:52.482250  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:49.939257  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:52.439840  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:49.988463  446736 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:50.024092  446736 api_server.go:72] duration metric: took 1.036791371s to wait for apiserver process to appear ...
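The "wait for apiserver process to appear" step above amounts to retrying pgrep until it finds a matching process or a deadline passes. A minimal Go sketch of that pattern (illustrative only, not minikube's actual code; the pattern string and timeout are taken from the log, the helper name is invented):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForProcess retries pgrep until it exits 0 (a process matches) or
    // the timeout elapses. pgrep -xnf matches the full command line of the
    // newest process, as in the log lines above.
    func waitForProcess(pattern string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
                return nil // a matching process exists
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("no process matching %q within %s", pattern, timeout)
    }

    func main() {
        if err := waitForProcess("kube-apiserver.*minikube.*", time.Minute); err != nil {
            fmt.Println(err)
        }
    }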
	I1030 19:46:50.024127  446736 api_server.go:88] waiting for apiserver healthz status ...
	I1030 19:46:50.024155  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:50.024711  446736 api_server.go:269] stopped: https://192.168.72.132:8443/healthz: Get "https://192.168.72.132:8443/healthz": dial tcp 192.168.72.132:8443: connect: connection refused
	I1030 19:46:50.524543  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:52.757497  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1030 19:46:52.757537  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1030 19:46:52.757558  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:52.847598  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:46:52.847638  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:46:53.024885  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:53.030717  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:46:53.030749  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:46:53.524384  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:53.531420  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:46:53.531459  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:46:54.025006  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:54.030512  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:46:54.030545  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:46:54.525157  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:54.529426  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:46:54.529453  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:46:55.025276  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:55.029608  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:46:55.029634  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:46:55.525041  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:55.529303  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:46:55.529339  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:46:56.024906  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:56.029520  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 200:
	ok
	I1030 19:46:56.035579  446736 api_server.go:141] control plane version: v1.31.2
	I1030 19:46:56.035609  446736 api_server.go:131] duration metric: took 6.011468992s to wait for apiserver health ...
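The healthz wait logged above progresses from "connection refused" to 403 (anonymous user), then 500 while post-start hooks finish, and finally 200. A minimal Go sketch of that polling loop, assuming the endpoint URL and timeout shown in the log and skipping TLS verification because the test cluster uses a self-signed CA (this is an illustration, not minikube's implementation):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns
    // HTTP 200 or the timeout elapses. Intermediate 403/500 responses, as
    // seen in the log, are treated as "not ready yet".
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // apiserver reports healthy
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not healthy within %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.72.132:8443/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }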
	I1030 19:46:56.035619  446736 cni.go:84] Creating CNI manager for ""
	I1030 19:46:56.035625  446736 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:46:56.037524  446736 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1030 19:46:52.144732  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:52.644528  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:53.144074  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:53.643889  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:54.143976  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:54.644535  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:55.144783  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:55.644114  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:56.144728  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:56.643846  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:56.038963  446736 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1030 19:46:56.050330  446736 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1030 19:46:56.069509  446736 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 19:46:56.079237  446736 system_pods.go:59] 8 kube-system pods found
	I1030 19:46:56.079268  446736 system_pods.go:61] "coredns-7c65d6cfc9-6cdl4" [95a0a01a-b0ea-4e0c-ac44-b7f7a486e4e0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1030 19:46:56.079275  446736 system_pods.go:61] "etcd-no-preload-960512" [481cc4bc-fe2e-48fd-a486-574daf879096] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1030 19:46:56.079283  446736 system_pods.go:61] "kube-apiserver-no-preload-960512" [3662715f-dd2b-4522-aed3-3857e1104b8c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1030 19:46:56.079288  446736 system_pods.go:61] "kube-controller-manager-no-preload-960512" [cb6045a8-1703-4693-90bf-81c7c990cb38] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1030 19:46:56.079294  446736 system_pods.go:61] "kube-proxy-fxqqc" [58db3fab-21e3-41b7-99f9-46ba3081db97] Running
	I1030 19:46:56.079299  446736 system_pods.go:61] "kube-scheduler-no-preload-960512" [6e293c25-ba96-4213-b107-236a1b828918] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1030 19:46:56.079304  446736 system_pods.go:61] "metrics-server-6867b74b74-72bb5" [7734d879-b974-42fd-9610-7e81ee6cbc13] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:46:56.079307  446736 system_pods.go:61] "storage-provisioner" [d4637a77-26ab-4013-a705-08317c00dd3b] Running
	I1030 19:46:56.079313  446736 system_pods.go:74] duration metric: took 9.785027ms to wait for pod list to return data ...
	I1030 19:46:56.079327  446736 node_conditions.go:102] verifying NodePressure condition ...
	I1030 19:46:56.082617  446736 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 19:46:56.082644  446736 node_conditions.go:123] node cpu capacity is 2
	I1030 19:46:56.082658  446736 node_conditions.go:105] duration metric: took 3.325744ms to run NodePressure ...
	I1030 19:46:56.082680  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:56.353123  446736 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1030 19:46:56.357714  446736 kubeadm.go:739] kubelet initialised
	I1030 19:46:56.357740  446736 kubeadm.go:740] duration metric: took 4.581883ms waiting for restarted kubelet to initialise ...
	I1030 19:46:56.357755  446736 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:46:56.362687  446736 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:56.367124  446736 pod_ready.go:98] node "no-preload-960512" hosting pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.367153  446736 pod_ready.go:82] duration metric: took 4.443081ms for pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace to be "Ready" ...
	E1030 19:46:56.367165  446736 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-960512" hosting pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.367180  446736 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:56.371747  446736 pod_ready.go:98] node "no-preload-960512" hosting pod "etcd-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.371774  446736 pod_ready.go:82] duration metric: took 4.580967ms for pod "etcd-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	E1030 19:46:56.371785  446736 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-960512" hosting pod "etcd-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.371794  446736 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:56.375687  446736 pod_ready.go:98] node "no-preload-960512" hosting pod "kube-apiserver-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.375704  446736 pod_ready.go:82] duration metric: took 3.901023ms for pod "kube-apiserver-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	E1030 19:46:56.375712  446736 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-960512" hosting pod "kube-apiserver-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.375718  446736 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:56.472995  446736 pod_ready.go:98] node "no-preload-960512" hosting pod "kube-controller-manager-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.473036  446736 pod_ready.go:82] duration metric: took 97.300344ms for pod "kube-controller-manager-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	E1030 19:46:56.473047  446736 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-960512" hosting pod "kube-controller-manager-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.473056  446736 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-fxqqc" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:56.873717  446736 pod_ready.go:98] node "no-preload-960512" hosting pod "kube-proxy-fxqqc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.873749  446736 pod_ready.go:82] duration metric: took 400.680615ms for pod "kube-proxy-fxqqc" in "kube-system" namespace to be "Ready" ...
	E1030 19:46:56.873759  446736 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-960512" hosting pod "kube-proxy-fxqqc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.873765  446736 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:57.273361  446736 pod_ready.go:98] node "no-preload-960512" hosting pod "kube-scheduler-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:57.273392  446736 pod_ready.go:82] duration metric: took 399.61983ms for pod "kube-scheduler-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	E1030 19:46:57.273405  446736 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-960512" hosting pod "kube-scheduler-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:57.273415  446736 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:57.674201  446736 pod_ready.go:98] node "no-preload-960512" hosting pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:57.674236  446736 pod_ready.go:82] duration metric: took 400.809663ms for pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace to be "Ready" ...
	E1030 19:46:57.674251  446736 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-960512" hosting pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:57.674260  446736 pod_ready.go:39] duration metric: took 1.31649331s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
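The pod_ready checks above hinge on the pod's PodReady condition: while the node reports Ready=False, every system pod is skipped. A minimal client-go sketch of that check (assumptions: the kubeconfig path and pod name are placeholders; the polling interval and count are illustrative, not minikube's code):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's PodReady condition is True,
    // which is the condition the pod_ready waits in the log look for.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for i := 0; i < 240; i++ { // poll for up to ~4 minutes, matching the log's budget
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-6cdl4", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(time.Second)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }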
	I1030 19:46:57.674285  446736 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1030 19:46:57.687464  446736 ops.go:34] apiserver oom_adj: -16
	I1030 19:46:57.687489  446736 kubeadm.go:597] duration metric: took 10.820761471s to restartPrimaryControlPlane
	I1030 19:46:57.687498  446736 kubeadm.go:394] duration metric: took 10.873934509s to StartCluster
	I1030 19:46:57.687514  446736 settings.go:142] acquiring lock: {Name:mk3778f95b8c1775512fd8244c173f81fde2f8a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:46:57.687586  446736 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 19:46:57.689255  446736 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/kubeconfig: {Name:mkad9ac9beb9742fc1a420e6d4412db8b65962e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:46:57.689496  446736 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.132 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 19:46:57.689574  446736 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1030 19:46:57.689683  446736 addons.go:69] Setting storage-provisioner=true in profile "no-preload-960512"
	I1030 19:46:57.689706  446736 addons.go:234] Setting addon storage-provisioner=true in "no-preload-960512"
	I1030 19:46:57.689708  446736 addons.go:69] Setting metrics-server=true in profile "no-preload-960512"
	W1030 19:46:57.689719  446736 addons.go:243] addon storage-provisioner should already be in state true
	I1030 19:46:57.689727  446736 addons.go:234] Setting addon metrics-server=true in "no-preload-960512"
	W1030 19:46:57.689737  446736 addons.go:243] addon metrics-server should already be in state true
	I1030 19:46:57.689755  446736 host.go:66] Checking if "no-preload-960512" exists ...
	I1030 19:46:57.689791  446736 config.go:182] Loaded profile config "no-preload-960512": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:46:57.689761  446736 host.go:66] Checking if "no-preload-960512" exists ...
	I1030 19:46:57.689707  446736 addons.go:69] Setting default-storageclass=true in profile "no-preload-960512"
	I1030 19:46:57.689912  446736 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-960512"
	I1030 19:46:57.690245  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:57.690258  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:57.690264  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:57.690297  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:57.690303  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:57.690322  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:57.691365  446736 out.go:177] * Verifying Kubernetes components...
	I1030 19:46:57.692941  446736 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:46:57.727794  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43995
	I1030 19:46:57.727877  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44207
	I1030 19:46:57.728127  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45127
	I1030 19:46:57.728276  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:57.728414  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:57.728517  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:57.728861  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:57.728879  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:57.729032  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:57.729053  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:57.729056  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:57.729064  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:57.729350  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:57.729429  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:57.729452  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:57.730008  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:57.730051  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:57.730124  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:57.730362  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetState
	I1030 19:46:57.731104  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:57.734295  446736 addons.go:234] Setting addon default-storageclass=true in "no-preload-960512"
	W1030 19:46:57.734316  446736 addons.go:243] addon default-storageclass should already be in state true
	I1030 19:46:57.734349  446736 host.go:66] Checking if "no-preload-960512" exists ...
	I1030 19:46:57.734742  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:57.734810  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:57.747185  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35811
	I1030 19:46:57.747680  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:57.748340  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:57.748360  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:57.748795  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:57.749029  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetState
	I1030 19:46:57.749722  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34849
	I1030 19:46:57.750318  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:57.754616  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45369
	I1030 19:46:57.754666  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:57.755024  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:57.755052  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:57.755555  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:57.755672  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:57.757159  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetState
	I1030 19:46:57.757166  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:57.757184  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:57.757504  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:57.757804  446736 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:57.758045  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:57.758089  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:57.759001  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:57.759300  446736 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 19:46:57.759313  446736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1030 19:46:57.759327  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:57.762134  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:57.762557  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:57.762582  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:57.762740  446736 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1030 19:46:54.485910  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:56.981415  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:54.939168  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:56.940263  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:57.762828  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:57.763037  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:57.763192  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:57.763344  446736 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa Username:docker}
	I1030 19:46:57.763936  446736 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1030 19:46:57.763953  446736 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1030 19:46:57.763970  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:57.766410  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:57.766771  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:57.766795  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:57.767034  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:57.767212  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:57.767385  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:57.767522  446736 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa Username:docker}
	I1030 19:46:57.776037  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45193
	I1030 19:46:57.776386  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:57.776846  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:57.776864  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:57.777184  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:57.777339  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetState
	I1030 19:46:57.778829  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:57.779118  446736 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1030 19:46:57.779138  446736 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1030 19:46:57.779156  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:57.781325  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:57.781590  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:57.781615  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:57.781755  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:57.781895  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:57.781995  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:57.782088  446736 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa Username:docker}
	I1030 19:46:57.895549  446736 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:46:57.913030  446736 node_ready.go:35] waiting up to 6m0s for node "no-preload-960512" to be "Ready" ...
	I1030 19:46:58.008228  446736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 19:46:58.009206  446736 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1030 19:46:58.009222  446736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1030 19:46:58.034347  446736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1030 19:46:58.036620  446736 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1030 19:46:58.036646  446736 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1030 19:46:58.140489  446736 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1030 19:46:58.140522  446736 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1030 19:46:58.181145  446736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1030 19:46:59.403246  446736 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.368855241s)
	I1030 19:46:59.403317  446736 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.395049308s)
	I1030 19:46:59.403331  446736 main.go:141] libmachine: Making call to close driver server
	I1030 19:46:59.403340  446736 main.go:141] libmachine: (no-preload-960512) Calling .Close
	I1030 19:46:59.403356  446736 main.go:141] libmachine: Making call to close driver server
	I1030 19:46:59.403369  446736 main.go:141] libmachine: (no-preload-960512) Calling .Close
	I1030 19:46:59.403657  446736 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:46:59.403673  446736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:46:59.403681  446736 main.go:141] libmachine: Making call to close driver server
	I1030 19:46:59.403688  446736 main.go:141] libmachine: (no-preload-960512) Calling .Close
	I1030 19:46:59.403766  446736 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:46:59.403770  446736 main.go:141] libmachine: (no-preload-960512) DBG | Closing plugin on server side
	I1030 19:46:59.403778  446736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:46:59.403790  446736 main.go:141] libmachine: Making call to close driver server
	I1030 19:46:59.403796  446736 main.go:141] libmachine: (no-preload-960512) Calling .Close
	I1030 19:46:59.403939  446736 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:46:59.403954  446736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:46:59.404023  446736 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:46:59.404059  446736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:46:59.404071  446736 main.go:141] libmachine: (no-preload-960512) DBG | Closing plugin on server side
	I1030 19:46:59.411114  446736 main.go:141] libmachine: Making call to close driver server
	I1030 19:46:59.411136  446736 main.go:141] libmachine: (no-preload-960512) Calling .Close
	I1030 19:46:59.411365  446736 main.go:141] libmachine: (no-preload-960512) DBG | Closing plugin on server side
	I1030 19:46:59.411421  446736 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:46:59.411437  446736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:46:59.513065  446736 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.33186887s)
	I1030 19:46:59.513150  446736 main.go:141] libmachine: Making call to close driver server
	I1030 19:46:59.513168  446736 main.go:141] libmachine: (no-preload-960512) Calling .Close
	I1030 19:46:59.513455  446736 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:46:59.513481  446736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:46:59.513486  446736 main.go:141] libmachine: (no-preload-960512) DBG | Closing plugin on server side
	I1030 19:46:59.513491  446736 main.go:141] libmachine: Making call to close driver server
	I1030 19:46:59.513537  446736 main.go:141] libmachine: (no-preload-960512) Calling .Close
	I1030 19:46:59.513769  446736 main.go:141] libmachine: (no-preload-960512) DBG | Closing plugin on server side
	I1030 19:46:59.513797  446736 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:46:59.513809  446736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:46:59.513826  446736 addons.go:475] Verifying addon metrics-server=true in "no-preload-960512"
	I1030 19:46:59.516354  446736 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1030 19:46:59.517886  446736 addons.go:510] duration metric: took 1.828322965s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
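	(Editor's note, illustrative only.) The addon step logged above reduces to running the guest's bundled kubectl against the in-VM kubeconfig. A minimal sketch of an equivalent invocation, assuming plain ssh with the machine key shown in the log is acceptable; the helper name applyAddon is hypothetical and this is not minikube's own SSH runner:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// applyAddon mirrors the kubectl command visible in the log: it runs the
	// guest's bundled kubectl over SSH so the manifests are applied from inside
	// the VM against /var/lib/minikube/kubeconfig.
	func applyAddon(ip, sshKey string, manifests ...string) error {
		remote := "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
			"/var/lib/minikube/binaries/v1.31.2/kubectl apply"
		for _, m := range manifests {
			remote += " -f " + m
		}
		cmd := exec.Command("ssh", "-i", sshKey, "-o", "StrictHostKeyChecking=no",
			"docker@"+ip, remote)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("apply failed: %v\n%s", err, out)
		}
		return nil
	}

	func main() {
		// IP, key path and manifest paths are copied from the log lines above.
		err := applyAddon("192.168.72.132",
			"/home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa",
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml")
		if err != nil {
			fmt.Println(err)
		}
	}
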
	I1030 19:46:59.916839  446736 node_ready.go:53] node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:57.143829  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:57.644245  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:58.144327  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:58.644684  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:59.144712  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:59.644799  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:00.144222  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:00.644111  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:01.144268  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:01.644631  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:58.982694  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:00.984014  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:59.439638  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:01.939460  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:02.416750  446736 node_ready.go:53] node "no-preload-960512" has status "Ready":"False"
	I1030 19:47:03.416443  446736 node_ready.go:49] node "no-preload-960512" has status "Ready":"True"
	I1030 19:47:03.416469  446736 node_ready.go:38] duration metric: took 5.503404181s for node "no-preload-960512" to be "Ready" ...
	I1030 19:47:03.416479  446736 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:47:03.422219  446736 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:02.143881  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:02.644208  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:03.144411  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:03.643948  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:04.144028  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:04.644179  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:05.144791  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:05.643983  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:06.143859  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:06.644436  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:03.481239  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:05.481271  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:07.482108  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:04.439288  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:06.439454  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:05.428589  446736 pod_ready.go:103] pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:07.430975  446736 pod_ready.go:103] pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:09.928214  446736 pod_ready.go:103] pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:07.144765  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:07.644280  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:08.144381  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:08.644099  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:09.144129  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:09.643864  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:10.144105  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:10.643752  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:11.144135  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:11.644172  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:09.982150  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:12.481265  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:08.939357  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:10.940087  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:10.430572  446736 pod_ready.go:93] pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace has status "Ready":"True"
	I1030 19:47:10.430598  446736 pod_ready.go:82] duration metric: took 7.008352985s for pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.430610  446736 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.436673  446736 pod_ready.go:93] pod "etcd-no-preload-960512" in "kube-system" namespace has status "Ready":"True"
	I1030 19:47:10.436699  446736 pod_ready.go:82] duration metric: took 6.082545ms for pod "etcd-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.436711  446736 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.442262  446736 pod_ready.go:93] pod "kube-apiserver-no-preload-960512" in "kube-system" namespace has status "Ready":"True"
	I1030 19:47:10.442282  446736 pod_ready.go:82] duration metric: took 5.563816ms for pod "kube-apiserver-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.442292  446736 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.446170  446736 pod_ready.go:93] pod "kube-controller-manager-no-preload-960512" in "kube-system" namespace has status "Ready":"True"
	I1030 19:47:10.446189  446736 pod_ready.go:82] duration metric: took 3.890123ms for pod "kube-controller-manager-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.446198  446736 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fxqqc" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.450190  446736 pod_ready.go:93] pod "kube-proxy-fxqqc" in "kube-system" namespace has status "Ready":"True"
	I1030 19:47:10.450216  446736 pod_ready.go:82] duration metric: took 4.011125ms for pod "kube-proxy-fxqqc" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.450226  446736 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.826537  446736 pod_ready.go:93] pod "kube-scheduler-no-preload-960512" in "kube-system" namespace has status "Ready":"True"
	I1030 19:47:10.826572  446736 pod_ready.go:82] duration metric: took 376.338504ms for pod "kube-scheduler-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.826587  446736 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:12.834756  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
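	(Editor's note, illustrative only.) The node_ready/pod_ready polling above can be approximated outside the harness with kubectl's jsonpath output. The sketch below is a rough stand-in for that behaviour, not the code in pod_ready.go; the pod name is taken from the log, and in this run it never reports Ready, so the loop would keep polling just as the log does:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// readyStatus asks kubectl for the Ready condition of a node or pod,
	// matching the "Ready":"True"/"False" values shown in the log.
	func readyStatus(kind, namespace, name string) (string, error) {
		args := []string{"get", kind, name,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`}
		if namespace != "" {
			args = append(args, "-n", namespace)
		}
		out, err := exec.Command("kubectl", args...).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		// Poll every 2s until the metrics-server pod reports Ready.
		for {
			s, err := readyStatus("pod", "kube-system", "metrics-server-6867b74b74-72bb5")
			fmt.Printf("Ready=%q err=%v\n", s, err)
			if s == "True" {
				return
			}
			time.Sleep(2 * time.Second)
		}
	}
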
	I1030 19:47:12.144391  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:12.644441  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:13.143916  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:13.644779  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:14.144680  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:14.644634  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:15.144050  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:15.644738  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:16.143957  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:16.144037  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:16.184282  447486 cri.go:89] found id: ""
	I1030 19:47:16.184310  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.184320  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:16.184327  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:16.184403  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:16.225359  447486 cri.go:89] found id: ""
	I1030 19:47:16.225388  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.225397  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:16.225403  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:16.225471  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:16.260591  447486 cri.go:89] found id: ""
	I1030 19:47:16.260625  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.260635  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:16.260641  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:16.260695  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:16.299562  447486 cri.go:89] found id: ""
	I1030 19:47:16.299591  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.299602  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:16.299609  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:16.299682  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:16.334753  447486 cri.go:89] found id: ""
	I1030 19:47:16.334781  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.334789  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:16.334795  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:16.334877  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:16.371588  447486 cri.go:89] found id: ""
	I1030 19:47:16.371619  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.371628  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:16.371634  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:16.371689  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:16.406668  447486 cri.go:89] found id: ""
	I1030 19:47:16.406699  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.406710  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:16.406718  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:16.406786  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:16.443050  447486 cri.go:89] found id: ""
	I1030 19:47:16.443081  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.443096  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:16.443109  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:16.443125  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:16.492898  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:16.492936  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:16.506310  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:16.506343  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:16.637629  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:16.637660  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:16.637677  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:16.709581  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:16.709621  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
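	(Editor's note, illustrative only.) When pgrep finds no kube-apiserver process, the harness falls back to the diagnostics cycle shown above: per-component crictl queries, then kubelet, dmesg, describe-nodes, CRI-O and container-status output. A rough stand-in for that collection, meant to be run directly on the guest, with the command strings copied from the log; this is not minikube's logs.go:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a command and prints its combined output, mimicking how the
	// log above records each gathering step.
	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("== %s %v (err=%v)\n%s\n", name, args, err, out)
	}

	func main() {
		// Per-component container lookups, as in the cri.go lines above.
		for _, c := range []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager",
			"kindnet", "kubernetes-dashboard"} {
			run("sudo", "crictl", "ps", "-a", "--quiet", "--name="+c)
		}
		// Log gathering, as in the logs.go lines above.
		run("bash", "-c", "sudo journalctl -u kubelet -n 400")
		run("bash", "-c", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
		run("bash", "-c", "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
		run("bash", "-c", "sudo journalctl -u crio -n 400")
		run("bash", "-c", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	}
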
	I1030 19:47:14.481660  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:16.981807  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:13.438777  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:15.439457  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:17.939606  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:15.335280  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:17.833216  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:19.833320  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:19.253501  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:19.267200  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:19.267276  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:19.303608  447486 cri.go:89] found id: ""
	I1030 19:47:19.303641  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.303651  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:19.303658  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:19.303711  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:19.341311  447486 cri.go:89] found id: ""
	I1030 19:47:19.341343  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.341354  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:19.341363  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:19.341427  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:19.376949  447486 cri.go:89] found id: ""
	I1030 19:47:19.376977  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.376987  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:19.376996  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:19.377075  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:19.414164  447486 cri.go:89] found id: ""
	I1030 19:47:19.414197  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.414209  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:19.414218  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:19.414308  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:19.450637  447486 cri.go:89] found id: ""
	I1030 19:47:19.450671  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.450683  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:19.450692  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:19.450761  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:19.485315  447486 cri.go:89] found id: ""
	I1030 19:47:19.485345  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.485355  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:19.485364  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:19.485427  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:19.519873  447486 cri.go:89] found id: ""
	I1030 19:47:19.519901  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.519911  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:19.519919  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:19.519982  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:19.555168  447486 cri.go:89] found id: ""
	I1030 19:47:19.555198  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.555211  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:19.555223  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:19.555239  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:19.607227  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:19.607265  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:19.621465  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:19.621498  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:19.700837  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:19.700869  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:19.700882  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:19.774428  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:19.774468  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:18.982345  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:21.482165  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:19.940122  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:22.439405  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:22.333449  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:24.833942  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:22.314410  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:22.327998  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:22.328083  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:22.365583  447486 cri.go:89] found id: ""
	I1030 19:47:22.365611  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.365622  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:22.365631  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:22.365694  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:22.398964  447486 cri.go:89] found id: ""
	I1030 19:47:22.398996  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.399008  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:22.399016  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:22.399092  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:22.435132  447486 cri.go:89] found id: ""
	I1030 19:47:22.435169  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.435181  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:22.435188  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:22.435252  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:22.471510  447486 cri.go:89] found id: ""
	I1030 19:47:22.471544  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.471557  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:22.471574  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:22.471630  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:22.509611  447486 cri.go:89] found id: ""
	I1030 19:47:22.509639  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.509647  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:22.509653  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:22.509707  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:22.546502  447486 cri.go:89] found id: ""
	I1030 19:47:22.546539  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.546552  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:22.546560  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:22.546630  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:22.584560  447486 cri.go:89] found id: ""
	I1030 19:47:22.584593  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.584605  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:22.584613  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:22.584676  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:22.621421  447486 cri.go:89] found id: ""
	I1030 19:47:22.621461  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.621474  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:22.621486  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:22.621505  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:22.634998  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:22.635038  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:22.711002  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:22.711028  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:22.711047  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:22.790673  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:22.790712  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:22.831804  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:22.831851  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:25.386915  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:25.399854  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:25.399954  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:25.438346  447486 cri.go:89] found id: ""
	I1030 19:47:25.438381  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.438406  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:25.438416  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:25.438500  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:25.474888  447486 cri.go:89] found id: ""
	I1030 19:47:25.474915  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.474924  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:25.474931  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:25.474994  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:25.511925  447486 cri.go:89] found id: ""
	I1030 19:47:25.511955  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.511966  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:25.511973  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:25.512038  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:25.551027  447486 cri.go:89] found id: ""
	I1030 19:47:25.551058  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.551067  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:25.551073  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:25.551144  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:25.584736  447486 cri.go:89] found id: ""
	I1030 19:47:25.584764  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.584773  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:25.584779  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:25.584847  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:25.632765  447486 cri.go:89] found id: ""
	I1030 19:47:25.632798  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.632810  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:25.632818  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:25.632893  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:25.682501  447486 cri.go:89] found id: ""
	I1030 19:47:25.682528  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.682536  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:25.682543  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:25.682591  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:25.728306  447486 cri.go:89] found id: ""
	I1030 19:47:25.728340  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.728352  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:25.728365  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:25.728397  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:25.781908  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:25.781944  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:25.795864  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:25.795899  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:25.868350  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:25.868378  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:25.868392  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:25.944244  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:25.944277  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:23.981016  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:25.982186  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:24.942113  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:27.438568  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:27.333623  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:29.334460  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:28.488216  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:28.501481  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:28.501558  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:28.536808  447486 cri.go:89] found id: ""
	I1030 19:47:28.536838  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.536849  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:28.536857  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:28.536923  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:28.571819  447486 cri.go:89] found id: ""
	I1030 19:47:28.571855  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.571867  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:28.571885  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:28.571966  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:28.605532  447486 cri.go:89] found id: ""
	I1030 19:47:28.605571  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.605582  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:28.605610  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:28.605682  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:28.642108  447486 cri.go:89] found id: ""
	I1030 19:47:28.642140  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.642152  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:28.642159  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:28.642234  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:28.680036  447486 cri.go:89] found id: ""
	I1030 19:47:28.680065  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.680078  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:28.680086  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:28.680151  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:28.716135  447486 cri.go:89] found id: ""
	I1030 19:47:28.716162  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.716171  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:28.716177  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:28.716238  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:28.752364  447486 cri.go:89] found id: ""
	I1030 19:47:28.752398  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.752406  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:28.752413  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:28.752478  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:28.788396  447486 cri.go:89] found id: ""
	I1030 19:47:28.788434  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.788447  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:28.788461  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:28.788476  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:28.841560  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:28.841595  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:28.856134  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:28.856178  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:28.930463  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:28.930507  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:28.930527  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:29.013743  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:29.013795  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:31.557942  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:31.573562  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:31.573654  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:31.625349  447486 cri.go:89] found id: ""
	I1030 19:47:31.625378  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.625386  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:31.625392  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:31.625442  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:31.689536  447486 cri.go:89] found id: ""
	I1030 19:47:31.689566  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.689574  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:31.689581  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:31.689632  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:31.723758  447486 cri.go:89] found id: ""
	I1030 19:47:31.723794  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.723806  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:31.723814  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:31.723890  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:31.762671  447486 cri.go:89] found id: ""
	I1030 19:47:31.762698  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.762707  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:31.762713  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:31.762761  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:31.797658  447486 cri.go:89] found id: ""
	I1030 19:47:31.797686  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.797694  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:31.797702  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:31.797792  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:28.481158  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:30.981477  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:32.981593  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:29.439072  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:31.940019  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:31.833540  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:34.334678  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:31.832186  447486 cri.go:89] found id: ""
	I1030 19:47:31.832217  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.832228  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:31.832236  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:31.832298  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:31.866820  447486 cri.go:89] found id: ""
	I1030 19:47:31.866853  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.866866  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:31.866875  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:31.866937  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:31.901888  447486 cri.go:89] found id: ""
	I1030 19:47:31.901913  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.901922  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:31.901932  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:31.901944  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:31.992343  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:31.992380  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:32.030519  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:32.030559  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:32.084442  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:32.084478  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:32.098919  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:32.098954  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:32.171034  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:34.671243  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:34.685879  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:34.685972  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:34.720657  447486 cri.go:89] found id: ""
	I1030 19:47:34.720686  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.720694  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:34.720700  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:34.720757  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:34.759571  447486 cri.go:89] found id: ""
	I1030 19:47:34.759602  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.759615  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:34.759624  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:34.759685  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:34.795273  447486 cri.go:89] found id: ""
	I1030 19:47:34.795313  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.795322  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:34.795329  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:34.795450  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:34.828999  447486 cri.go:89] found id: ""
	I1030 19:47:34.829035  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.829047  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:34.829054  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:34.829158  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:34.865620  447486 cri.go:89] found id: ""
	I1030 19:47:34.865661  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.865674  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:34.865682  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:34.865753  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:34.900768  447486 cri.go:89] found id: ""
	I1030 19:47:34.900801  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.900812  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:34.900820  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:34.900889  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:34.945023  447486 cri.go:89] found id: ""
	I1030 19:47:34.945048  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.945057  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:34.945063  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:34.945118  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:34.980458  447486 cri.go:89] found id: ""
	I1030 19:47:34.980483  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.980492  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:34.980501  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:34.980514  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:35.052570  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:35.052597  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:35.052613  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:35.133825  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:35.133869  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:35.176016  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:35.176063  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:35.228866  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:35.228903  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:34.982702  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:37.481103  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:34.438712  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:36.938856  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:36.837275  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:39.332612  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:37.743408  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:37.757472  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:37.757547  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:37.794818  447486 cri.go:89] found id: ""
	I1030 19:47:37.794847  447486 logs.go:282] 0 containers: []
	W1030 19:47:37.794856  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:37.794862  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:37.794928  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:37.830025  447486 cri.go:89] found id: ""
	I1030 19:47:37.830064  447486 logs.go:282] 0 containers: []
	W1030 19:47:37.830077  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:37.830086  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:37.830150  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:37.864862  447486 cri.go:89] found id: ""
	I1030 19:47:37.864893  447486 logs.go:282] 0 containers: []
	W1030 19:47:37.864902  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:37.864908  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:37.864958  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:37.901650  447486 cri.go:89] found id: ""
	I1030 19:47:37.901699  447486 logs.go:282] 0 containers: []
	W1030 19:47:37.901713  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:37.901722  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:37.901780  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:37.935824  447486 cri.go:89] found id: ""
	I1030 19:47:37.935854  447486 logs.go:282] 0 containers: []
	W1030 19:47:37.935862  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:37.935868  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:37.935930  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:37.972774  447486 cri.go:89] found id: ""
	I1030 19:47:37.972805  447486 logs.go:282] 0 containers: []
	W1030 19:47:37.972813  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:37.972819  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:37.972868  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:38.007815  447486 cri.go:89] found id: ""
	I1030 19:47:38.007845  447486 logs.go:282] 0 containers: []
	W1030 19:47:38.007856  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:38.007864  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:38.007931  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:38.042525  447486 cri.go:89] found id: ""
	I1030 19:47:38.042559  447486 logs.go:282] 0 containers: []
	W1030 19:47:38.042571  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:38.042584  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:38.042600  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:38.122022  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:38.122048  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:38.122065  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:38.200534  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:38.200575  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:38.240118  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:38.240154  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:38.291936  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:38.291976  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
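	The cycle above from process 447486 is the old-k8s-version cluster's retry loop: it looks for a running kube-apiserver process with pgrep, asks the CRI runtime for containers of each control-plane component with crictl, finds none, and then gathers kubelet, dmesg, describe-nodes, CRI-O and container-status logs before retrying a few seconds later. A rough sketch of that per-component container check, assuming crictl is available on the node, is shown below; the component list is taken from the log, but the loop structure is illustrative, not minikube's implementation.

// Illustrative sketch only: probe the CRI runtime for each control-plane
// component the way the log's "sudo crictl ps -a --quiet --name=<component>"
// calls do, and retry while nothing is found.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// Components the log shows minikube probing for, in the same order.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
}

// containerIDs mirrors "sudo crictl ps -a --quiet --name=<component>":
// it returns the IDs of all containers (running or exited) for that component.
func containerIDs(name string) []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	// Bounded retries; the real loop also gathers kubelet, dmesg, CRI-O and
	// describe-nodes output between attempts, as the surrounding log shows.
	for attempt := 0; attempt < 10; attempt++ {
		found := 0
		for _, c := range components {
			ids := containerIDs(c)
			if len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", c)
				continue
			}
			found += len(ids)
		}
		if found > 0 {
			fmt.Println("control-plane containers found, stopping")
			return
		}
		time.Sleep(3 * time.Second)
	}
}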
	I1030 19:47:40.806105  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:40.821268  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:40.821343  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:40.857151  447486 cri.go:89] found id: ""
	I1030 19:47:40.857186  447486 logs.go:282] 0 containers: []
	W1030 19:47:40.857198  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:40.857207  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:40.857266  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:40.893603  447486 cri.go:89] found id: ""
	I1030 19:47:40.893639  447486 logs.go:282] 0 containers: []
	W1030 19:47:40.893648  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:40.893654  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:40.893720  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:40.935294  447486 cri.go:89] found id: ""
	I1030 19:47:40.935330  447486 logs.go:282] 0 containers: []
	W1030 19:47:40.935342  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:40.935349  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:40.935418  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:40.971509  447486 cri.go:89] found id: ""
	I1030 19:47:40.971536  447486 logs.go:282] 0 containers: []
	W1030 19:47:40.971544  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:40.971550  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:40.971610  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:41.009895  447486 cri.go:89] found id: ""
	I1030 19:47:41.009932  447486 logs.go:282] 0 containers: []
	W1030 19:47:41.009941  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:41.009948  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:41.010008  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:41.045170  447486 cri.go:89] found id: ""
	I1030 19:47:41.045208  447486 logs.go:282] 0 containers: []
	W1030 19:47:41.045221  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:41.045229  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:41.045288  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:41.077654  447486 cri.go:89] found id: ""
	I1030 19:47:41.077684  447486 logs.go:282] 0 containers: []
	W1030 19:47:41.077695  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:41.077704  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:41.077771  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:41.111509  447486 cri.go:89] found id: ""
	I1030 19:47:41.111543  447486 logs.go:282] 0 containers: []
	W1030 19:47:41.111552  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:41.111562  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:41.111574  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:41.164939  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:41.164976  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:41.178512  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:41.178589  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:41.258783  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:41.258813  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:41.258832  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:41.338192  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:41.338230  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:39.481210  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:41.481439  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:38.938987  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:40.941386  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:41.333705  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:43.833502  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:43.878155  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:43.892376  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:43.892452  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:43.930556  447486 cri.go:89] found id: ""
	I1030 19:47:43.930594  447486 logs.go:282] 0 containers: []
	W1030 19:47:43.930606  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:43.930614  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:43.930679  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:43.970588  447486 cri.go:89] found id: ""
	I1030 19:47:43.970619  447486 logs.go:282] 0 containers: []
	W1030 19:47:43.970630  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:43.970638  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:43.970706  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:44.005467  447486 cri.go:89] found id: ""
	I1030 19:47:44.005497  447486 logs.go:282] 0 containers: []
	W1030 19:47:44.005506  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:44.005512  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:44.005573  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:44.039126  447486 cri.go:89] found id: ""
	I1030 19:47:44.039164  447486 logs.go:282] 0 containers: []
	W1030 19:47:44.039173  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:44.039179  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:44.039239  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:44.072961  447486 cri.go:89] found id: ""
	I1030 19:47:44.072994  447486 logs.go:282] 0 containers: []
	W1030 19:47:44.073006  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:44.073014  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:44.073109  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:44.105864  447486 cri.go:89] found id: ""
	I1030 19:47:44.105891  447486 logs.go:282] 0 containers: []
	W1030 19:47:44.105900  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:44.105907  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:44.105956  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:44.138198  447486 cri.go:89] found id: ""
	I1030 19:47:44.138240  447486 logs.go:282] 0 containers: []
	W1030 19:47:44.138250  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:44.138264  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:44.138331  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:44.172529  447486 cri.go:89] found id: ""
	I1030 19:47:44.172558  447486 logs.go:282] 0 containers: []
	W1030 19:47:44.172567  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:44.172577  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:44.172594  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:44.248215  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:44.248254  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:44.286169  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:44.286202  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:44.341129  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:44.341167  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:44.354570  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:44.354597  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:44.427790  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:43.481483  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:45.482271  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:47.981312  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:43.440759  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:45.938783  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:47.940512  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:46.332448  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:48.333216  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:46.928728  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:46.943068  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:46.943154  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:46.978385  447486 cri.go:89] found id: ""
	I1030 19:47:46.978416  447486 logs.go:282] 0 containers: []
	W1030 19:47:46.978428  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:46.978436  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:46.978522  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:47.020413  447486 cri.go:89] found id: ""
	I1030 19:47:47.020457  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.020469  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:47.020476  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:47.020547  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:47.061492  447486 cri.go:89] found id: ""
	I1030 19:47:47.061526  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.061538  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:47.061547  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:47.061611  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:47.097621  447486 cri.go:89] found id: ""
	I1030 19:47:47.097659  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.097670  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:47.097679  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:47.097739  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:47.131740  447486 cri.go:89] found id: ""
	I1030 19:47:47.131769  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.131779  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:47.131785  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:47.131856  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:47.167623  447486 cri.go:89] found id: ""
	I1030 19:47:47.167661  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.167674  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:47.167682  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:47.167746  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:47.202299  447486 cri.go:89] found id: ""
	I1030 19:47:47.202328  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.202337  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:47.202344  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:47.202401  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:47.236652  447486 cri.go:89] found id: ""
	I1030 19:47:47.236686  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.236695  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:47.236704  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:47.236716  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:47.289700  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:47.289740  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:47.304929  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:47.304964  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:47.374811  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:47.374842  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:47.374858  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:47.449161  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:47.449196  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:49.989730  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:50.002741  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:50.002821  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:50.037602  447486 cri.go:89] found id: ""
	I1030 19:47:50.037636  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.037647  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:50.037655  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:50.037724  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:50.071346  447486 cri.go:89] found id: ""
	I1030 19:47:50.071383  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.071395  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:50.071405  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:50.071473  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:50.106657  447486 cri.go:89] found id: ""
	I1030 19:47:50.106698  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.106711  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:50.106719  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:50.106783  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:50.140974  447486 cri.go:89] found id: ""
	I1030 19:47:50.141012  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.141025  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:50.141032  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:50.141105  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:50.177715  447486 cri.go:89] found id: ""
	I1030 19:47:50.177748  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.177756  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:50.177763  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:50.177824  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:50.212234  447486 cri.go:89] found id: ""
	I1030 19:47:50.212263  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.212272  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:50.212278  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:50.212337  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:50.250791  447486 cri.go:89] found id: ""
	I1030 19:47:50.250826  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.250835  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:50.250842  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:50.250908  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:50.288575  447486 cri.go:89] found id: ""
	I1030 19:47:50.288607  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.288615  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:50.288628  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:50.288643  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:50.358015  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:50.358039  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:50.358054  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:50.433194  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:50.433235  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:50.473485  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:50.473519  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:50.523581  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:50.523618  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:49.981614  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:51.982079  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:50.439717  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:52.940170  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:50.333498  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:52.832848  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:54.833689  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:53.038393  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:53.052835  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:53.052910  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:53.088797  447486 cri.go:89] found id: ""
	I1030 19:47:53.088828  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.088837  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:53.088843  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:53.088897  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:53.124627  447486 cri.go:89] found id: ""
	I1030 19:47:53.124659  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.124668  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:53.124674  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:53.124724  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:53.159127  447486 cri.go:89] found id: ""
	I1030 19:47:53.159163  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.159175  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:53.159183  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:53.159244  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:53.191770  447486 cri.go:89] found id: ""
	I1030 19:47:53.191801  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.191810  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:53.191817  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:53.191885  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:53.227727  447486 cri.go:89] found id: ""
	I1030 19:47:53.227761  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.227774  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:53.227781  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:53.227842  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:53.262937  447486 cri.go:89] found id: ""
	I1030 19:47:53.262969  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.262981  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:53.262989  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:53.263060  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:53.296070  447486 cri.go:89] found id: ""
	I1030 19:47:53.296113  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.296124  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:53.296133  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:53.296197  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:53.332628  447486 cri.go:89] found id: ""
	I1030 19:47:53.332663  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.332674  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:53.332687  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:53.332702  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:53.385004  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:53.385046  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:53.400139  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:53.400185  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:53.477792  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:53.477826  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:53.477858  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:53.553145  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:53.553186  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:56.094454  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:56.107827  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:56.107900  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:56.141701  447486 cri.go:89] found id: ""
	I1030 19:47:56.141739  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.141751  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:56.141763  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:56.141831  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:56.179973  447486 cri.go:89] found id: ""
	I1030 19:47:56.180003  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.180016  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:56.180023  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:56.180099  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:56.220456  447486 cri.go:89] found id: ""
	I1030 19:47:56.220486  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.220496  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:56.220503  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:56.220578  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:56.259699  447486 cri.go:89] found id: ""
	I1030 19:47:56.259727  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.259736  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:56.259741  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:56.259791  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:56.302726  447486 cri.go:89] found id: ""
	I1030 19:47:56.302762  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.302775  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:56.302783  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:56.302850  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:56.339791  447486 cri.go:89] found id: ""
	I1030 19:47:56.339819  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.339828  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:56.339834  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:56.339889  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:56.381291  447486 cri.go:89] found id: ""
	I1030 19:47:56.381325  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.381337  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:56.381345  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:56.381401  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:56.417150  447486 cri.go:89] found id: ""
	I1030 19:47:56.417182  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.417194  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:56.417207  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:56.417227  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:56.466963  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:56.467005  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:56.481528  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:56.481557  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:56.554843  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:56.554872  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:56.554887  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:56.635798  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:56.635846  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:54.480601  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:56.481475  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:55.439618  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:57.940438  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:57.333560  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:59.337314  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:59.179829  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:59.193083  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:59.193160  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:59.231253  447486 cri.go:89] found id: ""
	I1030 19:47:59.231288  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.231302  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:59.231311  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:59.231382  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:59.265982  447486 cri.go:89] found id: ""
	I1030 19:47:59.266013  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.266022  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:59.266028  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:59.266090  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:59.303724  447486 cri.go:89] found id: ""
	I1030 19:47:59.303761  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.303773  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:59.303781  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:59.303848  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:59.342137  447486 cri.go:89] found id: ""
	I1030 19:47:59.342163  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.342172  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:59.342180  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:59.342246  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:59.382652  447486 cri.go:89] found id: ""
	I1030 19:47:59.382684  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.382693  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:59.382700  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:59.382761  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:59.422428  447486 cri.go:89] found id: ""
	I1030 19:47:59.422454  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.422463  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:59.422469  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:59.422539  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:59.464047  447486 cri.go:89] found id: ""
	I1030 19:47:59.464079  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.464089  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:59.464095  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:59.464146  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:59.500658  447486 cri.go:89] found id: ""
	I1030 19:47:59.500693  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.500705  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:59.500716  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:59.500732  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:59.554634  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:59.554679  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:59.567956  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:59.567986  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:59.646305  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:59.646332  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:59.646349  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:59.730008  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:59.730052  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:58.486516  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:00.982184  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:00.439220  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:02.439945  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:01.832883  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:03.834027  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:02.274141  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:02.287246  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:02.287320  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:02.322166  447486 cri.go:89] found id: ""
	I1030 19:48:02.322320  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.322336  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:02.322346  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:02.322421  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:02.358101  447486 cri.go:89] found id: ""
	I1030 19:48:02.358131  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.358140  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:02.358146  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:02.358209  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:02.394812  447486 cri.go:89] found id: ""
	I1030 19:48:02.394898  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.394915  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:02.394924  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:02.394990  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:02.429128  447486 cri.go:89] found id: ""
	I1030 19:48:02.429165  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.429177  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:02.429186  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:02.429358  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:02.465878  447486 cri.go:89] found id: ""
	I1030 19:48:02.465907  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.465915  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:02.465921  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:02.465973  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:02.502758  447486 cri.go:89] found id: ""
	I1030 19:48:02.502794  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.502805  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:02.502813  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:02.502879  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:02.540111  447486 cri.go:89] found id: ""
	I1030 19:48:02.540142  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.540152  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:02.540158  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:02.540222  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:02.574728  447486 cri.go:89] found id: ""
	I1030 19:48:02.574762  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.574774  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:02.574787  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:02.574804  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:02.613333  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:02.613374  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:02.664970  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:02.665013  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:02.679594  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:02.679626  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:02.744184  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:02.744208  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:02.744222  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:05.326826  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:05.340166  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:05.340232  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:05.376742  447486 cri.go:89] found id: ""
	I1030 19:48:05.376774  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.376789  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:05.376795  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:05.376865  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:05.413981  447486 cri.go:89] found id: ""
	I1030 19:48:05.414026  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.414039  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:05.414047  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:05.414121  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:05.449811  447486 cri.go:89] found id: ""
	I1030 19:48:05.449842  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.449854  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:05.449862  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:05.449925  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:05.502576  447486 cri.go:89] found id: ""
	I1030 19:48:05.502610  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.502622  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:05.502630  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:05.502721  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:05.536747  447486 cri.go:89] found id: ""
	I1030 19:48:05.536778  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.536787  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:05.536793  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:05.536857  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:05.570308  447486 cri.go:89] found id: ""
	I1030 19:48:05.570335  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.570344  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:05.570353  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:05.570420  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:05.605006  447486 cri.go:89] found id: ""
	I1030 19:48:05.605037  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.605048  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:05.605054  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:05.605109  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:05.638651  447486 cri.go:89] found id: ""
	I1030 19:48:05.638681  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.638693  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:05.638705  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:05.638720  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:05.690734  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:05.690769  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:05.704561  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:05.704588  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:05.779426  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:05.779448  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:05.779471  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:05.866320  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:05.866355  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:03.481614  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:05.482428  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:07.981875  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:04.939485  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:07.438925  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:06.334094  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:08.834525  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:08.409454  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:08.423687  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:08.423767  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:08.463554  447486 cri.go:89] found id: ""
	I1030 19:48:08.463581  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.463591  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:08.463597  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:08.463654  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:08.500159  447486 cri.go:89] found id: ""
	I1030 19:48:08.500186  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.500195  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:08.500200  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:08.500253  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:08.535670  447486 cri.go:89] found id: ""
	I1030 19:48:08.535701  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.535710  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:08.535717  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:08.535785  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:08.572921  447486 cri.go:89] found id: ""
	I1030 19:48:08.572958  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.572968  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:08.572975  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:08.573052  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:08.610873  447486 cri.go:89] found id: ""
	I1030 19:48:08.610908  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.610918  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:08.610924  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:08.610978  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:08.645430  447486 cri.go:89] found id: ""
	I1030 19:48:08.645458  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.645466  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:08.645475  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:08.645528  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:08.681212  447486 cri.go:89] found id: ""
	I1030 19:48:08.681246  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.681258  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:08.681266  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:08.681332  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:08.716619  447486 cri.go:89] found id: ""
	I1030 19:48:08.716651  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.716661  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:08.716671  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:08.716682  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:08.794090  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:08.794134  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:08.833209  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:08.833251  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:08.884781  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:08.884817  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:08.898556  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:08.898586  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:08.967713  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
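	(Annotation, not part of the captured log: the cycle above repeats for the rest of the restart. Process 447486 polls for a kube-apiserver process, lists CRI containers for each control-plane component and finds none, and its "describe nodes" log-gathering step fails because nothing is listening on localhost:8443, which is where the kubeconfig points. A minimal sketch of how the same checks could be run by hand on the node, assuming SSH access and that crictl and ss are available; the commands mirror what the log shows and are illustrative only:)

	$ sudo crictl ps -a --name kube-apiserver        # any apiserver container, running or exited
	$ sudo ss -tlnp 'sport = :8443'                  # is anything bound to the apiserver port?
	$ sudo journalctl -u kubelet -n 50 --no-pager    # kubelet usually logs why the static pod never started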
	I1030 19:48:11.468230  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:11.482593  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:11.482660  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:11.518191  447486 cri.go:89] found id: ""
	I1030 19:48:11.518225  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.518235  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:11.518242  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:11.518295  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:11.557199  447486 cri.go:89] found id: ""
	I1030 19:48:11.557229  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.557237  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:11.557252  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:11.557323  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:11.595605  447486 cri.go:89] found id: ""
	I1030 19:48:11.595638  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.595650  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:11.595664  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:11.595732  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:11.634253  447486 cri.go:89] found id: ""
	I1030 19:48:11.634281  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.634295  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:11.634301  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:11.634358  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:11.671138  447486 cri.go:89] found id: ""
	I1030 19:48:11.671167  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.671176  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:11.671183  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:11.671238  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:11.707202  447486 cri.go:89] found id: ""
	I1030 19:48:11.707228  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.707237  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:11.707243  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:11.707302  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:11.745514  447486 cri.go:89] found id: ""
	I1030 19:48:11.745549  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.745561  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:11.745570  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:11.745640  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:11.781403  447486 cri.go:89] found id: ""
	I1030 19:48:11.781438  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.781449  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:11.781458  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:11.781471  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:10.486349  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:12.980881  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:09.440261  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:11.938439  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:11.332911  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:13.334382  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:11.832934  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:11.832972  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:11.853498  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:11.853545  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:11.949365  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:11.949389  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:11.949405  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:12.033776  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:12.033823  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:14.579536  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:14.593497  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:14.593579  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:14.627853  447486 cri.go:89] found id: ""
	I1030 19:48:14.627886  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.627895  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:14.627902  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:14.627953  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:14.662356  447486 cri.go:89] found id: ""
	I1030 19:48:14.662386  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.662398  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:14.662406  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:14.662481  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:14.699334  447486 cri.go:89] found id: ""
	I1030 19:48:14.699370  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.699382  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:14.699390  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:14.699457  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:14.733884  447486 cri.go:89] found id: ""
	I1030 19:48:14.733924  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.733937  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:14.733946  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:14.734025  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:14.775208  447486 cri.go:89] found id: ""
	I1030 19:48:14.775240  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.775249  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:14.775256  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:14.775315  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:14.809663  447486 cri.go:89] found id: ""
	I1030 19:48:14.809695  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.809704  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:14.809711  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:14.809778  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:14.844963  447486 cri.go:89] found id: ""
	I1030 19:48:14.844996  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.845006  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:14.845014  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:14.845084  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:14.881236  447486 cri.go:89] found id: ""
	I1030 19:48:14.881273  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.881283  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:14.881293  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:14.881305  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:14.933792  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:14.933830  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:14.948038  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:14.948065  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:15.023497  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:15.023519  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:15.023532  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:15.105682  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:15.105741  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:14.980949  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:16.981063  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:13.940399  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:16.438545  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:15.834158  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:18.332452  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:17.646238  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:17.665366  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:17.665455  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:17.707729  447486 cri.go:89] found id: ""
	I1030 19:48:17.707783  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.707796  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:17.707805  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:17.707883  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:17.759922  447486 cri.go:89] found id: ""
	I1030 19:48:17.759959  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.759972  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:17.759980  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:17.760049  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:17.807635  447486 cri.go:89] found id: ""
	I1030 19:48:17.807671  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.807683  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:17.807695  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:17.807770  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:17.844205  447486 cri.go:89] found id: ""
	I1030 19:48:17.844236  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.844247  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:17.844255  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:17.844326  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:17.879079  447486 cri.go:89] found id: ""
	I1030 19:48:17.879113  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.879125  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:17.879134  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:17.879202  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:17.916548  447486 cri.go:89] found id: ""
	I1030 19:48:17.916584  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.916594  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:17.916601  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:17.916654  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:17.950597  447486 cri.go:89] found id: ""
	I1030 19:48:17.950626  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.950635  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:17.950640  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:17.950695  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:17.985924  447486 cri.go:89] found id: ""
	I1030 19:48:17.985957  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.985968  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:17.985980  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:17.985996  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:18.066211  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:18.066250  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:18.107228  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:18.107279  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:18.157508  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:18.157543  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:18.172208  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:18.172243  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:18.248100  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:20.748681  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:20.763369  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:20.763445  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:20.804288  447486 cri.go:89] found id: ""
	I1030 19:48:20.804323  447486 logs.go:282] 0 containers: []
	W1030 19:48:20.804336  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:20.804343  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:20.804410  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:20.838925  447486 cri.go:89] found id: ""
	I1030 19:48:20.838964  447486 logs.go:282] 0 containers: []
	W1030 19:48:20.838973  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:20.838979  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:20.839030  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:20.873560  447486 cri.go:89] found id: ""
	I1030 19:48:20.873596  447486 logs.go:282] 0 containers: []
	W1030 19:48:20.873608  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:20.873617  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:20.873681  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:20.908670  447486 cri.go:89] found id: ""
	I1030 19:48:20.908705  447486 logs.go:282] 0 containers: []
	W1030 19:48:20.908716  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:20.908723  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:20.908791  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:20.945901  447486 cri.go:89] found id: ""
	I1030 19:48:20.945929  447486 logs.go:282] 0 containers: []
	W1030 19:48:20.945937  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:20.945943  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:20.945991  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:20.980184  447486 cri.go:89] found id: ""
	I1030 19:48:20.980216  447486 logs.go:282] 0 containers: []
	W1030 19:48:20.980227  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:20.980236  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:20.980299  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:21.024243  447486 cri.go:89] found id: ""
	I1030 19:48:21.024272  447486 logs.go:282] 0 containers: []
	W1030 19:48:21.024284  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:21.024293  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:21.024366  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:21.063315  447486 cri.go:89] found id: ""
	I1030 19:48:21.063348  447486 logs.go:282] 0 containers: []
	W1030 19:48:21.063358  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:21.063370  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:21.063387  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:21.130434  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:21.130463  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:21.130480  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:21.209067  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:21.209107  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:21.251005  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:21.251035  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:21.303365  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:21.303402  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:18.981952  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:20.982372  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:18.439921  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:20.939869  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:22.940058  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:20.333700  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:22.833845  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:24.834560  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:23.817700  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:23.831060  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:23.831133  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:23.864299  447486 cri.go:89] found id: ""
	I1030 19:48:23.864334  447486 logs.go:282] 0 containers: []
	W1030 19:48:23.864346  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:23.864354  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:23.864420  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:23.900815  447486 cri.go:89] found id: ""
	I1030 19:48:23.900844  447486 logs.go:282] 0 containers: []
	W1030 19:48:23.900854  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:23.900869  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:23.900929  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:23.939888  447486 cri.go:89] found id: ""
	I1030 19:48:23.939917  447486 logs.go:282] 0 containers: []
	W1030 19:48:23.939928  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:23.939936  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:23.939999  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:23.975359  447486 cri.go:89] found id: ""
	I1030 19:48:23.975387  447486 logs.go:282] 0 containers: []
	W1030 19:48:23.975395  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:23.975401  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:23.975452  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:24.012779  447486 cri.go:89] found id: ""
	I1030 19:48:24.012819  447486 logs.go:282] 0 containers: []
	W1030 19:48:24.012832  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:24.012840  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:24.012908  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:24.048853  447486 cri.go:89] found id: ""
	I1030 19:48:24.048890  447486 logs.go:282] 0 containers: []
	W1030 19:48:24.048903  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:24.048912  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:24.048979  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:24.084744  447486 cri.go:89] found id: ""
	I1030 19:48:24.084784  447486 logs.go:282] 0 containers: []
	W1030 19:48:24.084797  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:24.084806  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:24.084860  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:24.121719  447486 cri.go:89] found id: ""
	I1030 19:48:24.121757  447486 logs.go:282] 0 containers: []
	W1030 19:48:24.121767  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:24.121777  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:24.121791  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:24.178691  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:24.178733  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:24.192885  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:24.192916  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:24.268771  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:24.268815  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:24.268832  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:24.349663  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:24.349699  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:23.481516  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:25.481700  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:27.481886  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:24.940106  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:26.940309  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:27.334165  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:29.834162  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:26.887325  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:26.900480  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:26.900558  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:26.936157  447486 cri.go:89] found id: ""
	I1030 19:48:26.936188  447486 logs.go:282] 0 containers: []
	W1030 19:48:26.936200  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:26.936207  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:26.936278  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:26.975580  447486 cri.go:89] found id: ""
	I1030 19:48:26.975615  447486 logs.go:282] 0 containers: []
	W1030 19:48:26.975626  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:26.975633  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:26.975705  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:27.010549  447486 cri.go:89] found id: ""
	I1030 19:48:27.010579  447486 logs.go:282] 0 containers: []
	W1030 19:48:27.010592  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:27.010600  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:27.010659  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:27.047505  447486 cri.go:89] found id: ""
	I1030 19:48:27.047541  447486 logs.go:282] 0 containers: []
	W1030 19:48:27.047553  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:27.047561  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:27.047628  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:27.083379  447486 cri.go:89] found id: ""
	I1030 19:48:27.083409  447486 logs.go:282] 0 containers: []
	W1030 19:48:27.083420  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:27.083429  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:27.083492  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:27.117912  447486 cri.go:89] found id: ""
	I1030 19:48:27.117954  447486 logs.go:282] 0 containers: []
	W1030 19:48:27.117967  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:27.117976  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:27.118049  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:27.151721  447486 cri.go:89] found id: ""
	I1030 19:48:27.151749  447486 logs.go:282] 0 containers: []
	W1030 19:48:27.151758  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:27.151765  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:27.151817  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:27.188940  447486 cri.go:89] found id: ""
	I1030 19:48:27.188981  447486 logs.go:282] 0 containers: []
	W1030 19:48:27.188989  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:27.188999  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:27.189011  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:27.243926  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:27.243960  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:27.258702  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:27.258731  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:27.326983  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:27.327023  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:27.327041  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:27.410761  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:27.410808  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:29.953219  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:29.967972  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:29.968078  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:30.003975  447486 cri.go:89] found id: ""
	I1030 19:48:30.004004  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.004014  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:30.004023  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:30.004097  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:30.041732  447486 cri.go:89] found id: ""
	I1030 19:48:30.041768  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.041780  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:30.041787  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:30.041863  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:30.078262  447486 cri.go:89] found id: ""
	I1030 19:48:30.078297  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.078308  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:30.078315  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:30.078379  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:30.116100  447486 cri.go:89] found id: ""
	I1030 19:48:30.116137  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.116149  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:30.116157  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:30.116229  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:30.150925  447486 cri.go:89] found id: ""
	I1030 19:48:30.150953  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.150964  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:30.150972  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:30.151041  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:30.192188  447486 cri.go:89] found id: ""
	I1030 19:48:30.192219  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.192230  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:30.192237  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:30.192314  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:30.231144  447486 cri.go:89] found id: ""
	I1030 19:48:30.231180  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.231192  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:30.231200  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:30.231277  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:30.271198  447486 cri.go:89] found id: ""
	I1030 19:48:30.271228  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.271242  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:30.271265  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:30.271277  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:30.322750  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:30.322792  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:30.337745  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:30.337774  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:30.417198  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:30.417224  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:30.417240  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:30.503327  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:30.503364  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:29.982893  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:32.482051  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:29.440509  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:31.939517  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:32.333571  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:34.833482  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:33.047719  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:33.062330  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:33.062395  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:33.101049  447486 cri.go:89] found id: ""
	I1030 19:48:33.101088  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.101101  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:33.101108  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:33.101175  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:33.135236  447486 cri.go:89] found id: ""
	I1030 19:48:33.135268  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.135279  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:33.135286  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:33.135357  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:33.169279  447486 cri.go:89] found id: ""
	I1030 19:48:33.169314  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.169325  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:33.169333  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:33.169401  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:33.203336  447486 cri.go:89] found id: ""
	I1030 19:48:33.203380  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.203392  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:33.203401  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:33.203470  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:33.238223  447486 cri.go:89] found id: ""
	I1030 19:48:33.238258  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.238270  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:33.238279  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:33.238345  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:33.272891  447486 cri.go:89] found id: ""
	I1030 19:48:33.272925  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.272937  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:33.272946  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:33.273014  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:33.312452  447486 cri.go:89] found id: ""
	I1030 19:48:33.312480  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.312489  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:33.312496  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:33.312547  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:33.349041  447486 cri.go:89] found id: ""
	I1030 19:48:33.349076  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.349091  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:33.349104  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:33.349130  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:33.430888  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:33.430940  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:33.469414  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:33.469444  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:33.518989  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:33.519022  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:33.532656  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:33.532690  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:33.605896  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:36.106207  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:36.120564  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:36.120646  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:36.156854  447486 cri.go:89] found id: ""
	I1030 19:48:36.156887  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.156900  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:36.156909  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:36.156988  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:36.195027  447486 cri.go:89] found id: ""
	I1030 19:48:36.195059  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.195072  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:36.195080  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:36.195150  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:36.235639  447486 cri.go:89] found id: ""
	I1030 19:48:36.235672  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.235683  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:36.235692  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:36.235758  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:36.281659  447486 cri.go:89] found id: ""
	I1030 19:48:36.281693  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.281702  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:36.281709  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:36.281762  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:36.315427  447486 cri.go:89] found id: ""
	I1030 19:48:36.315454  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.315463  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:36.315469  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:36.315531  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:36.353084  447486 cri.go:89] found id: ""
	I1030 19:48:36.353110  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.353120  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:36.353126  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:36.353197  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:36.388497  447486 cri.go:89] found id: ""
	I1030 19:48:36.388533  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.388545  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:36.388553  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:36.388616  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:36.423625  447486 cri.go:89] found id: ""
	I1030 19:48:36.423658  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.423667  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:36.423676  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:36.423691  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:36.476722  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:36.476757  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:36.490669  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:36.490700  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:36.558587  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:36.558621  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:36.558639  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:36.635606  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:36.635654  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:34.482414  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:36.981552  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:34.439796  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:36.938335  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:37.333231  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:39.333707  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:39.174007  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:39.187709  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:39.187786  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:39.226131  447486 cri.go:89] found id: ""
	I1030 19:48:39.226165  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.226177  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:39.226185  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:39.226265  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:39.265963  447486 cri.go:89] found id: ""
	I1030 19:48:39.266003  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.266016  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:39.266024  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:39.266092  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:39.302586  447486 cri.go:89] found id: ""
	I1030 19:48:39.302624  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.302637  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:39.302645  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:39.302710  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:39.347869  447486 cri.go:89] found id: ""
	I1030 19:48:39.347903  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.347916  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:39.347924  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:39.347994  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:39.384252  447486 cri.go:89] found id: ""
	I1030 19:48:39.384280  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.384288  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:39.384294  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:39.384347  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:39.418847  447486 cri.go:89] found id: ""
	I1030 19:48:39.418876  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.418885  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:39.418891  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:39.418950  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:39.458408  447486 cri.go:89] found id: ""
	I1030 19:48:39.458454  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.458467  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:39.458480  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:39.458567  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:39.493889  447486 cri.go:89] found id: ""
	I1030 19:48:39.493923  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.493934  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:39.493946  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:39.493959  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:39.548692  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:39.548746  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:39.562083  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:39.562110  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:39.633822  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:39.633845  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:39.633857  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:39.711765  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:39.711814  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
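	The gathering cycle above (pgrep for kube-apiserver, per-component crictl queries, then kubelet, dmesg, describe-nodes, CRI-O and container-status collection) repeats for the remainder of this retry window, since the apiserver on localhost:8443 keeps refusing connections. The equivalent diagnostics, assembled only from the commands already shown in the log, can be run by hand on the node; this is a sketch assuming SSH access and minikube's v1.20.0 kubectl under /var/lib/minikube/binaries, not additional test output:
	    # is an apiserver process running at all?
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    # any apiserver container, running or exited (repeated per component in the log)
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    # node-level logs that minikube falls back to when describe-nodes fails
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo journalctl -u crio -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a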
	I1030 19:48:39.482010  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:41.981380  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:38.939254  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:40.940318  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:41.832456  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:43.832780  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:42.254337  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:42.268137  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:42.268202  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:42.303383  447486 cri.go:89] found id: ""
	I1030 19:48:42.303418  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.303428  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:42.303434  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:42.303501  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:42.349405  447486 cri.go:89] found id: ""
	I1030 19:48:42.349437  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.349447  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:42.349453  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:42.349504  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:42.384317  447486 cri.go:89] found id: ""
	I1030 19:48:42.384353  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.384363  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:42.384369  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:42.384424  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:42.418712  447486 cri.go:89] found id: ""
	I1030 19:48:42.418759  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.418768  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:42.418775  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:42.418833  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:42.454234  447486 cri.go:89] found id: ""
	I1030 19:48:42.454270  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.454280  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:42.454288  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:42.454362  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:42.488813  447486 cri.go:89] found id: ""
	I1030 19:48:42.488845  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.488855  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:42.488863  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:42.488929  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:42.525883  447486 cri.go:89] found id: ""
	I1030 19:48:42.525917  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.525929  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:42.525938  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:42.526006  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:42.561197  447486 cri.go:89] found id: ""
	I1030 19:48:42.561233  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.561246  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:42.561259  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:42.561275  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:42.599818  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:42.599854  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:42.654341  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:42.654382  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:42.668163  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:42.668188  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:42.739630  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:42.739659  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:42.739671  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:45.316154  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:45.330372  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:45.330454  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:45.369093  447486 cri.go:89] found id: ""
	I1030 19:48:45.369125  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.369135  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:45.369141  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:45.369192  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:45.407681  447486 cri.go:89] found id: ""
	I1030 19:48:45.407715  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.407726  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:45.407732  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:45.407787  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:45.444445  447486 cri.go:89] found id: ""
	I1030 19:48:45.444474  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.444482  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:45.444488  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:45.444539  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:45.481538  447486 cri.go:89] found id: ""
	I1030 19:48:45.481570  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.481583  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:45.481591  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:45.481654  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:45.515088  447486 cri.go:89] found id: ""
	I1030 19:48:45.515123  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.515132  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:45.515139  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:45.515195  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:45.550085  447486 cri.go:89] found id: ""
	I1030 19:48:45.550133  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.550145  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:45.550152  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:45.550214  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:45.583950  447486 cri.go:89] found id: ""
	I1030 19:48:45.583985  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.583999  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:45.584008  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:45.584082  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:45.617320  447486 cri.go:89] found id: ""
	I1030 19:48:45.617349  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.617358  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:45.617369  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:45.617389  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:45.668792  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:45.668833  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:45.683144  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:45.683178  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:45.758707  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:45.758732  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:45.758744  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:45.833807  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:45.833837  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:43.982806  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:46.480452  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:43.440702  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:45.938267  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:47.938396  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:45.833319  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:48.332420  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:48.374096  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:48.387812  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:48.387903  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:48.426958  447486 cri.go:89] found id: ""
	I1030 19:48:48.426987  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.426996  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:48.427002  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:48.427051  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:48.462216  447486 cri.go:89] found id: ""
	I1030 19:48:48.462249  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.462260  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:48.462268  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:48.462336  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:48.495666  447486 cri.go:89] found id: ""
	I1030 19:48:48.495699  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.495709  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:48.495716  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:48.495798  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:48.530653  447486 cri.go:89] found id: ""
	I1030 19:48:48.530686  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.530698  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:48.530709  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:48.530777  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:48.564788  447486 cri.go:89] found id: ""
	I1030 19:48:48.564826  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.564838  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:48.564846  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:48.564921  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:48.600735  447486 cri.go:89] found id: ""
	I1030 19:48:48.600772  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.600784  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:48.600793  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:48.600863  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:48.637063  447486 cri.go:89] found id: ""
	I1030 19:48:48.637095  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.637107  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:48.637115  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:48.637182  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:48.673279  447486 cri.go:89] found id: ""
	I1030 19:48:48.673314  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.673334  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:48.673347  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:48.673362  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:48.724239  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:48.724280  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:48.738390  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:48.738425  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:48.812130  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:48.812155  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:48.812171  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:48.896253  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:48.896298  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:51.441155  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:51.454675  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:51.454751  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:51.490464  447486 cri.go:89] found id: ""
	I1030 19:48:51.490511  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.490523  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:51.490532  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:51.490600  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:51.525364  447486 cri.go:89] found id: ""
	I1030 19:48:51.525399  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.525411  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:51.525419  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:51.525485  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:51.559028  447486 cri.go:89] found id: ""
	I1030 19:48:51.559062  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.559071  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:51.559078  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:51.559139  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:51.595188  447486 cri.go:89] found id: ""
	I1030 19:48:51.595217  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.595225  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:51.595231  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:51.595300  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:51.628987  447486 cri.go:89] found id: ""
	I1030 19:48:51.629023  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.629039  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:51.629047  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:51.629119  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:51.663257  447486 cri.go:89] found id: ""
	I1030 19:48:51.663286  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.663295  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:51.663303  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:51.663368  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:51.712562  447486 cri.go:89] found id: ""
	I1030 19:48:51.712600  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.712613  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:51.712622  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:51.712684  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:51.761730  447486 cri.go:89] found id: ""
	I1030 19:48:51.761760  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.761769  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:51.761779  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:51.761794  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:51.775595  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:51.775624  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:48:48.481851  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:50.980723  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:52.982177  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:49.939273  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:51.939972  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:50.333451  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:52.333773  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:54.835087  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	W1030 19:48:51.849120  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:51.849144  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:51.849157  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:51.931364  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:51.931403  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:51.971195  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:51.971229  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:54.525136  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:54.539137  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:54.539227  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:54.574281  447486 cri.go:89] found id: ""
	I1030 19:48:54.574316  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.574339  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:54.574348  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:54.574420  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:54.611109  447486 cri.go:89] found id: ""
	I1030 19:48:54.611149  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.611161  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:54.611170  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:54.611230  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:54.648396  447486 cri.go:89] found id: ""
	I1030 19:48:54.648428  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.648439  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:54.648447  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:54.648510  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:54.683834  447486 cri.go:89] found id: ""
	I1030 19:48:54.683871  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.683884  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:54.683892  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:54.683954  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:54.717391  447486 cri.go:89] found id: ""
	I1030 19:48:54.717421  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.717430  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:54.717436  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:54.717495  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:54.753783  447486 cri.go:89] found id: ""
	I1030 19:48:54.753812  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.753821  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:54.753827  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:54.753878  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:54.788231  447486 cri.go:89] found id: ""
	I1030 19:48:54.788270  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.788282  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:54.788291  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:54.788359  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:54.823949  447486 cri.go:89] found id: ""
	I1030 19:48:54.823989  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.824001  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:54.824014  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:54.824052  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:54.838936  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:54.838967  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:54.911785  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:54.911812  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:54.911825  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:54.993268  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:54.993302  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:55.032557  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:55.032588  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:55.481330  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:57.482183  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:53.940343  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:56.439870  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:57.333262  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:59.333560  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:57.588726  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:57.603010  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:57.603085  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:57.636499  447486 cri.go:89] found id: ""
	I1030 19:48:57.636531  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.636542  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:57.636551  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:57.636624  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:57.671698  447486 cri.go:89] found id: ""
	I1030 19:48:57.671728  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.671739  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:57.671748  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:57.671815  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:57.707387  447486 cri.go:89] found id: ""
	I1030 19:48:57.707414  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.707422  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:57.707431  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:57.707482  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:57.745404  447486 cri.go:89] found id: ""
	I1030 19:48:57.745432  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.745440  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:57.745447  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:57.745507  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:57.784874  447486 cri.go:89] found id: ""
	I1030 19:48:57.784903  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.784912  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:57.784919  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:57.784984  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:57.824663  447486 cri.go:89] found id: ""
	I1030 19:48:57.824697  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.824707  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:57.824713  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:57.824773  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:57.862542  447486 cri.go:89] found id: ""
	I1030 19:48:57.862581  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.862593  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:57.862601  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:57.862669  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:57.897901  447486 cri.go:89] found id: ""
	I1030 19:48:57.897935  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.897947  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:57.897959  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:57.897974  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:57.951898  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:57.951936  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:57.966282  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:57.966327  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:58.035515  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:58.035546  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:58.035562  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:58.114825  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:58.114876  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:00.705537  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:00.719589  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:00.719672  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:00.762299  447486 cri.go:89] found id: ""
	I1030 19:49:00.762330  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.762338  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:00.762356  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:00.762438  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:00.802228  447486 cri.go:89] found id: ""
	I1030 19:49:00.802259  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.802268  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:00.802275  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:00.802345  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:00.836531  447486 cri.go:89] found id: ""
	I1030 19:49:00.836557  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.836565  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:00.836572  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:00.836630  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:00.869332  447486 cri.go:89] found id: ""
	I1030 19:49:00.869360  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.869369  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:00.869375  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:00.869437  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:00.904643  447486 cri.go:89] found id: ""
	I1030 19:49:00.904675  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.904684  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:00.904691  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:00.904768  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:00.939020  447486 cri.go:89] found id: ""
	I1030 19:49:00.939050  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.939061  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:00.939068  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:00.939142  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:00.974586  447486 cri.go:89] found id: ""
	I1030 19:49:00.974625  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.974638  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:00.974646  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:00.974707  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:01.009337  447486 cri.go:89] found id: ""
	I1030 19:49:01.009375  447486 logs.go:282] 0 containers: []
	W1030 19:49:01.009386  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:01.009399  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:01.009416  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:01.067087  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:01.067125  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:01.081681  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:01.081713  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:01.153057  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:01.153082  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:01.153096  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:01.236113  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:01.236153  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:59.981252  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:01.981799  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:58.938430  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:00.940905  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:01.333854  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:03.334325  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:03.774056  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:03.788395  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:03.788482  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:03.823847  447486 cri.go:89] found id: ""
	I1030 19:49:03.823880  447486 logs.go:282] 0 containers: []
	W1030 19:49:03.823892  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:03.823900  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:03.823973  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:03.864776  447486 cri.go:89] found id: ""
	I1030 19:49:03.864807  447486 logs.go:282] 0 containers: []
	W1030 19:49:03.864819  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:03.864827  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:03.864890  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:03.912516  447486 cri.go:89] found id: ""
	I1030 19:49:03.912572  447486 logs.go:282] 0 containers: []
	W1030 19:49:03.912585  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:03.912593  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:03.912660  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:03.962459  447486 cri.go:89] found id: ""
	I1030 19:49:03.962509  447486 logs.go:282] 0 containers: []
	W1030 19:49:03.962521  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:03.962530  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:03.962602  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:04.019107  447486 cri.go:89] found id: ""
	I1030 19:49:04.019143  447486 logs.go:282] 0 containers: []
	W1030 19:49:04.019152  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:04.019159  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:04.019217  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:04.054016  447486 cri.go:89] found id: ""
	I1030 19:49:04.054047  447486 logs.go:282] 0 containers: []
	W1030 19:49:04.054056  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:04.054063  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:04.054140  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:04.089907  447486 cri.go:89] found id: ""
	I1030 19:49:04.089938  447486 logs.go:282] 0 containers: []
	W1030 19:49:04.089948  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:04.089955  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:04.090007  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:04.128081  447486 cri.go:89] found id: ""
	I1030 19:49:04.128110  447486 logs.go:282] 0 containers: []
	W1030 19:49:04.128118  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:04.128128  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:04.128142  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:04.182419  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:04.182462  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:04.196909  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:04.196941  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:04.267267  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:04.267298  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:04.267317  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:04.346826  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:04.346876  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:03.984259  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:06.481362  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:03.438786  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:05.938707  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:07.939642  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:05.334541  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:07.834233  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:06.887266  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:06.902462  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:06.902554  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:06.938850  447486 cri.go:89] found id: ""
	I1030 19:49:06.938880  447486 logs.go:282] 0 containers: []
	W1030 19:49:06.938891  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:06.938899  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:06.938961  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:06.983284  447486 cri.go:89] found id: ""
	I1030 19:49:06.983315  447486 logs.go:282] 0 containers: []
	W1030 19:49:06.983330  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:06.983339  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:06.983406  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:07.016332  447486 cri.go:89] found id: ""
	I1030 19:49:07.016359  447486 logs.go:282] 0 containers: []
	W1030 19:49:07.016369  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:07.016375  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:07.016428  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:07.051425  447486 cri.go:89] found id: ""
	I1030 19:49:07.051459  447486 logs.go:282] 0 containers: []
	W1030 19:49:07.051471  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:07.051480  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:07.051550  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:07.083396  447486 cri.go:89] found id: ""
	I1030 19:49:07.083429  447486 logs.go:282] 0 containers: []
	W1030 19:49:07.083437  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:07.083444  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:07.083507  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:07.116616  447486 cri.go:89] found id: ""
	I1030 19:49:07.116646  447486 logs.go:282] 0 containers: []
	W1030 19:49:07.116654  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:07.116661  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:07.116728  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:07.149219  447486 cri.go:89] found id: ""
	I1030 19:49:07.149251  447486 logs.go:282] 0 containers: []
	W1030 19:49:07.149259  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:07.149265  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:07.149318  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:07.188404  447486 cri.go:89] found id: ""
	I1030 19:49:07.188435  447486 logs.go:282] 0 containers: []
	W1030 19:49:07.188444  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:07.188454  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:07.188468  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:07.247600  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:07.247640  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:07.262196  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:07.262231  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:07.332998  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:07.333031  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:07.333048  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:07.415322  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:07.415367  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:09.958278  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:09.972983  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:09.973068  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:10.016768  447486 cri.go:89] found id: ""
	I1030 19:49:10.016801  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.016810  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:10.016818  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:10.016885  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:10.052958  447486 cri.go:89] found id: ""
	I1030 19:49:10.052992  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.053002  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:10.053009  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:10.053063  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:10.089062  447486 cri.go:89] found id: ""
	I1030 19:49:10.089094  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.089105  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:10.089120  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:10.089196  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:10.126084  447486 cri.go:89] found id: ""
	I1030 19:49:10.126114  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.126123  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:10.126130  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:10.126182  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:10.171670  447486 cri.go:89] found id: ""
	I1030 19:49:10.171702  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.171712  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:10.171720  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:10.171785  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:10.210243  447486 cri.go:89] found id: ""
	I1030 19:49:10.210285  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.210293  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:10.210300  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:10.210366  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:10.253012  447486 cri.go:89] found id: ""
	I1030 19:49:10.253056  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.253069  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:10.253078  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:10.253155  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:10.287948  447486 cri.go:89] found id: ""
	I1030 19:49:10.287999  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.288009  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:10.288021  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:10.288036  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:10.341362  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:10.341405  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:10.355769  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:10.355798  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:10.429469  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:10.429500  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:10.429518  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:10.509812  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:10.509851  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:08.488059  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:10.981606  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:12.982128  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:10.438903  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:12.939592  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:10.334087  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:12.336238  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:14.833365  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:13.053064  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:13.069063  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:13.069136  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:13.108457  447486 cri.go:89] found id: ""
	I1030 19:49:13.108492  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.108505  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:13.108513  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:13.108582  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:13.146481  447486 cri.go:89] found id: ""
	I1030 19:49:13.146523  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.146534  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:13.146542  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:13.146595  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:13.187088  447486 cri.go:89] found id: ""
	I1030 19:49:13.187118  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.187129  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:13.187137  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:13.187200  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:13.226913  447486 cri.go:89] found id: ""
	I1030 19:49:13.226948  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.226960  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:13.226968  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:13.227038  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:13.262632  447486 cri.go:89] found id: ""
	I1030 19:49:13.262661  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.262669  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:13.262676  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:13.262726  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:13.296877  447486 cri.go:89] found id: ""
	I1030 19:49:13.296906  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.296915  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:13.296922  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:13.296983  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:13.334907  447486 cri.go:89] found id: ""
	I1030 19:49:13.334939  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.334949  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:13.334956  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:13.335021  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:13.369386  447486 cri.go:89] found id: ""
	I1030 19:49:13.369430  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.369443  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:13.369456  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:13.369472  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:13.423095  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:13.423130  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:13.437039  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:13.437067  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:13.512619  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:13.512648  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:13.512663  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:13.596982  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:13.597023  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:16.135623  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:16.150407  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:16.150502  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:16.188771  447486 cri.go:89] found id: ""
	I1030 19:49:16.188811  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.188823  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:16.188832  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:16.188907  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:16.221554  447486 cri.go:89] found id: ""
	I1030 19:49:16.221589  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.221598  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:16.221604  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:16.221655  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:16.255567  447486 cri.go:89] found id: ""
	I1030 19:49:16.255595  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.255609  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:16.255616  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:16.255667  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:16.289820  447486 cri.go:89] found id: ""
	I1030 19:49:16.289855  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.289866  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:16.289874  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:16.289935  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:16.324415  447486 cri.go:89] found id: ""
	I1030 19:49:16.324449  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.324464  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:16.324471  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:16.324533  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:16.360789  447486 cri.go:89] found id: ""
	I1030 19:49:16.360825  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.360848  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:16.360856  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:16.360922  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:16.395066  447486 cri.go:89] found id: ""
	I1030 19:49:16.395093  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.395101  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:16.395107  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:16.395158  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:16.429220  447486 cri.go:89] found id: ""
	I1030 19:49:16.429261  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.429273  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:16.429286  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:16.429305  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:16.481209  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:16.481250  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:16.495353  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:16.495383  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:16.563979  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:16.564006  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:16.564022  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:16.645166  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:16.645205  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:15.481438  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:17.482846  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:15.440389  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:17.938724  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:16.833433  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:19.335773  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:19.185478  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:19.199270  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:19.199337  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:19.242426  447486 cri.go:89] found id: ""
	I1030 19:49:19.242455  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.242464  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:19.242474  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:19.242556  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:19.284061  447486 cri.go:89] found id: ""
	I1030 19:49:19.284092  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.284102  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:19.284108  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:19.284178  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:19.317373  447486 cri.go:89] found id: ""
	I1030 19:49:19.317407  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.317420  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:19.317428  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:19.317491  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:19.354222  447486 cri.go:89] found id: ""
	I1030 19:49:19.354250  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.354259  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:19.354267  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:19.354329  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:19.392948  447486 cri.go:89] found id: ""
	I1030 19:49:19.392980  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.392989  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:19.392996  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:19.393053  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:19.438023  447486 cri.go:89] found id: ""
	I1030 19:49:19.438055  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.438066  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:19.438074  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:19.438144  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:19.472179  447486 cri.go:89] found id: ""
	I1030 19:49:19.472208  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.472218  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:19.472226  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:19.472283  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:19.507164  447486 cri.go:89] found id: ""
	I1030 19:49:19.507195  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.507203  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:19.507213  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:19.507226  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:19.520898  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:19.520935  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:19.592204  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:19.592234  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:19.592263  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:19.668994  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:19.669045  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:19.707208  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:19.707240  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:19.981085  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:21.981344  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:19.939994  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:22.439696  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:21.833592  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:24.333379  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:22.263035  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:22.276999  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:22.277089  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:22.310969  447486 cri.go:89] found id: ""
	I1030 19:49:22.311006  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.311017  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:22.311026  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:22.311097  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:22.346282  447486 cri.go:89] found id: ""
	I1030 19:49:22.346311  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.346324  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:22.346332  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:22.346401  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:22.384324  447486 cri.go:89] found id: ""
	I1030 19:49:22.384354  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.384372  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:22.384381  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:22.384441  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:22.419465  447486 cri.go:89] found id: ""
	I1030 19:49:22.419498  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.419509  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:22.419518  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:22.419582  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:22.456161  447486 cri.go:89] found id: ""
	I1030 19:49:22.456196  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.456204  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:22.456211  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:22.456280  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:22.489075  447486 cri.go:89] found id: ""
	I1030 19:49:22.489102  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.489110  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:22.489119  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:22.489181  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:22.521752  447486 cri.go:89] found id: ""
	I1030 19:49:22.521780  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.521789  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:22.521796  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:22.521847  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:22.554946  447486 cri.go:89] found id: ""
	I1030 19:49:22.554985  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.554997  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:22.555010  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:22.555025  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:22.567877  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:22.567909  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:22.640062  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:22.640094  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:22.640110  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:22.714946  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:22.714985  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:22.755560  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:22.755595  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:25.306379  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:25.320883  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:25.320963  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:25.356737  447486 cri.go:89] found id: ""
	I1030 19:49:25.356771  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.356782  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:25.356791  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:25.356856  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:25.393371  447486 cri.go:89] found id: ""
	I1030 19:49:25.393409  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.393420  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:25.393429  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:25.393500  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:25.428379  447486 cri.go:89] found id: ""
	I1030 19:49:25.428411  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.428425  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:25.428433  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:25.428505  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:25.473516  447486 cri.go:89] found id: ""
	I1030 19:49:25.473551  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.473562  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:25.473572  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:25.473649  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:25.512508  447486 cri.go:89] found id: ""
	I1030 19:49:25.512535  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.512544  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:25.512550  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:25.512611  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:25.547646  447486 cri.go:89] found id: ""
	I1030 19:49:25.547691  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.547705  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:25.547713  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:25.547782  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:25.582314  447486 cri.go:89] found id: ""
	I1030 19:49:25.582347  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.582356  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:25.582364  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:25.582415  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:25.617305  447486 cri.go:89] found id: ""
	I1030 19:49:25.617343  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.617354  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:25.617367  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:25.617383  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:25.658245  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:25.658283  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:25.710559  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:25.710598  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:25.724961  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:25.724995  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:25.796252  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:25.796283  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:25.796300  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:23.984899  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:25.985999  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:24.939599  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:27.440032  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:26.334407  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:28.334588  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:28.374633  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:28.389468  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:28.389549  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:28.425747  447486 cri.go:89] found id: ""
	I1030 19:49:28.425780  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.425792  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:28.425800  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:28.425956  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:28.465221  447486 cri.go:89] found id: ""
	I1030 19:49:28.465258  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.465291  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:28.465303  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:28.465371  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:28.504184  447486 cri.go:89] found id: ""
	I1030 19:49:28.504217  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.504230  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:28.504240  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:28.504295  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:28.536198  447486 cri.go:89] found id: ""
	I1030 19:49:28.536234  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.536247  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:28.536255  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:28.536340  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:28.572194  447486 cri.go:89] found id: ""
	I1030 19:49:28.572228  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.572240  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:28.572248  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:28.572312  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:28.608794  447486 cri.go:89] found id: ""
	I1030 19:49:28.608826  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.608838  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:28.608846  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:28.608914  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:28.641664  447486 cri.go:89] found id: ""
	I1030 19:49:28.641698  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.641706  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:28.641714  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:28.641768  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:28.675756  447486 cri.go:89] found id: ""
	I1030 19:49:28.675790  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.675800  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:28.675812  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:28.675829  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:28.690203  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:28.690237  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:28.755647  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:28.755674  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:28.755690  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:28.837116  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:28.837149  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:28.877195  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:28.877232  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:31.428091  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:31.442537  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:31.442619  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:31.479911  447486 cri.go:89] found id: ""
	I1030 19:49:31.479942  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.479953  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:31.479961  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:31.480029  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:31.517015  447486 cri.go:89] found id: ""
	I1030 19:49:31.517042  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.517050  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:31.517056  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:31.517107  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:31.549858  447486 cri.go:89] found id: ""
	I1030 19:49:31.549891  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.549900  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:31.549907  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:31.549971  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:31.583490  447486 cri.go:89] found id: ""
	I1030 19:49:31.583524  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.583536  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:31.583551  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:31.583618  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:31.618270  447486 cri.go:89] found id: ""
	I1030 19:49:31.618308  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.618320  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:31.618328  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:31.618397  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:31.655416  447486 cri.go:89] found id: ""
	I1030 19:49:31.655448  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.655460  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:31.655468  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:31.655530  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:31.689708  447486 cri.go:89] found id: ""
	I1030 19:49:31.689740  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.689751  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:31.689759  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:31.689823  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:31.724179  447486 cri.go:89] found id: ""
	I1030 19:49:31.724208  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.724219  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:31.724233  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:31.724249  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:31.774900  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:31.774939  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:31.788606  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:31.788635  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:49:28.481673  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:30.980999  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:32.982429  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:29.938506  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:31.940276  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:30.834322  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:33.333091  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	W1030 19:49:31.861360  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:31.861385  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:31.861398  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:31.935856  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:31.935896  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:34.477313  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:34.491530  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:34.491597  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:34.525105  447486 cri.go:89] found id: ""
	I1030 19:49:34.525136  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.525145  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:34.525153  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:34.525215  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:34.560449  447486 cri.go:89] found id: ""
	I1030 19:49:34.560483  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.560495  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:34.560503  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:34.560558  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:34.595278  447486 cri.go:89] found id: ""
	I1030 19:49:34.595325  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.595335  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:34.595342  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:34.595395  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:34.628486  447486 cri.go:89] found id: ""
	I1030 19:49:34.628521  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.628533  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:34.628542  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:34.628614  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:34.663410  447486 cri.go:89] found id: ""
	I1030 19:49:34.663438  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.663448  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:34.663456  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:34.663520  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:34.697053  447486 cri.go:89] found id: ""
	I1030 19:49:34.697086  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.697099  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:34.697107  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:34.697178  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:34.730910  447486 cri.go:89] found id: ""
	I1030 19:49:34.730943  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.730955  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:34.730963  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:34.731034  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:34.765725  447486 cri.go:89] found id: ""
	I1030 19:49:34.765762  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.765774  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:34.765786  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:34.765807  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:34.802750  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:34.802786  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:34.853576  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:34.853614  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:34.868102  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:34.868139  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:34.939985  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:34.940015  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:34.940027  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:35.480658  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:37.481068  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:34.442576  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:36.940088  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:35.333400  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:37.334425  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:39.833330  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:37.516479  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:37.529386  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:37.529453  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:37.565889  447486 cri.go:89] found id: ""
	I1030 19:49:37.565923  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.565936  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:37.565945  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:37.566007  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:37.598771  447486 cri.go:89] found id: ""
	I1030 19:49:37.598801  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.598811  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:37.598817  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:37.598869  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:37.632678  447486 cri.go:89] found id: ""
	I1030 19:49:37.632705  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.632714  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:37.632735  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:37.632795  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:37.666642  447486 cri.go:89] found id: ""
	I1030 19:49:37.666673  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.666682  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:37.666688  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:37.666748  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:37.701203  447486 cri.go:89] found id: ""
	I1030 19:49:37.701233  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.701242  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:37.701249  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:37.701324  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:37.735614  447486 cri.go:89] found id: ""
	I1030 19:49:37.735649  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.735661  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:37.735669  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:37.735738  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:37.771381  447486 cri.go:89] found id: ""
	I1030 19:49:37.771418  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.771430  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:37.771439  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:37.771501  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:37.807870  447486 cri.go:89] found id: ""
	I1030 19:49:37.807908  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.807922  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:37.807935  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:37.807952  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:37.860334  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:37.860367  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:37.874340  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:37.874371  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:37.952874  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:37.952903  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:37.952916  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:38.045318  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:38.045356  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:40.591278  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:40.604970  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:40.605050  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:40.639839  447486 cri.go:89] found id: ""
	I1030 19:49:40.639869  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.639880  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:40.639889  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:40.639952  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:40.674046  447486 cri.go:89] found id: ""
	I1030 19:49:40.674077  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.674087  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:40.674093  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:40.674164  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:40.710759  447486 cri.go:89] found id: ""
	I1030 19:49:40.710794  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.710806  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:40.710815  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:40.710880  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:40.752439  447486 cri.go:89] found id: ""
	I1030 19:49:40.752471  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.752484  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:40.752493  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:40.752548  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:40.787985  447486 cri.go:89] found id: ""
	I1030 19:49:40.788021  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.788034  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:40.788042  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:40.788102  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:40.829282  447486 cri.go:89] found id: ""
	I1030 19:49:40.829320  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.829333  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:40.829341  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:40.829409  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:40.863911  447486 cri.go:89] found id: ""
	I1030 19:49:40.863944  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.863953  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:40.863959  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:40.864026  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:40.901239  447486 cri.go:89] found id: ""
	I1030 19:49:40.901275  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.901287  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:40.901300  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:40.901321  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:40.955283  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:40.955323  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:40.968733  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:40.968766  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:41.040213  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:41.040242  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:41.040256  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:41.125992  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:41.126035  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:39.481593  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:41.483403  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:39.441009  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:41.939182  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:41.834082  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:44.332428  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
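The interleaved pod_ready.go:103 lines belong to other profiles in the same run polling the Ready condition of their metrics-server pods. A rough client-go sketch of such a poll, assuming a reachable kubeconfig at the path the logs mention; the pod name is taken from the line above, and the 2-second interval and 4-minute deadline are illustrative:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True, which is the
// check behind the `has status "Ready":"False"` lines above.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path from the logs
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(4 * time.Minute) // the wait in this test gives up after 4m0s
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-6867b74b74-72bb5", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready") // corresponds to the later "context deadline exceeded"
}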
	I1030 19:49:43.667949  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:43.681633  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:43.681705  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:43.725038  447486 cri.go:89] found id: ""
	I1030 19:49:43.725076  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.725085  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:43.725091  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:43.725149  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:43.761438  447486 cri.go:89] found id: ""
	I1030 19:49:43.761473  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.761486  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:43.761494  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:43.761566  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:43.795299  447486 cri.go:89] found id: ""
	I1030 19:49:43.795335  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.795347  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:43.795355  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:43.795431  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:43.830545  447486 cri.go:89] found id: ""
	I1030 19:49:43.830582  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.830594  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:43.830601  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:43.830670  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:43.867632  447486 cri.go:89] found id: ""
	I1030 19:49:43.867664  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.867676  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:43.867684  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:43.867753  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:43.901315  447486 cri.go:89] found id: ""
	I1030 19:49:43.901346  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.901355  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:43.901361  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:43.901412  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:43.934928  447486 cri.go:89] found id: ""
	I1030 19:49:43.934963  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.934975  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:43.934983  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:43.935048  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:43.975407  447486 cri.go:89] found id: ""
	I1030 19:49:43.975441  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.975451  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:43.975472  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:43.975497  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:44.019281  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:44.019310  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:44.072363  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:44.072402  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:44.085508  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:44.085538  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:44.159634  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:44.159666  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:44.159682  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:46.739662  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:46.753190  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:46.753252  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:46.790167  447486 cri.go:89] found id: ""
	I1030 19:49:46.790202  447486 logs.go:282] 0 containers: []
	W1030 19:49:46.790211  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:46.790217  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:46.790272  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:43.988689  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:46.481139  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:43.939246  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:46.438847  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:46.333066  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:48.335463  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:46.828187  447486 cri.go:89] found id: ""
	I1030 19:49:46.828221  447486 logs.go:282] 0 containers: []
	W1030 19:49:46.828230  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:46.828237  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:46.828305  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:46.865499  447486 cri.go:89] found id: ""
	I1030 19:49:46.865539  447486 logs.go:282] 0 containers: []
	W1030 19:49:46.865551  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:46.865559  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:46.865612  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:46.899591  447486 cri.go:89] found id: ""
	I1030 19:49:46.899616  447486 logs.go:282] 0 containers: []
	W1030 19:49:46.899625  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:46.899632  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:46.899681  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:46.934818  447486 cri.go:89] found id: ""
	I1030 19:49:46.934850  447486 logs.go:282] 0 containers: []
	W1030 19:49:46.934860  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:46.934868  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:46.934933  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:46.971298  447486 cri.go:89] found id: ""
	I1030 19:49:46.971328  447486 logs.go:282] 0 containers: []
	W1030 19:49:46.971340  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:46.971349  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:46.971418  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:47.010783  447486 cri.go:89] found id: ""
	I1030 19:49:47.010814  447486 logs.go:282] 0 containers: []
	W1030 19:49:47.010825  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:47.010832  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:47.010896  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:47.044343  447486 cri.go:89] found id: ""
	I1030 19:49:47.044380  447486 logs.go:282] 0 containers: []
	W1030 19:49:47.044392  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:47.044405  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:47.044421  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:47.094425  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:47.094459  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:47.110339  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:47.110368  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:47.183262  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:47.183290  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:47.183305  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:47.262611  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:47.262651  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
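Each retry above walks the same set of gathering steps (kubelet journal, dmesg, kubectl describe nodes, the CRI-O journal, container status), though not always in the same order. A compressed sketch of one pass, with the commands copied from the Run: lines; they execute locally here purely for illustration, and failures are demoted to warnings in the spirit of the logs.go:130 behavior seen when describe nodes cannot reach localhost:8443:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Commands copied from the Run: lines above; minikube executes them over SSH
	// inside the guest, here they simply run locally for illustration.
	steps := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"describe nodes", "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, s := range steps {
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		if err != nil {
			// A failing step (e.g. describe nodes while the apiserver is down)
			// is reported as a warning and the pass continues.
			fmt.Printf("W failed %s: %v\n", s.name, err)
			continue
		}
		fmt.Printf("I gathered %s (%d bytes)\n", s.name, len(out))
	}
}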
	I1030 19:49:49.808195  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:49.821889  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:49.821963  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:49.857296  447486 cri.go:89] found id: ""
	I1030 19:49:49.857339  447486 logs.go:282] 0 containers: []
	W1030 19:49:49.857351  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:49.857359  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:49.857413  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:49.892614  447486 cri.go:89] found id: ""
	I1030 19:49:49.892648  447486 logs.go:282] 0 containers: []
	W1030 19:49:49.892660  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:49.892668  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:49.892732  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:49.929835  447486 cri.go:89] found id: ""
	I1030 19:49:49.929862  447486 logs.go:282] 0 containers: []
	W1030 19:49:49.929871  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:49.929878  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:49.929940  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:49.965341  447486 cri.go:89] found id: ""
	I1030 19:49:49.965371  447486 logs.go:282] 0 containers: []
	W1030 19:49:49.965379  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:49.965392  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:49.965449  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:50.000134  447486 cri.go:89] found id: ""
	I1030 19:49:50.000165  447486 logs.go:282] 0 containers: []
	W1030 19:49:50.000177  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:50.000188  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:50.000259  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:50.033848  447486 cri.go:89] found id: ""
	I1030 19:49:50.033876  447486 logs.go:282] 0 containers: []
	W1030 19:49:50.033885  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:50.033891  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:50.033943  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:50.073315  447486 cri.go:89] found id: ""
	I1030 19:49:50.073344  447486 logs.go:282] 0 containers: []
	W1030 19:49:50.073354  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:50.073360  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:50.073421  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:50.114232  447486 cri.go:89] found id: ""
	I1030 19:49:50.114266  447486 logs.go:282] 0 containers: []
	W1030 19:49:50.114277  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:50.114290  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:50.114311  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:50.185407  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:50.185434  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:50.185448  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:50.270447  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:50.270494  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:50.308825  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:50.308855  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:50.363376  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:50.363417  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:48.982027  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:51.482972  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:48.439801  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:50.939120  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:50.833062  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:52.833132  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:54.834352  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
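Between gathering passes, the 447486 run keeps probing for a kube-apiserver process with `sudo pgrep -xnf kube-apiserver.*minikube.*`; only once that probe succeeds does the flow move on to the healthz check seen later in the log. A small sketch of that wait loop, assuming it runs on the node itself; the timeout and sleep values are illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// apiserverRunning mirrors the `pgrep -xnf kube-apiserver.*minikube.*` probe in
// the logs above: pgrep exits non-zero when no matching process exists.
func apiserverRunning() bool {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	return err == nil && strings.TrimSpace(string(out)) != ""
}

func main() {
	deadline := time.Now().Add(5 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second) // the log shows roughly 3s between probes
	}
	fmt.Println("gave up waiting for kube-apiserver process")
}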
	I1030 19:49:52.878475  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:52.892013  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:52.892088  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:52.928085  447486 cri.go:89] found id: ""
	I1030 19:49:52.928117  447486 logs.go:282] 0 containers: []
	W1030 19:49:52.928126  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:52.928132  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:52.928185  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:52.963377  447486 cri.go:89] found id: ""
	I1030 19:49:52.963413  447486 logs.go:282] 0 containers: []
	W1030 19:49:52.963426  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:52.963434  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:52.963493  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:53.000799  447486 cri.go:89] found id: ""
	I1030 19:49:53.000825  447486 logs.go:282] 0 containers: []
	W1030 19:49:53.000834  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:53.000840  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:53.000912  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:53.037429  447486 cri.go:89] found id: ""
	I1030 19:49:53.037463  447486 logs.go:282] 0 containers: []
	W1030 19:49:53.037472  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:53.037478  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:53.037534  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:53.072392  447486 cri.go:89] found id: ""
	I1030 19:49:53.072425  447486 logs.go:282] 0 containers: []
	W1030 19:49:53.072433  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:53.072446  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:53.072520  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:53.108925  447486 cri.go:89] found id: ""
	I1030 19:49:53.108957  447486 logs.go:282] 0 containers: []
	W1030 19:49:53.108970  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:53.108978  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:53.109050  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:53.145409  447486 cri.go:89] found id: ""
	I1030 19:49:53.145445  447486 logs.go:282] 0 containers: []
	W1030 19:49:53.145457  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:53.145466  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:53.145536  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:53.180756  447486 cri.go:89] found id: ""
	I1030 19:49:53.180784  447486 logs.go:282] 0 containers: []
	W1030 19:49:53.180793  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:53.180803  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:53.180817  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:53.234960  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:53.235010  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:53.249224  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:53.249255  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:53.313223  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:53.313245  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:53.313264  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:53.399715  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:53.399758  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:55.944332  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:55.961546  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:55.961616  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:56.020603  447486 cri.go:89] found id: ""
	I1030 19:49:56.020634  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.020647  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:56.020654  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:56.020725  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:56.065134  447486 cri.go:89] found id: ""
	I1030 19:49:56.065162  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.065170  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:56.065176  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:56.065239  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:56.101358  447486 cri.go:89] found id: ""
	I1030 19:49:56.101386  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.101396  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:56.101405  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:56.101473  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:56.135762  447486 cri.go:89] found id: ""
	I1030 19:49:56.135795  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.135805  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:56.135811  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:56.135863  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:56.171336  447486 cri.go:89] found id: ""
	I1030 19:49:56.171371  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.171383  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:56.171391  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:56.171461  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:56.205643  447486 cri.go:89] found id: ""
	I1030 19:49:56.205674  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.205685  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:56.205693  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:56.205759  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:56.240853  447486 cri.go:89] found id: ""
	I1030 19:49:56.240885  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.240894  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:56.240901  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:56.240973  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:56.276577  447486 cri.go:89] found id: ""
	I1030 19:49:56.276612  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.276623  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:56.276636  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:56.276651  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:56.328180  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:56.328220  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:56.341895  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:56.341923  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:56.414492  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:56.414523  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:56.414540  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:56.498439  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:56.498498  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:53.980916  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:55.983077  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:53.439070  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:54.940107  446887 pod_ready.go:82] duration metric: took 4m0.007533629s for pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace to be "Ready" ...
	E1030 19:49:54.940137  446887 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1030 19:49:54.940149  446887 pod_ready.go:39] duration metric: took 4m6.552777198s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:49:54.940170  446887 api_server.go:52] waiting for apiserver process to appear ...
	I1030 19:49:54.940206  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:54.940264  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:54.992682  446887 cri.go:89] found id: "549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f"
	I1030 19:49:54.992715  446887 cri.go:89] found id: ""
	I1030 19:49:54.992727  446887 logs.go:282] 1 containers: [549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f]
	I1030 19:49:54.992790  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:54.997251  446887 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:54.997313  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:55.034504  446887 cri.go:89] found id: "a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5"
	I1030 19:49:55.034542  446887 cri.go:89] found id: ""
	I1030 19:49:55.034552  446887 logs.go:282] 1 containers: [a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5]
	I1030 19:49:55.034616  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:55.039551  446887 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:55.039624  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:55.083294  446887 cri.go:89] found id: "87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2"
	I1030 19:49:55.083326  446887 cri.go:89] found id: ""
	I1030 19:49:55.083336  446887 logs.go:282] 1 containers: [87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2]
	I1030 19:49:55.083407  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:55.087866  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:55.087932  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:55.125250  446887 cri.go:89] found id: "0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf"
	I1030 19:49:55.125353  446887 cri.go:89] found id: ""
	I1030 19:49:55.125372  446887 logs.go:282] 1 containers: [0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf]
	I1030 19:49:55.125446  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:55.130688  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:55.130747  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:55.168792  446887 cri.go:89] found id: "2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6"
	I1030 19:49:55.168814  446887 cri.go:89] found id: ""
	I1030 19:49:55.168822  446887 logs.go:282] 1 containers: [2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6]
	I1030 19:49:55.168877  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:55.173360  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:55.173424  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:55.209566  446887 cri.go:89] found id: "ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34"
	I1030 19:49:55.209590  446887 cri.go:89] found id: ""
	I1030 19:49:55.209599  446887 logs.go:282] 1 containers: [ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34]
	I1030 19:49:55.209659  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:55.214190  446887 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:55.214263  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:55.257056  446887 cri.go:89] found id: ""
	I1030 19:49:55.257091  446887 logs.go:282] 0 containers: []
	W1030 19:49:55.257103  446887 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:55.257111  446887 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1030 19:49:55.257165  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1030 19:49:55.300194  446887 cri.go:89] found id: "60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034"
	I1030 19:49:55.300224  446887 cri.go:89] found id: "8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6"
	I1030 19:49:55.300229  446887 cri.go:89] found id: ""
	I1030 19:49:55.300238  446887 logs.go:282] 2 containers: [60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034 8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6]
	I1030 19:49:55.300290  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:55.304750  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:55.309249  446887 logs.go:123] Gathering logs for etcd [a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5] ...
	I1030 19:49:55.309276  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5"
	I1030 19:49:55.363959  446887 logs.go:123] Gathering logs for coredns [87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2] ...
	I1030 19:49:55.363994  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2"
	I1030 19:49:55.412667  446887 logs.go:123] Gathering logs for kube-scheduler [0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf] ...
	I1030 19:49:55.412703  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf"
	I1030 19:49:55.455381  446887 logs.go:123] Gathering logs for kube-proxy [2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6] ...
	I1030 19:49:55.455420  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6"
	I1030 19:49:55.494657  446887 logs.go:123] Gathering logs for kube-controller-manager [ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34] ...
	I1030 19:49:55.494689  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34"
	I1030 19:49:55.552740  446887 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:55.552773  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:55.627724  446887 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:55.627765  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:55.642263  446887 logs.go:123] Gathering logs for kube-apiserver [549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f] ...
	I1030 19:49:55.642300  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f"
	I1030 19:49:55.691079  446887 logs.go:123] Gathering logs for storage-provisioner [60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034] ...
	I1030 19:49:55.691111  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034"
	I1030 19:49:55.730111  446887 logs.go:123] Gathering logs for container status ...
	I1030 19:49:55.730151  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:55.785155  446887 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:55.785189  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:49:55.924592  446887 logs.go:123] Gathering logs for storage-provisioner [8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6] ...
	I1030 19:49:55.924633  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6"
	I1030 19:49:55.970229  446887 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:55.970267  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
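Unlike the 447486 run, the 446887 run above does find control-plane containers, so each discovered ID is followed up with `crictl logs --tail 400 <id>`. A minimal sketch of that fetch, using the kube-apiserver ID from the found-id lines above purely as a placeholder:

package main

import (
	"fmt"
	"os/exec"
)

// fetchContainerLogs mirrors the `sudo /usr/bin/crictl logs --tail 400 <id>`
// calls above, returning the last N lines of a container's log.
func fetchContainerLogs(id string, tail int) (string, error) {
	cmd := fmt.Sprintf("sudo /usr/bin/crictl logs --tail %d %s", tail, id)
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	// Placeholder ID; the real IDs (kube-apiserver, etcd, coredns, ...) appear in
	// the cri.go:89 "found id" lines above.
	logs, err := fetchContainerLogs("549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f", 400)
	if err != nil {
		fmt.Println("crictl logs failed:", err)
		return
	}
	fmt.Println(logs)
}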
	I1030 19:49:57.333378  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:59.334394  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:59.039071  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:59.053648  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:59.053722  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:59.097620  447486 cri.go:89] found id: ""
	I1030 19:49:59.097650  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.097661  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:59.097669  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:59.097738  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:59.139136  447486 cri.go:89] found id: ""
	I1030 19:49:59.139176  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.139188  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:59.139199  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:59.139270  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:59.180322  447486 cri.go:89] found id: ""
	I1030 19:49:59.180361  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.180371  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:59.180384  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:59.180453  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:59.217374  447486 cri.go:89] found id: ""
	I1030 19:49:59.217422  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.217434  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:59.217443  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:59.217498  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:59.257857  447486 cri.go:89] found id: ""
	I1030 19:49:59.257884  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.257894  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:59.257901  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:59.257968  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:59.297679  447486 cri.go:89] found id: ""
	I1030 19:49:59.297713  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.297724  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:59.297733  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:59.297795  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:59.341469  447486 cri.go:89] found id: ""
	I1030 19:49:59.341499  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.341509  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:59.341517  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:59.341587  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:59.381677  447486 cri.go:89] found id: ""
	I1030 19:49:59.381704  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.381713  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:59.381723  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:59.381735  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:59.441396  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:59.441428  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:59.457105  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:59.457139  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:59.532023  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:59.532051  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:59.532064  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:59.621685  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:59.621720  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:58.481425  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:00.481912  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:02.482130  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:59.010542  446887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:59.027463  446887 api_server.go:72] duration metric: took 4m17.923507495s to wait for apiserver process to appear ...
	I1030 19:49:59.027488  446887 api_server.go:88] waiting for apiserver healthz status ...
	I1030 19:49:59.027524  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:59.027571  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:59.066364  446887 cri.go:89] found id: "549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f"
	I1030 19:49:59.066391  446887 cri.go:89] found id: ""
	I1030 19:49:59.066401  446887 logs.go:282] 1 containers: [549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f]
	I1030 19:49:59.066463  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.072454  446887 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:59.072535  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:59.118043  446887 cri.go:89] found id: "a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5"
	I1030 19:49:59.118072  446887 cri.go:89] found id: ""
	I1030 19:49:59.118081  446887 logs.go:282] 1 containers: [a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5]
	I1030 19:49:59.118142  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.122806  446887 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:59.122883  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:59.167475  446887 cri.go:89] found id: "87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2"
	I1030 19:49:59.167500  446887 cri.go:89] found id: ""
	I1030 19:49:59.167511  446887 logs.go:282] 1 containers: [87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2]
	I1030 19:49:59.167577  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.172181  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:59.172255  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:59.210384  446887 cri.go:89] found id: "0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf"
	I1030 19:49:59.210411  446887 cri.go:89] found id: ""
	I1030 19:49:59.210419  446887 logs.go:282] 1 containers: [0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf]
	I1030 19:49:59.210473  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.216032  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:59.216114  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:59.269770  446887 cri.go:89] found id: "2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6"
	I1030 19:49:59.269791  446887 cri.go:89] found id: ""
	I1030 19:49:59.269799  446887 logs.go:282] 1 containers: [2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6]
	I1030 19:49:59.269851  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.274161  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:59.274239  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:59.313907  446887 cri.go:89] found id: "ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34"
	I1030 19:49:59.313936  446887 cri.go:89] found id: ""
	I1030 19:49:59.313946  446887 logs.go:282] 1 containers: [ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34]
	I1030 19:49:59.314019  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.320687  446887 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:59.320766  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:59.367710  446887 cri.go:89] found id: ""
	I1030 19:49:59.367740  446887 logs.go:282] 0 containers: []
	W1030 19:49:59.367752  446887 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:59.367759  446887 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1030 19:49:59.367826  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1030 19:49:59.422716  446887 cri.go:89] found id: "60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034"
	I1030 19:49:59.422744  446887 cri.go:89] found id: "8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6"
	I1030 19:49:59.422750  446887 cri.go:89] found id: ""
	I1030 19:49:59.422763  446887 logs.go:282] 2 containers: [60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034 8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6]
	I1030 19:49:59.422827  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.428399  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.432404  446887 logs.go:123] Gathering logs for storage-provisioner [8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6] ...
	I1030 19:49:59.432429  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6"
	I1030 19:49:59.475798  446887 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:59.475839  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:59.548960  446887 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:59.548998  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:59.566839  446887 logs.go:123] Gathering logs for kube-proxy [2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6] ...
	I1030 19:49:59.566870  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6"
	I1030 19:49:59.606181  446887 logs.go:123] Gathering logs for kube-controller-manager [ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34] ...
	I1030 19:49:59.606210  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34"
	I1030 19:49:59.670134  446887 logs.go:123] Gathering logs for storage-provisioner [60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034] ...
	I1030 19:49:59.670170  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034"
	I1030 19:49:59.709224  446887 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:59.709253  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:00.132147  446887 logs.go:123] Gathering logs for container status ...
	I1030 19:50:00.132194  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:50:00.181124  446887 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:00.181171  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:50:00.306545  446887 logs.go:123] Gathering logs for kube-apiserver [549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f] ...
	I1030 19:50:00.306585  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f"
	I1030 19:50:00.352129  446887 logs.go:123] Gathering logs for etcd [a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5] ...
	I1030 19:50:00.352169  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5"
	I1030 19:50:00.398083  446887 logs.go:123] Gathering logs for coredns [87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2] ...
	I1030 19:50:00.398119  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2"
	I1030 19:50:00.439813  446887 logs.go:123] Gathering logs for kube-scheduler [0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf] ...
	I1030 19:50:00.439851  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf"
	I1030 19:50:02.978477  446887 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8444/healthz ...
	I1030 19:50:02.983776  446887 api_server.go:279] https://192.168.39.92:8444/healthz returned 200:
	ok
	I1030 19:50:02.984791  446887 api_server.go:141] control plane version: v1.31.2
	I1030 19:50:02.984814  446887 api_server.go:131] duration metric: took 3.957319689s to wait for apiserver health ...
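The api_server.go lines just above poll https://192.168.39.92:8444/healthz and accept a 200 response with body `ok`. A self-contained sketch of that probe; note that the real client authenticates with the cluster's certificates, whereas this sketch simply skips TLS verification to stay short:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Assumption: TLS verification is skipped here only to keep the example
	// self-contained; the test harness uses the cluster's client certs instead.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.92:8444/healthz") // endpoint from the log above
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("returned %d: %s\n", resp.StatusCode, body) // expect 200 and "ok"
}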
	I1030 19:50:02.984822  446887 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 19:50:02.984844  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:50:02.984902  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:50:03.024715  446887 cri.go:89] found id: "549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f"
	I1030 19:50:03.024745  446887 cri.go:89] found id: ""
	I1030 19:50:03.024754  446887 logs.go:282] 1 containers: [549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f]
	I1030 19:50:03.024820  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.029121  446887 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:50:03.029188  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:50:03.064462  446887 cri.go:89] found id: "a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5"
	I1030 19:50:03.064489  446887 cri.go:89] found id: ""
	I1030 19:50:03.064500  446887 logs.go:282] 1 containers: [a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5]
	I1030 19:50:03.064564  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.068587  446887 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:50:03.068665  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:50:03.106880  446887 cri.go:89] found id: "87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2"
	I1030 19:50:03.106902  446887 cri.go:89] found id: ""
	I1030 19:50:03.106910  446887 logs.go:282] 1 containers: [87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2]
	I1030 19:50:03.106978  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.111313  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:50:03.111388  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:50:03.155761  446887 cri.go:89] found id: "0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf"
	I1030 19:50:03.155791  446887 cri.go:89] found id: ""
	I1030 19:50:03.155801  446887 logs.go:282] 1 containers: [0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf]
	I1030 19:50:03.155864  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.160616  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:50:03.160686  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:50:03.199028  446887 cri.go:89] found id: "2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6"
	I1030 19:50:03.199063  446887 cri.go:89] found id: ""
	I1030 19:50:03.199074  446887 logs.go:282] 1 containers: [2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6]
	I1030 19:50:03.199149  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.203348  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:50:03.203414  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:50:03.257739  446887 cri.go:89] found id: "ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34"
	I1030 19:50:03.257769  446887 cri.go:89] found id: ""
	I1030 19:50:03.257780  446887 logs.go:282] 1 containers: [ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34]
	I1030 19:50:03.257845  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.263357  446887 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:50:03.263417  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:50:03.309752  446887 cri.go:89] found id: ""
	I1030 19:50:03.309779  446887 logs.go:282] 0 containers: []
	W1030 19:50:03.309787  446887 logs.go:284] No container was found matching "kindnet"
	I1030 19:50:03.309793  446887 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1030 19:50:03.309843  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1030 19:50:03.351570  446887 cri.go:89] found id: "60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034"
	I1030 19:50:03.351593  446887 cri.go:89] found id: "8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6"
	I1030 19:50:03.351597  446887 cri.go:89] found id: ""
	I1030 19:50:03.351605  446887 logs.go:282] 2 containers: [60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034 8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6]
	I1030 19:50:03.351656  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.364414  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.369070  446887 logs.go:123] Gathering logs for dmesg ...
	I1030 19:50:03.369097  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:50:03.385129  446887 logs.go:123] Gathering logs for kube-apiserver [549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f] ...
	I1030 19:50:03.385161  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f"
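Note: the block above is minikube's component-discovery loop: each control-plane piece is located by asking the CRI runtime for containers whose name matches it. A minimal shell sketch of the same pattern (the component names are the ones queried in the log; the loop itself is illustrative, not minikube's source):

    # Find control-plane containers the way the repeated
    # "crictl ps -a --quiet --name=<component>" calls above do.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager storage-provisioner; do
        ids=$(sudo crictl ps -a --quiet --name="$c")
        if [ -n "$ids" ]; then
            echo "$c: $ids"
        else
            echo "$c: no container found"
        fi
    done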
	I1030 19:50:01.833117  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:04.334645  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:02.170623  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:50:02.184885  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:50:02.184975  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:50:02.223811  447486 cri.go:89] found id: ""
	I1030 19:50:02.223841  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.223849  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:50:02.223856  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:50:02.223908  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:50:02.260454  447486 cri.go:89] found id: ""
	I1030 19:50:02.260481  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.260491  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:50:02.260497  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:50:02.260554  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:50:02.296542  447486 cri.go:89] found id: ""
	I1030 19:50:02.296569  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.296577  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:50:02.296583  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:50:02.296631  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:50:02.332168  447486 cri.go:89] found id: ""
	I1030 19:50:02.332199  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.332211  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:50:02.332219  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:50:02.332287  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:50:02.366539  447486 cri.go:89] found id: ""
	I1030 19:50:02.366575  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.366586  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:50:02.366595  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:50:02.366659  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:50:02.401859  447486 cri.go:89] found id: ""
	I1030 19:50:02.401894  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.401915  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:50:02.401923  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:50:02.401991  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:50:02.446061  447486 cri.go:89] found id: ""
	I1030 19:50:02.446097  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.446108  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:50:02.446116  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:50:02.446181  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:50:02.488233  447486 cri.go:89] found id: ""
	I1030 19:50:02.488257  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.488265  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:50:02.488274  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:50:02.488294  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:50:02.544517  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:50:02.544554  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:50:02.558143  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:02.558179  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:50:02.628679  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:50:02.628706  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:50:02.628723  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:02.710246  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:50:02.710293  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:50:05.254846  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:50:05.269536  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:50:05.269599  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:50:05.303724  447486 cri.go:89] found id: ""
	I1030 19:50:05.303753  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.303761  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:50:05.303767  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:50:05.303819  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:50:05.339268  447486 cri.go:89] found id: ""
	I1030 19:50:05.339301  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.339322  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:50:05.339330  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:50:05.339405  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:50:05.375892  447486 cri.go:89] found id: ""
	I1030 19:50:05.375923  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.375930  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:50:05.375936  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:50:05.375988  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:50:05.413197  447486 cri.go:89] found id: ""
	I1030 19:50:05.413232  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.413243  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:50:05.413252  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:50:05.413329  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:50:05.452095  447486 cri.go:89] found id: ""
	I1030 19:50:05.452122  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.452130  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:50:05.452137  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:50:05.452193  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:50:05.490694  447486 cri.go:89] found id: ""
	I1030 19:50:05.490731  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.490744  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:50:05.490753  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:50:05.490808  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:50:05.523961  447486 cri.go:89] found id: ""
	I1030 19:50:05.523992  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.524001  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:50:05.524008  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:50:05.524060  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:50:05.558631  447486 cri.go:89] found id: ""
	I1030 19:50:05.558664  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.558673  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:50:05.558684  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:50:05.558699  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:50:05.596929  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:50:05.596958  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:50:05.647294  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:50:05.647332  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:50:05.661349  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:05.661377  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:50:05.730268  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:50:05.730299  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:50:05.730323  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:03.434675  446887 logs.go:123] Gathering logs for coredns [87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2] ...
	I1030 19:50:03.434708  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2"
	I1030 19:50:03.474767  446887 logs.go:123] Gathering logs for storage-provisioner [8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6] ...
	I1030 19:50:03.474803  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6"
	I1030 19:50:03.510301  446887 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:50:03.510331  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:03.887871  446887 logs.go:123] Gathering logs for storage-provisioner [60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034] ...
	I1030 19:50:03.887912  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034"
	I1030 19:50:03.930529  446887 logs.go:123] Gathering logs for container status ...
	I1030 19:50:03.930563  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:50:03.971064  446887 logs.go:123] Gathering logs for kubelet ...
	I1030 19:50:03.971102  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:50:04.040593  446887 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:04.040632  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:50:04.157377  446887 logs.go:123] Gathering logs for etcd [a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5] ...
	I1030 19:50:04.157418  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5"
	I1030 19:50:04.205779  446887 logs.go:123] Gathering logs for kube-scheduler [0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf] ...
	I1030 19:50:04.205816  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf"
	I1030 19:50:04.251434  446887 logs.go:123] Gathering logs for kube-proxy [2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6] ...
	I1030 19:50:04.251470  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6"
	I1030 19:50:04.288713  446887 logs.go:123] Gathering logs for kube-controller-manager [ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34] ...
	I1030 19:50:04.288747  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34"
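Note: once the container IDs are known, the "Gathering logs for ..." lines above pull each source with a fixed tail of 400 lines. A sketch that mirrors those exact commands (kube-apiserver is used here only to obtain an example ID):

    # Same gathering commands as in the log, against one container.
    ID=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
    sudo journalctl -u kubelet -n 400                      # kubelet unit logs
    sudo journalctl -u crio -n 400                         # CRI-O unit logs
    sudo /usr/bin/crictl logs --tail 400 "$ID"             # one component's logs
    sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig          # node status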
	I1030 19:50:06.849298  446887 system_pods.go:59] 8 kube-system pods found
	I1030 19:50:06.849329  446887 system_pods.go:61] "coredns-7c65d6cfc9-9w8m8" [d285a845-e17e-4b87-837b-167dbd29f090] Running
	I1030 19:50:06.849334  446887 system_pods.go:61] "etcd-default-k8s-diff-port-768989" [400eedbb-ea1d-47a7-b342-ad29c8851552] Running
	I1030 19:50:06.849340  446887 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-768989" [40389878-99e8-4ae1-899b-a61fc3b7100b] Running
	I1030 19:50:06.849352  446887 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-768989" [4d4ca5b4-d36d-4f69-9712-b7544c4c9a43] Running
	I1030 19:50:06.849358  446887 system_pods.go:61] "kube-proxy-tsr5q" [60ad5830-1638-4209-a2b3-ef78b1df3d34] Running
	I1030 19:50:06.849367  446887 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-768989" [af4c921c-9ea2-4bf3-84c2-8748c3c210bb] Running
	I1030 19:50:06.849373  446887 system_pods.go:61] "metrics-server-6867b74b74-t85rd" [8e162c99-2a94-4340-abe9-f1b312980444] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:50:06.849377  446887 system_pods.go:61] "storage-provisioner" [76805df9-1fbf-468d-a909-3b7c4f889e11] Running
	I1030 19:50:06.849384  446887 system_pods.go:74] duration metric: took 3.864557334s to wait for pod list to return data ...
	I1030 19:50:06.849394  446887 default_sa.go:34] waiting for default service account to be created ...
	I1030 19:50:06.852015  446887 default_sa.go:45] found service account: "default"
	I1030 19:50:06.852037  446887 default_sa.go:55] duration metric: took 2.63686ms for default service account to be created ...
	I1030 19:50:06.852046  446887 system_pods.go:116] waiting for k8s-apps to be running ...
	I1030 19:50:06.856920  446887 system_pods.go:86] 8 kube-system pods found
	I1030 19:50:06.856945  446887 system_pods.go:89] "coredns-7c65d6cfc9-9w8m8" [d285a845-e17e-4b87-837b-167dbd29f090] Running
	I1030 19:50:06.856953  446887 system_pods.go:89] "etcd-default-k8s-diff-port-768989" [400eedbb-ea1d-47a7-b342-ad29c8851552] Running
	I1030 19:50:06.856959  446887 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-768989" [40389878-99e8-4ae1-899b-a61fc3b7100b] Running
	I1030 19:50:06.856966  446887 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-768989" [4d4ca5b4-d36d-4f69-9712-b7544c4c9a43] Running
	I1030 19:50:06.856972  446887 system_pods.go:89] "kube-proxy-tsr5q" [60ad5830-1638-4209-a2b3-ef78b1df3d34] Running
	I1030 19:50:06.856979  446887 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-768989" [af4c921c-9ea2-4bf3-84c2-8748c3c210bb] Running
	I1030 19:50:06.856996  446887 system_pods.go:89] "metrics-server-6867b74b74-t85rd" [8e162c99-2a94-4340-abe9-f1b312980444] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:50:06.857005  446887 system_pods.go:89] "storage-provisioner" [76805df9-1fbf-468d-a909-3b7c4f889e11] Running
	I1030 19:50:06.857015  446887 system_pods.go:126] duration metric: took 4.962745ms to wait for k8s-apps to be running ...
	I1030 19:50:06.857025  446887 system_svc.go:44] waiting for kubelet service to be running ....
	I1030 19:50:06.857086  446887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 19:50:06.874176  446887 system_svc.go:56] duration metric: took 17.144628ms WaitForService to wait for kubelet
	I1030 19:50:06.874206  446887 kubeadm.go:582] duration metric: took 4m25.770253397s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 19:50:06.874230  446887 node_conditions.go:102] verifying NodePressure condition ...
	I1030 19:50:06.876962  446887 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 19:50:06.876987  446887 node_conditions.go:123] node cpu capacity is 2
	I1030 19:50:06.877004  446887 node_conditions.go:105] duration metric: took 2.768174ms to run NodePressure ...
	I1030 19:50:06.877025  446887 start.go:241] waiting for startup goroutines ...
	I1030 19:50:06.877034  446887 start.go:246] waiting for cluster config update ...
	I1030 19:50:06.877070  446887 start.go:255] writing updated cluster config ...
	I1030 19:50:06.877355  446887 ssh_runner.go:195] Run: rm -f paused
	I1030 19:50:06.927147  446887 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1030 19:50:06.929103  446887 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-768989" cluster and "default" namespace by default
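Note: before printing "Done!", the log shows a direct probe of https://192.168.39.92:8444/healthz (returning 200/ok at 19:50:02) followed by a listing of the kube-system pods. The same checks could be reproduced by hand roughly as follows (illustrative; the address, port and context name are taken from this run):

    curl -k https://192.168.39.92:8444/healthz                        # expect "ok"
    kubectl --context default-k8s-diff-port-768989 get nodes
    kubectl --context default-k8s-diff-port-768989 -n kube-system get pods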
	I1030 19:50:04.981923  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:06.982630  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:06.834029  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:08.834616  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:08.312167  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:50:08.327121  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:50:08.327206  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:50:08.364871  447486 cri.go:89] found id: ""
	I1030 19:50:08.364905  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.364916  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:50:08.364924  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:50:08.364982  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:50:08.399179  447486 cri.go:89] found id: ""
	I1030 19:50:08.399215  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.399225  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:50:08.399231  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:50:08.399286  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:50:08.434308  447486 cri.go:89] found id: ""
	I1030 19:50:08.434340  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.434350  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:50:08.434356  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:50:08.434409  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:50:08.477152  447486 cri.go:89] found id: ""
	I1030 19:50:08.477184  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.477193  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:50:08.477204  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:50:08.477274  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:50:08.513678  447486 cri.go:89] found id: ""
	I1030 19:50:08.513706  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.513716  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:50:08.513725  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:50:08.513789  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:50:08.551427  447486 cri.go:89] found id: ""
	I1030 19:50:08.551459  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.551478  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:50:08.551485  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:50:08.551550  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:50:08.584224  447486 cri.go:89] found id: ""
	I1030 19:50:08.584260  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.584272  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:50:08.584282  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:50:08.584351  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:50:08.617603  447486 cri.go:89] found id: ""
	I1030 19:50:08.617638  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.617649  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:50:08.617660  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:08.617674  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:50:08.694201  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:50:08.694229  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:50:08.694247  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:08.775457  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:50:08.775500  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:50:08.816452  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:50:08.816496  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:50:08.868077  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:50:08.868114  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:50:11.383130  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:50:11.397672  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:50:11.397758  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:50:11.431923  447486 cri.go:89] found id: ""
	I1030 19:50:11.431959  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.431971  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:50:11.431980  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:50:11.432050  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:50:11.466959  447486 cri.go:89] found id: ""
	I1030 19:50:11.466996  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.467009  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:50:11.467018  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:50:11.467093  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:50:11.506399  447486 cri.go:89] found id: ""
	I1030 19:50:11.506425  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.506437  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:50:11.506444  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:50:11.506529  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:50:11.538606  447486 cri.go:89] found id: ""
	I1030 19:50:11.538635  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.538643  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:50:11.538649  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:50:11.538700  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:50:11.573265  447486 cri.go:89] found id: ""
	I1030 19:50:11.573296  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.573304  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:50:11.573310  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:50:11.573364  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:50:11.608522  447486 cri.go:89] found id: ""
	I1030 19:50:11.608549  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.608558  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:50:11.608569  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:50:11.608629  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:50:11.639758  447486 cri.go:89] found id: ""
	I1030 19:50:11.639784  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.639792  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:50:11.639797  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:50:11.639846  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:50:11.673381  447486 cri.go:89] found id: ""
	I1030 19:50:11.673414  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.673426  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:50:11.673439  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:50:11.673454  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:50:11.727368  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:50:11.727414  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:50:11.741267  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:11.741301  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:50:09.481159  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:11.483339  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:11.334468  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:13.832615  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	W1030 19:50:11.808126  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:50:11.808158  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:50:11.808174  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:11.888676  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:50:11.888713  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:50:14.431637  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:50:14.445315  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:50:14.445392  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:50:14.482059  447486 cri.go:89] found id: ""
	I1030 19:50:14.482097  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.482110  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:50:14.482118  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:50:14.482186  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:50:14.520802  447486 cri.go:89] found id: ""
	I1030 19:50:14.520834  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.520843  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:50:14.520849  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:50:14.520900  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:50:14.559965  447486 cri.go:89] found id: ""
	I1030 19:50:14.559996  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.560006  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:50:14.560012  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:50:14.560062  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:50:14.601831  447486 cri.go:89] found id: ""
	I1030 19:50:14.601865  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.601875  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:50:14.601881  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:50:14.601932  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:50:14.635307  447486 cri.go:89] found id: ""
	I1030 19:50:14.635339  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.635348  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:50:14.635355  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:50:14.635418  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:50:14.668618  447486 cri.go:89] found id: ""
	I1030 19:50:14.668648  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.668657  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:50:14.668664  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:50:14.668726  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:50:14.702597  447486 cri.go:89] found id: ""
	I1030 19:50:14.702633  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.702644  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:50:14.702653  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:50:14.702715  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:50:14.736860  447486 cri.go:89] found id: ""
	I1030 19:50:14.736899  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.736911  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:50:14.736925  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:50:14.736942  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:14.822015  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:50:14.822060  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:50:14.860153  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:50:14.860195  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:50:14.912230  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:50:14.912269  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:50:14.927032  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:14.927067  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:50:14.994401  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
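Note: for this node every "describe nodes" attempt fails with "connection refused" on localhost:8443 and every crictl query returns zero containers, i.e. the old-k8s-version (v1.20.0) control plane is simply not running. A short sketch of checks that reach the same conclusion (standard crictl/systemd invocations, not taken from minikube's code):

    sudo crictl ps -a --quiet --name=kube-apiserver   # empty => no apiserver container
    sudo systemctl is-active kubelet                  # is the kubelet running at all?
    sudo journalctl -u kubelet -n 50 --no-pager       # recent kubelet errors, if any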
	I1030 19:50:13.975124  446965 pod_ready.go:82] duration metric: took 4m0.000158179s for pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace to be "Ready" ...
	E1030 19:50:13.975173  446965 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace to be "Ready" (will not retry!)
	I1030 19:50:13.975201  446965 pod_ready.go:39] duration metric: took 4m14.686087419s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:50:13.975238  446965 kubeadm.go:597] duration metric: took 4m22.157012059s to restartPrimaryControlPlane
	W1030 19:50:13.975313  446965 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1030 19:50:13.975366  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
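Note: because the control plane never came back within the 4m0s wait, minikube gives up on restarting it ("will reset cluster") and falls back to wiping the node and re-initializing it. Condensed, that fallback amounts to the two commands below (paths and version are the ones from this run; the --ignore-preflight-errors list is abbreviated here):

    sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" \
        kubeadm reset --cri-socket /var/run/crio/crio.sock --force
    sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" \
        kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
        --ignore-preflight-errors=Port-10250,Swap,NumCPU,Mem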
	I1030 19:50:15.833986  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:17.835468  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:17.494865  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:50:17.509934  447486 kubeadm.go:597] duration metric: took 4m3.074434895s to restartPrimaryControlPlane
	W1030 19:50:17.510016  447486 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1030 19:50:17.510051  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1030 19:50:18.496415  447486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 19:50:18.512328  447486 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 19:50:18.522293  447486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:50:18.532752  447486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:50:18.532772  447486 kubeadm.go:157] found existing configuration files:
	
	I1030 19:50:18.532823  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 19:50:18.542501  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:50:18.542560  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:50:18.552660  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 19:50:18.562585  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:50:18.562649  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:50:18.572321  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 19:50:18.581633  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:50:18.581689  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:50:18.592770  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 19:50:18.602414  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:50:18.602477  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
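Note: the grep / rm -f pairs above are minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint. Written as a loop, the same check looks roughly like this (a paraphrase of the logged commands, not minikube's source):

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
            sudo rm -f "/etc/kubernetes/$f"    # stale or missing: remove before kubeadm init
        fi
    done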
	I1030 19:50:18.612334  447486 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1030 19:50:18.844753  447486 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1030 19:50:20.333715  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:22.832817  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:24.833349  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:27.332723  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:29.335009  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:31.832584  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:33.834506  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:36.333902  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:38.833159  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:40.157555  446965 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.182163055s)
	I1030 19:50:40.157637  446965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 19:50:40.174413  446965 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 19:50:40.184817  446965 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:50:40.195446  446965 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:50:40.195475  446965 kubeadm.go:157] found existing configuration files:
	
	I1030 19:50:40.195527  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 19:50:40.205509  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:50:40.205575  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:50:40.217343  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 19:50:40.227666  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:50:40.227729  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:50:40.237594  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 19:50:40.247151  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:50:40.247209  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:50:40.256854  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 19:50:40.266306  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:50:40.266379  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1030 19:50:40.276409  446965 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1030 19:50:40.322080  446965 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1030 19:50:40.322174  446965 kubeadm.go:310] [preflight] Running pre-flight checks
	I1030 19:50:40.433056  446965 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1030 19:50:40.433251  446965 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1030 19:50:40.433390  446965 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1030 19:50:40.445085  446965 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1030 19:50:40.447192  446965 out.go:235]   - Generating certificates and keys ...
	I1030 19:50:40.447301  446965 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1030 19:50:40.447395  446965 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1030 19:50:40.447512  446965 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1030 19:50:40.447600  446965 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1030 19:50:40.447735  446965 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1030 19:50:40.447825  446965 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1030 19:50:40.447912  446965 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1030 19:50:40.447999  446965 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1030 19:50:40.448108  446965 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1030 19:50:40.448208  446965 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1030 19:50:40.448266  446965 kubeadm.go:310] [certs] Using the existing "sa" key
	I1030 19:50:40.448345  446965 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1030 19:50:40.590735  446965 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1030 19:50:40.714139  446965 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1030 19:50:40.808334  446965 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1030 19:50:40.940687  446965 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1030 19:50:41.085266  446965 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1030 19:50:41.085840  446965 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1030 19:50:41.088415  446965 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1030 19:50:41.090229  446965 out.go:235]   - Booting up control plane ...
	I1030 19:50:41.090349  446965 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1030 19:50:41.090466  446965 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1030 19:50:41.090573  446965 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1030 19:50:41.112262  446965 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1030 19:50:41.118809  446965 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1030 19:50:41.118919  446965 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1030 19:50:41.243915  446965 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1030 19:50:41.244093  446965 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1030 19:50:41.745362  446965 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.630697ms
	I1030 19:50:41.745513  446965 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1030 19:50:40.834005  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:42.834286  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:46.748431  446965 kubeadm.go:310] [api-check] The API server is healthy after 5.001587935s
	I1030 19:50:46.762271  446965 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1030 19:50:46.781785  446965 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1030 19:50:46.806338  446965 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1030 19:50:46.806613  446965 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-042402 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1030 19:50:46.819762  446965 kubeadm.go:310] [bootstrap-token] Using token: k711fn.1we2gia9o31jm3ip
	I1030 19:50:46.821026  446965 out.go:235]   - Configuring RBAC rules ...
	I1030 19:50:46.821137  446965 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1030 19:50:46.827537  446965 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1030 19:50:46.836653  446965 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1030 19:50:46.844891  446965 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1030 19:50:46.848423  446965 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1030 19:50:46.851674  446965 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1030 19:50:47.157946  446965 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1030 19:50:47.615774  446965 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1030 19:50:48.154429  446965 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1030 19:50:48.159547  446965 kubeadm.go:310] 
	I1030 19:50:48.159636  446965 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1030 19:50:48.159648  446965 kubeadm.go:310] 
	I1030 19:50:48.159762  446965 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1030 19:50:48.159776  446965 kubeadm.go:310] 
	I1030 19:50:48.159806  446965 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1030 19:50:48.159880  446965 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1030 19:50:48.159934  446965 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1030 19:50:48.159944  446965 kubeadm.go:310] 
	I1030 19:50:48.160029  446965 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1030 19:50:48.160040  446965 kubeadm.go:310] 
	I1030 19:50:48.160123  446965 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1030 19:50:48.160154  446965 kubeadm.go:310] 
	I1030 19:50:48.160242  446965 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1030 19:50:48.160351  446965 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1030 19:50:48.160440  446965 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1030 19:50:48.160450  446965 kubeadm.go:310] 
	I1030 19:50:48.160570  446965 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1030 19:50:48.160652  446965 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1030 19:50:48.160660  446965 kubeadm.go:310] 
	I1030 19:50:48.160729  446965 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token k711fn.1we2gia9o31jm3ip \
	I1030 19:50:48.160818  446965 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b2143b9018fc79be6f35b8981f64b262448bd09f7227405c8dafa29481e81491 \
	I1030 19:50:48.160838  446965 kubeadm.go:310] 	--control-plane 
	I1030 19:50:48.160846  446965 kubeadm.go:310] 
	I1030 19:50:48.160943  446965 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1030 19:50:48.160955  446965 kubeadm.go:310] 
	I1030 19:50:48.161065  446965 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token k711fn.1we2gia9o31jm3ip \
	I1030 19:50:48.161205  446965 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b2143b9018fc79be6f35b8981f64b262448bd09f7227405c8dafa29481e81491 
	I1030 19:50:48.162302  446965 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1030 19:50:48.162390  446965 cni.go:84] Creating CNI manager for ""
	I1030 19:50:48.162408  446965 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:50:48.164041  446965 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1030 19:50:45.333255  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:47.334686  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:49.832993  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:48.165318  446965 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1030 19:50:48.176702  446965 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
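(The two steps above create /etc/cni/net.d and push a 1-k8s.conflist; the report only records its size (496 bytes), not its contents. Purely as an illustration, a minimal bridge conflist of the general shape used with the kvm2 driver and crio might look like the sketch below — the name, subnet, and plugin options are assumptions, not the file minikube actually wrote.)

    package main

    import (
    	"log"
    	"os"
    )

    // Illustrative only: a minimal CNI bridge conflist of the general shape
    // written to /etc/cni/net.d. The real 1-k8s.conflist content is not shown
    // in the test log, so the subnet, name, and plugin options are assumptions.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
    	// Mirrors the two logged steps: mkdir -p /etc/cni/net.d, then write the conflist.
    	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
    		log.Fatal(err)
    	}
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
    		log.Fatal(err)
    	}
    }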
	I1030 19:50:48.199681  446965 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1030 19:50:48.199776  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:48.199840  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-042402 minikube.k8s.io/updated_at=2024_10_30T19_50_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0 minikube.k8s.io/name=embed-certs-042402 minikube.k8s.io/primary=true
	I1030 19:50:48.226617  446965 ops.go:34] apiserver oom_adj: -16
	I1030 19:50:48.404620  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:48.905366  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:49.405663  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:49.904925  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:50.405082  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:50.905099  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:51.404860  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:51.905534  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:52.405432  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:52.905289  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:53.010770  446965 kubeadm.go:1113] duration metric: took 4.811061462s to wait for elevateKubeSystemPrivileges
	I1030 19:50:53.010818  446965 kubeadm.go:394] duration metric: took 5m1.251362756s to StartCluster
	I1030 19:50:53.010849  446965 settings.go:142] acquiring lock: {Name:mk3778f95b8c1775512fd8244c173f81fde2f8a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:50:53.010948  446965 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 19:50:53.012997  446965 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/kubeconfig: {Name:mkad9ac9beb9742fc1a420e6d4412db8b65962e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:50:53.013284  446965 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.235 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 19:50:53.013411  446965 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1030 19:50:53.013518  446965 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-042402"
	I1030 19:50:53.013539  446965 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-042402"
	I1030 19:50:53.013539  446965 config.go:182] Loaded profile config "embed-certs-042402": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	W1030 19:50:53.013550  446965 addons.go:243] addon storage-provisioner should already be in state true
	I1030 19:50:53.013600  446965 host.go:66] Checking if "embed-certs-042402" exists ...
	I1030 19:50:53.013546  446965 addons.go:69] Setting default-storageclass=true in profile "embed-certs-042402"
	I1030 19:50:53.013605  446965 addons.go:69] Setting metrics-server=true in profile "embed-certs-042402"
	I1030 19:50:53.013635  446965 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-042402"
	I1030 19:50:53.013642  446965 addons.go:234] Setting addon metrics-server=true in "embed-certs-042402"
	W1030 19:50:53.013650  446965 addons.go:243] addon metrics-server should already be in state true
	I1030 19:50:53.013675  446965 host.go:66] Checking if "embed-certs-042402" exists ...
	I1030 19:50:53.013947  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:50:53.014005  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:50:53.014010  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:50:53.014022  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:50:53.014058  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:50:53.014112  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:50:53.015033  446965 out.go:177] * Verifying Kubernetes components...
	I1030 19:50:53.016527  446965 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:50:53.030033  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33163
	I1030 19:50:53.030290  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40505
	I1030 19:50:53.030618  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:50:53.030733  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:50:53.031192  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:50:53.031209  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:50:53.031342  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:50:53.031356  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:50:53.031577  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:50:53.031773  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetState
	I1030 19:50:53.031801  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:50:53.032289  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43131
	I1030 19:50:53.032910  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:50:53.032953  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:50:53.033170  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:50:53.033684  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:50:53.033699  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:50:53.035082  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:50:53.035104  446965 addons.go:234] Setting addon default-storageclass=true in "embed-certs-042402"
	W1030 19:50:53.035124  446965 addons.go:243] addon default-storageclass should already be in state true
	I1030 19:50:53.035158  446965 host.go:66] Checking if "embed-certs-042402" exists ...
	I1030 19:50:53.035461  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:50:53.035492  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:50:53.036666  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:50:53.036697  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:50:53.054685  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41111
	I1030 19:50:53.055271  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:50:53.055621  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33133
	I1030 19:50:53.055762  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:50:53.055779  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:50:53.056073  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:50:53.056192  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:50:53.056410  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetState
	I1030 19:50:53.056665  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:50:53.056688  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:50:53.057099  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:50:53.057693  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:50:53.057741  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:50:53.058427  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:50:53.058756  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38161
	I1030 19:50:53.059684  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:50:53.060230  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:50:53.060253  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:50:53.060597  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:50:53.060806  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetState
	I1030 19:50:53.060880  446965 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:50:53.062367  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:50:53.062469  446965 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 19:50:53.062506  446965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1030 19:50:53.062526  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:50:53.063955  446965 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1030 19:50:53.065131  446965 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1030 19:50:53.065153  446965 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1030 19:50:53.065173  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:50:53.065987  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:50:53.066607  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:50:53.066640  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:50:53.066723  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:50:53.066956  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:50:53.067102  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:50:53.067254  446965 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa Username:docker}
	I1030 19:50:53.068475  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:50:53.068916  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:50:53.068939  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:50:53.069098  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:50:53.069288  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:50:53.069457  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:50:53.069625  446965 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa Username:docker}
	I1030 19:50:53.075920  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33291
	I1030 19:50:53.076341  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:50:53.076758  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:50:53.076779  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:50:53.077042  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:50:53.077238  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetState
	I1030 19:50:53.078809  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:50:53.079065  446965 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1030 19:50:53.079088  446965 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1030 19:50:53.079105  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:50:53.081873  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:50:53.082309  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:50:53.082339  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:50:53.082515  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:50:53.082705  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:50:53.082863  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:50:53.083061  446965 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa Username:docker}
	I1030 19:50:53.274313  446965 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:50:53.305281  446965 node_ready.go:35] waiting up to 6m0s for node "embed-certs-042402" to be "Ready" ...
	I1030 19:50:53.313184  446965 node_ready.go:49] node "embed-certs-042402" has status "Ready":"True"
	I1030 19:50:53.313217  446965 node_ready.go:38] duration metric: took 7.892097ms for node "embed-certs-042402" to be "Ready" ...
	I1030 19:50:53.313230  446965 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:50:53.321668  446965 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hvg4g" in "kube-system" namespace to be "Ready" ...
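(The pod_ready entries above poll each system-critical pod until its Ready condition turns True or the 6m0s deadline expires. A rough sketch of that kind of readiness poll with client-go is shown below, assuming a kubeconfig at /var/lib/minikube/kubeconfig; it is an illustration of the pattern, not minikube's pod_ready.go.)

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls a pod until its Ready condition is True or the timeout expires.
    func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
    }

    func main() {
    	// Kubeconfig path is an assumption for illustration.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	if err := waitPodReady(context.Background(), cs, "kube-system", "coredns-7c65d6cfc9-hvg4g", 6*time.Minute); err != nil {
    		panic(err)
    	}
    	fmt.Println("pod is Ready")
    }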
	I1030 19:50:53.406960  446965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 19:50:53.427287  446965 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1030 19:50:53.427324  446965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1030 19:50:53.475089  446965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1030 19:50:53.485983  446965 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1030 19:50:53.486013  446965 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1030 19:50:53.570871  446965 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1030 19:50:53.570904  446965 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1030 19:50:53.670898  446965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1030 19:50:54.545328  446965 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.138329529s)
	I1030 19:50:54.545384  446965 main.go:141] libmachine: Making call to close driver server
	I1030 19:50:54.545383  446965 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.070259573s)
	I1030 19:50:54.545399  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Close
	I1030 19:50:54.545426  446965 main.go:141] libmachine: Making call to close driver server
	I1030 19:50:54.545445  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Close
	I1030 19:50:54.545720  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Closing plugin on server side
	I1030 19:50:54.545732  446965 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:50:54.545748  446965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:50:54.545757  446965 main.go:141] libmachine: Making call to close driver server
	I1030 19:50:54.545761  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Closing plugin on server side
	I1030 19:50:54.545765  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Close
	I1030 19:50:54.545787  446965 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:50:54.545794  446965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:50:54.545802  446965 main.go:141] libmachine: Making call to close driver server
	I1030 19:50:54.545808  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Close
	I1030 19:50:54.546139  446965 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:50:54.546162  446965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:50:54.546465  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Closing plugin on server side
	I1030 19:50:54.546468  446965 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:50:54.546507  446965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:50:54.576380  446965 main.go:141] libmachine: Making call to close driver server
	I1030 19:50:54.576408  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Close
	I1030 19:50:54.576738  446965 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:50:54.576787  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Closing plugin on server side
	I1030 19:50:54.576804  446965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:50:54.703670  446965 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.032714873s)
	I1030 19:50:54.703724  446965 main.go:141] libmachine: Making call to close driver server
	I1030 19:50:54.703736  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Close
	I1030 19:50:54.704025  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Closing plugin on server side
	I1030 19:50:54.704059  446965 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:50:54.704076  446965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:50:54.704085  446965 main.go:141] libmachine: Making call to close driver server
	I1030 19:50:54.704104  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Close
	I1030 19:50:54.704350  446965 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:50:54.704362  446965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:50:54.704374  446965 addons.go:475] Verifying addon metrics-server=true in "embed-certs-042402"
	I1030 19:50:54.706330  446965 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1030 19:50:51.833654  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:54.333879  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:54.707723  446965 addons.go:510] duration metric: took 1.694322523s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1030 19:50:55.328470  446965 pod_ready.go:103] pod "coredns-7c65d6cfc9-hvg4g" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:57.828224  446965 pod_ready.go:103] pod "coredns-7c65d6cfc9-hvg4g" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:56.832967  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:58.833284  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:59.828636  446965 pod_ready.go:103] pod "coredns-7c65d6cfc9-hvg4g" in "kube-system" namespace has status "Ready":"False"
	I1030 19:51:01.828151  446965 pod_ready.go:93] pod "coredns-7c65d6cfc9-hvg4g" in "kube-system" namespace has status "Ready":"True"
	I1030 19:51:01.828178  446965 pod_ready.go:82] duration metric: took 8.506481998s for pod "coredns-7c65d6cfc9-hvg4g" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:01.828187  446965 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-pzbpd" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:01.833094  446965 pod_ready.go:93] pod "coredns-7c65d6cfc9-pzbpd" in "kube-system" namespace has status "Ready":"True"
	I1030 19:51:01.833121  446965 pod_ready.go:82] duration metric: took 4.926401ms for pod "coredns-7c65d6cfc9-pzbpd" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:01.833133  446965 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:01.837391  446965 pod_ready.go:93] pod "etcd-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:51:01.837410  446965 pod_ready.go:82] duration metric: took 4.27047ms for pod "etcd-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:01.837419  446965 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:02.344200  446965 pod_ready.go:93] pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:51:02.344224  446965 pod_ready.go:82] duration metric: took 506.798667ms for pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:02.344233  446965 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:02.349020  446965 pod_ready.go:93] pod "kube-controller-manager-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:51:02.349042  446965 pod_ready.go:82] duration metric: took 4.801739ms for pod "kube-controller-manager-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:02.349055  446965 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-m9zwz" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:02.626109  446965 pod_ready.go:93] pod "kube-proxy-m9zwz" in "kube-system" namespace has status "Ready":"True"
	I1030 19:51:02.626137  446965 pod_ready.go:82] duration metric: took 277.074567ms for pod "kube-proxy-m9zwz" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:02.626146  446965 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:03.027456  446965 pod_ready.go:93] pod "kube-scheduler-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:51:03.027482  446965 pod_ready.go:82] duration metric: took 401.329277ms for pod "kube-scheduler-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:03.027493  446965 pod_ready.go:39] duration metric: took 9.714247169s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:51:03.027513  446965 api_server.go:52] waiting for apiserver process to appear ...
	I1030 19:51:03.027579  446965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:51:03.043403  446965 api_server.go:72] duration metric: took 10.030078869s to wait for apiserver process to appear ...
	I1030 19:51:03.043431  446965 api_server.go:88] waiting for apiserver healthz status ...
	I1030 19:51:03.043456  446965 api_server.go:253] Checking apiserver healthz at https://192.168.61.235:8443/healthz ...
	I1030 19:51:03.048722  446965 api_server.go:279] https://192.168.61.235:8443/healthz returned 200:
	ok
	I1030 19:51:03.049572  446965 api_server.go:141] control plane version: v1.31.2
	I1030 19:51:03.049595  446965 api_server.go:131] duration metric: took 6.156928ms to wait for apiserver health ...
	I1030 19:51:03.049603  446965 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 19:51:03.233170  446965 system_pods.go:59] 9 kube-system pods found
	I1030 19:51:03.233205  446965 system_pods.go:61] "coredns-7c65d6cfc9-hvg4g" [f9e7e143-3e12-4c1a-9fb0-6f58a37f8a55] Running
	I1030 19:51:03.233212  446965 system_pods.go:61] "coredns-7c65d6cfc9-pzbpd" [8f486ff4-c665-42ec-9b98-15ea94e0ded8] Running
	I1030 19:51:03.233217  446965 system_pods.go:61] "etcd-embed-certs-042402" [5149c262-698b-46be-9b15-c7b5e93fd3cd] Running
	I1030 19:51:03.233222  446965 system_pods.go:61] "kube-apiserver-embed-certs-042402" [9a6c3f9a-4b89-4e8d-abb2-445399eea3eb] Running
	I1030 19:51:03.233227  446965 system_pods.go:61] "kube-controller-manager-embed-certs-042402" [cce89309-3c5f-4d17-8426-fdb8a02acdb0] Running
	I1030 19:51:03.233231  446965 system_pods.go:61] "kube-proxy-m9zwz" [e7b6fb8b-2287-47c0-b9c8-a3b1c3020894] Running
	I1030 19:51:03.233236  446965 system_pods.go:61] "kube-scheduler-embed-certs-042402" [e408e85c-ac6c-4afb-8391-935b7c579b4f] Running
	I1030 19:51:03.233247  446965 system_pods.go:61] "metrics-server-6867b74b74-6hrq4" [a5bb1778-0a28-4649-a2ac-a5f0e1b810de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:51:03.233255  446965 system_pods.go:61] "storage-provisioner" [729733b2-e703-4e9b-9d05-a2f0fb632149] Running
	I1030 19:51:03.233272  446965 system_pods.go:74] duration metric: took 183.660307ms to wait for pod list to return data ...
	I1030 19:51:03.233287  446965 default_sa.go:34] waiting for default service account to be created ...
	I1030 19:51:03.427520  446965 default_sa.go:45] found service account: "default"
	I1030 19:51:03.427550  446965 default_sa.go:55] duration metric: took 194.254547ms for default service account to be created ...
	I1030 19:51:03.427562  446965 system_pods.go:116] waiting for k8s-apps to be running ...
	I1030 19:51:03.629316  446965 system_pods.go:86] 9 kube-system pods found
	I1030 19:51:03.629351  446965 system_pods.go:89] "coredns-7c65d6cfc9-hvg4g" [f9e7e143-3e12-4c1a-9fb0-6f58a37f8a55] Running
	I1030 19:51:03.629364  446965 system_pods.go:89] "coredns-7c65d6cfc9-pzbpd" [8f486ff4-c665-42ec-9b98-15ea94e0ded8] Running
	I1030 19:51:03.629370  446965 system_pods.go:89] "etcd-embed-certs-042402" [5149c262-698b-46be-9b15-c7b5e93fd3cd] Running
	I1030 19:51:03.629377  446965 system_pods.go:89] "kube-apiserver-embed-certs-042402" [9a6c3f9a-4b89-4e8d-abb2-445399eea3eb] Running
	I1030 19:51:03.629381  446965 system_pods.go:89] "kube-controller-manager-embed-certs-042402" [cce89309-3c5f-4d17-8426-fdb8a02acdb0] Running
	I1030 19:51:03.629386  446965 system_pods.go:89] "kube-proxy-m9zwz" [e7b6fb8b-2287-47c0-b9c8-a3b1c3020894] Running
	I1030 19:51:03.629391  446965 system_pods.go:89] "kube-scheduler-embed-certs-042402" [e408e85c-ac6c-4afb-8391-935b7c579b4f] Running
	I1030 19:51:03.629399  446965 system_pods.go:89] "metrics-server-6867b74b74-6hrq4" [a5bb1778-0a28-4649-a2ac-a5f0e1b810de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:51:03.629405  446965 system_pods.go:89] "storage-provisioner" [729733b2-e703-4e9b-9d05-a2f0fb632149] Running
	I1030 19:51:03.629418  446965 system_pods.go:126] duration metric: took 201.847233ms to wait for k8s-apps to be running ...
	I1030 19:51:03.629432  446965 system_svc.go:44] waiting for kubelet service to be running ....
	I1030 19:51:03.629486  446965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 19:51:03.649120  446965 system_svc.go:56] duration metric: took 19.675022ms WaitForService to wait for kubelet
	I1030 19:51:03.649166  446965 kubeadm.go:582] duration metric: took 10.635844977s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 19:51:03.649192  446965 node_conditions.go:102] verifying NodePressure condition ...
	I1030 19:51:03.826763  446965 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 19:51:03.826790  446965 node_conditions.go:123] node cpu capacity is 2
	I1030 19:51:03.826803  446965 node_conditions.go:105] duration metric: took 177.604616ms to run NodePressure ...
	I1030 19:51:03.826819  446965 start.go:241] waiting for startup goroutines ...
	I1030 19:51:03.826827  446965 start.go:246] waiting for cluster config update ...
	I1030 19:51:03.826841  446965 start.go:255] writing updated cluster config ...
	I1030 19:51:03.827126  446965 ssh_runner.go:195] Run: rm -f paused
	I1030 19:51:03.877974  446965 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1030 19:51:03.880121  446965 out.go:177] * Done! kubectl is now configured to use "embed-certs-042402" cluster and "default" namespace by default
	I1030 19:51:00.833673  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:51:03.333042  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:51:05.333431  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:51:07.833229  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:51:09.833772  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:51:10.833131  446736 pod_ready.go:82] duration metric: took 4m0.006526983s for pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace to be "Ready" ...
	E1030 19:51:10.833166  446736 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1030 19:51:10.833178  446736 pod_ready.go:39] duration metric: took 4m7.416690025s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:51:10.833200  446736 api_server.go:52] waiting for apiserver process to appear ...
	I1030 19:51:10.833239  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:51:10.833300  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:51:10.884016  446736 cri.go:89] found id: "990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4"
	I1030 19:51:10.884046  446736 cri.go:89] found id: ""
	I1030 19:51:10.884055  446736 logs.go:282] 1 containers: [990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4]
	I1030 19:51:10.884108  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:10.888789  446736 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:51:10.888857  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:51:10.931994  446736 cri.go:89] found id: "ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67"
	I1030 19:51:10.932037  446736 cri.go:89] found id: ""
	I1030 19:51:10.932047  446736 logs.go:282] 1 containers: [ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67]
	I1030 19:51:10.932097  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:10.937113  446736 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:51:10.937181  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:51:10.977951  446736 cri.go:89] found id: "1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd"
	I1030 19:51:10.977982  446736 cri.go:89] found id: ""
	I1030 19:51:10.977993  446736 logs.go:282] 1 containers: [1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd]
	I1030 19:51:10.978050  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:10.982791  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:51:10.982863  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:51:11.021741  446736 cri.go:89] found id: "2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889"
	I1030 19:51:11.021770  446736 cri.go:89] found id: ""
	I1030 19:51:11.021780  446736 logs.go:282] 1 containers: [2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889]
	I1030 19:51:11.021837  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:11.026590  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:51:11.026653  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:51:11.068839  446736 cri.go:89] found id: "0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9"
	I1030 19:51:11.068873  446736 cri.go:89] found id: ""
	I1030 19:51:11.068885  446736 logs.go:282] 1 containers: [0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9]
	I1030 19:51:11.068946  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:11.073103  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:51:11.073171  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:51:11.108404  446736 cri.go:89] found id: "cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c"
	I1030 19:51:11.108432  446736 cri.go:89] found id: ""
	I1030 19:51:11.108443  446736 logs.go:282] 1 containers: [cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c]
	I1030 19:51:11.108506  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:11.112903  446736 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:51:11.112974  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:51:11.153767  446736 cri.go:89] found id: ""
	I1030 19:51:11.153800  446736 logs.go:282] 0 containers: []
	W1030 19:51:11.153812  446736 logs.go:284] No container was found matching "kindnet"
	I1030 19:51:11.153821  446736 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1030 19:51:11.153892  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1030 19:51:11.194649  446736 cri.go:89] found id: "822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4"
	I1030 19:51:11.194681  446736 cri.go:89] found id: "de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef"
	I1030 19:51:11.194687  446736 cri.go:89] found id: ""
	I1030 19:51:11.194697  446736 logs.go:282] 2 containers: [822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4 de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef]
	I1030 19:51:11.194770  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:11.199037  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:11.202957  446736 logs.go:123] Gathering logs for etcd [ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67] ...
	I1030 19:51:11.202984  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67"
	I1030 19:51:11.246187  446736 logs.go:123] Gathering logs for kube-proxy [0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9] ...
	I1030 19:51:11.246220  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9"
	I1030 19:51:11.286608  446736 logs.go:123] Gathering logs for kube-controller-manager [cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c] ...
	I1030 19:51:11.286643  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c"
	I1030 19:51:11.339119  446736 logs.go:123] Gathering logs for storage-provisioner [822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4] ...
	I1030 19:51:11.339157  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4"
	I1030 19:51:11.376624  446736 logs.go:123] Gathering logs for storage-provisioner [de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef] ...
	I1030 19:51:11.376653  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef"
	I1030 19:51:11.411401  446736 logs.go:123] Gathering logs for kubelet ...
	I1030 19:51:11.411431  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:51:11.481668  446736 logs.go:123] Gathering logs for dmesg ...
	I1030 19:51:11.481710  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:51:11.497767  446736 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:51:11.497799  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:51:11.612001  446736 logs.go:123] Gathering logs for kube-apiserver [990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4] ...
	I1030 19:51:11.612034  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4"
	I1030 19:51:11.656553  446736 logs.go:123] Gathering logs for coredns [1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd] ...
	I1030 19:51:11.656589  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd"
	I1030 19:51:11.695387  446736 logs.go:123] Gathering logs for kube-scheduler [2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889] ...
	I1030 19:51:11.695428  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889"
	I1030 19:51:11.732386  446736 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:51:11.732419  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:51:12.217007  446736 logs.go:123] Gathering logs for container status ...
	I1030 19:51:12.217056  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:51:14.769155  446736 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:51:14.787096  446736 api_server.go:72] duration metric: took 4m17.097569041s to wait for apiserver process to appear ...
	I1030 19:51:14.787128  446736 api_server.go:88] waiting for apiserver healthz status ...
	I1030 19:51:14.787176  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:51:14.787235  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:51:14.823506  446736 cri.go:89] found id: "990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4"
	I1030 19:51:14.823533  446736 cri.go:89] found id: ""
	I1030 19:51:14.823541  446736 logs.go:282] 1 containers: [990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4]
	I1030 19:51:14.823595  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:14.828125  446736 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:51:14.828214  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:51:14.867890  446736 cri.go:89] found id: "ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67"
	I1030 19:51:14.867914  446736 cri.go:89] found id: ""
	I1030 19:51:14.867922  446736 logs.go:282] 1 containers: [ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67]
	I1030 19:51:14.867970  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:14.873213  446736 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:51:14.873283  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:51:14.913068  446736 cri.go:89] found id: "1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd"
	I1030 19:51:14.913103  446736 cri.go:89] found id: ""
	I1030 19:51:14.913114  446736 logs.go:282] 1 containers: [1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd]
	I1030 19:51:14.913179  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:14.918380  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:51:14.918459  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:51:14.956150  446736 cri.go:89] found id: "2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889"
	I1030 19:51:14.956177  446736 cri.go:89] found id: ""
	I1030 19:51:14.956187  446736 logs.go:282] 1 containers: [2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889]
	I1030 19:51:14.956294  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:14.960781  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:51:14.960836  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:51:15.001804  446736 cri.go:89] found id: "0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9"
	I1030 19:51:15.001833  446736 cri.go:89] found id: ""
	I1030 19:51:15.001844  446736 logs.go:282] 1 containers: [0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9]
	I1030 19:51:15.001893  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:15.006341  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:51:15.006401  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:51:15.045202  446736 cri.go:89] found id: "cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c"
	I1030 19:51:15.045236  446736 cri.go:89] found id: ""
	I1030 19:51:15.045247  446736 logs.go:282] 1 containers: [cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c]
	I1030 19:51:15.045326  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:15.051967  446736 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:51:15.052031  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:51:15.091569  446736 cri.go:89] found id: ""
	I1030 19:51:15.091596  446736 logs.go:282] 0 containers: []
	W1030 19:51:15.091604  446736 logs.go:284] No container was found matching "kindnet"
	I1030 19:51:15.091611  446736 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1030 19:51:15.091668  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1030 19:51:15.135521  446736 cri.go:89] found id: "822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4"
	I1030 19:51:15.135551  446736 cri.go:89] found id: "de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef"
	I1030 19:51:15.135557  446736 cri.go:89] found id: ""
	I1030 19:51:15.135567  446736 logs.go:282] 2 containers: [822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4 de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef]
	I1030 19:51:15.135633  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:15.140215  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:15.145490  446736 logs.go:123] Gathering logs for etcd [ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67] ...
	I1030 19:51:15.145514  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67"
	I1030 19:51:15.205939  446736 logs.go:123] Gathering logs for coredns [1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd] ...
	I1030 19:51:15.205972  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd"
	I1030 19:51:15.240157  446736 logs.go:123] Gathering logs for kube-proxy [0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9] ...
	I1030 19:51:15.240194  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9"
	I1030 19:51:15.277168  446736 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:51:15.277200  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:51:15.708451  446736 logs.go:123] Gathering logs for container status ...
	I1030 19:51:15.708499  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:51:15.750544  446736 logs.go:123] Gathering logs for kubelet ...
	I1030 19:51:15.750577  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:51:15.820071  446736 logs.go:123] Gathering logs for kube-apiserver [990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4] ...
	I1030 19:51:15.820113  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4"
	I1030 19:51:15.870259  446736 logs.go:123] Gathering logs for kube-scheduler [2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889] ...
	I1030 19:51:15.870293  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889"
	I1030 19:51:15.919968  446736 logs.go:123] Gathering logs for kube-controller-manager [cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c] ...
	I1030 19:51:15.919998  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c"
	I1030 19:51:15.976948  446736 logs.go:123] Gathering logs for storage-provisioner [822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4] ...
	I1030 19:51:15.976992  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4"
	I1030 19:51:16.014451  446736 logs.go:123] Gathering logs for storage-provisioner [de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef] ...
	I1030 19:51:16.014498  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef"
	I1030 19:51:16.047766  446736 logs.go:123] Gathering logs for dmesg ...
	I1030 19:51:16.047806  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:51:16.070539  446736 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:51:16.070567  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:51:18.677834  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:51:18.682862  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 200:
	ok
	I1030 19:51:18.684023  446736 api_server.go:141] control plane version: v1.31.2
	I1030 19:51:18.684046  446736 api_server.go:131] duration metric: took 3.896911154s to wait for apiserver health ...
	I1030 19:51:18.684055  446736 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 19:51:18.684083  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:51:18.684130  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:51:18.724815  446736 cri.go:89] found id: "990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4"
	I1030 19:51:18.724848  446736 cri.go:89] found id: ""
	I1030 19:51:18.724860  446736 logs.go:282] 1 containers: [990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4]
	I1030 19:51:18.724928  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:18.729332  446736 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:51:18.729391  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:51:18.767614  446736 cri.go:89] found id: "ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67"
	I1030 19:51:18.767642  446736 cri.go:89] found id: ""
	I1030 19:51:18.767651  446736 logs.go:282] 1 containers: [ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67]
	I1030 19:51:18.767705  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:18.772420  446736 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:51:18.772525  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:51:18.811459  446736 cri.go:89] found id: "1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd"
	I1030 19:51:18.811489  446736 cri.go:89] found id: ""
	I1030 19:51:18.811501  446736 logs.go:282] 1 containers: [1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd]
	I1030 19:51:18.811563  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:18.816844  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:51:18.816906  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:51:18.853273  446736 cri.go:89] found id: "2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889"
	I1030 19:51:18.853299  446736 cri.go:89] found id: ""
	I1030 19:51:18.853308  446736 logs.go:282] 1 containers: [2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889]
	I1030 19:51:18.853362  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:18.857867  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:51:18.857946  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:51:18.907021  446736 cri.go:89] found id: "0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9"
	I1030 19:51:18.907052  446736 cri.go:89] found id: ""
	I1030 19:51:18.907063  446736 logs.go:282] 1 containers: [0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9]
	I1030 19:51:18.907126  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:18.913432  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:51:18.913506  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:51:18.978047  446736 cri.go:89] found id: "cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c"
	I1030 19:51:18.978072  446736 cri.go:89] found id: ""
	I1030 19:51:18.978083  446736 logs.go:282] 1 containers: [cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c]
	I1030 19:51:18.978150  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:18.983158  446736 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:51:18.983241  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:51:19.018992  446736 cri.go:89] found id: ""
	I1030 19:51:19.019018  446736 logs.go:282] 0 containers: []
	W1030 19:51:19.019026  446736 logs.go:284] No container was found matching "kindnet"
	I1030 19:51:19.019035  446736 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1030 19:51:19.019094  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1030 19:51:19.053821  446736 cri.go:89] found id: "822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4"
	I1030 19:51:19.053850  446736 cri.go:89] found id: "de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef"
	I1030 19:51:19.053855  446736 cri.go:89] found id: ""
	I1030 19:51:19.053862  446736 logs.go:282] 2 containers: [822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4 de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef]
	I1030 19:51:19.053922  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:19.063575  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:19.069254  446736 logs.go:123] Gathering logs for kubelet ...
	I1030 19:51:19.069283  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:51:19.139641  446736 logs.go:123] Gathering logs for kube-apiserver [990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4] ...
	I1030 19:51:19.139700  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4"
	I1030 19:51:19.198020  446736 logs.go:123] Gathering logs for kube-scheduler [2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889] ...
	I1030 19:51:19.198059  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889"
	I1030 19:51:19.239685  446736 logs.go:123] Gathering logs for kube-proxy [0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9] ...
	I1030 19:51:19.239727  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9"
	I1030 19:51:19.281510  446736 logs.go:123] Gathering logs for storage-provisioner [de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef] ...
	I1030 19:51:19.281545  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef"
	I1030 19:51:19.317842  446736 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:51:19.317872  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:51:19.659645  446736 logs.go:123] Gathering logs for dmesg ...
	I1030 19:51:19.659697  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:51:19.678087  446736 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:51:19.678121  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:51:19.778504  446736 logs.go:123] Gathering logs for etcd [ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67] ...
	I1030 19:51:19.778540  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67"
	I1030 19:51:19.826520  446736 logs.go:123] Gathering logs for coredns [1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd] ...
	I1030 19:51:19.826552  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd"
	I1030 19:51:19.863959  446736 logs.go:123] Gathering logs for kube-controller-manager [cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c] ...
	I1030 19:51:19.864011  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c"
	I1030 19:51:19.915777  446736 logs.go:123] Gathering logs for storage-provisioner [822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4] ...
	I1030 19:51:19.915814  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4"
	I1030 19:51:19.953036  446736 logs.go:123] Gathering logs for container status ...
	I1030 19:51:19.953069  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:51:22.502129  446736 system_pods.go:59] 8 kube-system pods found
	I1030 19:51:22.502162  446736 system_pods.go:61] "coredns-7c65d6cfc9-6cdl4" [95a0a01a-b0ea-4e0c-ac44-b7f7a486e4e0] Running
	I1030 19:51:22.502167  446736 system_pods.go:61] "etcd-no-preload-960512" [481cc4bc-fe2e-48fd-a486-574daf879096] Running
	I1030 19:51:22.502172  446736 system_pods.go:61] "kube-apiserver-no-preload-960512" [3662715f-dd2b-4522-aed3-3857e1104b8c] Running
	I1030 19:51:22.502175  446736 system_pods.go:61] "kube-controller-manager-no-preload-960512" [cb6045a8-1703-4693-90bf-81c7c990cb38] Running
	I1030 19:51:22.502179  446736 system_pods.go:61] "kube-proxy-fxqqc" [58db3fab-21e3-41b7-99f9-46ba3081db97] Running
	I1030 19:51:22.502182  446736 system_pods.go:61] "kube-scheduler-no-preload-960512" [6e293c25-ba96-4213-b107-236a1b828918] Running
	I1030 19:51:22.502188  446736 system_pods.go:61] "metrics-server-6867b74b74-72bb5" [7734d879-b974-42fd-9610-7e81ee6cbc13] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:51:22.502193  446736 system_pods.go:61] "storage-provisioner" [d4637a77-26ab-4013-a705-08317c00dd3b] Running
	I1030 19:51:22.502201  446736 system_pods.go:74] duration metric: took 3.818141259s to wait for pod list to return data ...
	I1030 19:51:22.502209  446736 default_sa.go:34] waiting for default service account to be created ...
	I1030 19:51:22.504541  446736 default_sa.go:45] found service account: "default"
	I1030 19:51:22.504562  446736 default_sa.go:55] duration metric: took 2.346763ms for default service account to be created ...
	I1030 19:51:22.504570  446736 system_pods.go:116] waiting for k8s-apps to be running ...
	I1030 19:51:22.509016  446736 system_pods.go:86] 8 kube-system pods found
	I1030 19:51:22.509039  446736 system_pods.go:89] "coredns-7c65d6cfc9-6cdl4" [95a0a01a-b0ea-4e0c-ac44-b7f7a486e4e0] Running
	I1030 19:51:22.509044  446736 system_pods.go:89] "etcd-no-preload-960512" [481cc4bc-fe2e-48fd-a486-574daf879096] Running
	I1030 19:51:22.509048  446736 system_pods.go:89] "kube-apiserver-no-preload-960512" [3662715f-dd2b-4522-aed3-3857e1104b8c] Running
	I1030 19:51:22.509052  446736 system_pods.go:89] "kube-controller-manager-no-preload-960512" [cb6045a8-1703-4693-90bf-81c7c990cb38] Running
	I1030 19:51:22.509055  446736 system_pods.go:89] "kube-proxy-fxqqc" [58db3fab-21e3-41b7-99f9-46ba3081db97] Running
	I1030 19:51:22.509058  446736 system_pods.go:89] "kube-scheduler-no-preload-960512" [6e293c25-ba96-4213-b107-236a1b828918] Running
	I1030 19:51:22.509101  446736 system_pods.go:89] "metrics-server-6867b74b74-72bb5" [7734d879-b974-42fd-9610-7e81ee6cbc13] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:51:22.509112  446736 system_pods.go:89] "storage-provisioner" [d4637a77-26ab-4013-a705-08317c00dd3b] Running
	I1030 19:51:22.509119  446736 system_pods.go:126] duration metric: took 4.544102ms to wait for k8s-apps to be running ...
	I1030 19:51:22.509125  446736 system_svc.go:44] waiting for kubelet service to be running ....
	I1030 19:51:22.509172  446736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 19:51:22.524883  446736 system_svc.go:56] duration metric: took 15.747977ms WaitForService to wait for kubelet
	I1030 19:51:22.524906  446736 kubeadm.go:582] duration metric: took 4m24.835384605s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 19:51:22.524929  446736 node_conditions.go:102] verifying NodePressure condition ...
	I1030 19:51:22.528315  446736 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 19:51:22.528334  446736 node_conditions.go:123] node cpu capacity is 2
	I1030 19:51:22.528345  446736 node_conditions.go:105] duration metric: took 3.411421ms to run NodePressure ...
	I1030 19:51:22.528357  446736 start.go:241] waiting for startup goroutines ...
	I1030 19:51:22.528364  446736 start.go:246] waiting for cluster config update ...
	I1030 19:51:22.528374  446736 start.go:255] writing updated cluster config ...
	I1030 19:51:22.528621  446736 ssh_runner.go:195] Run: rm -f paused
	I1030 19:51:22.577143  446736 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1030 19:51:22.580061  446736 out.go:177] * Done! kubectl is now configured to use "no-preload-960512" cluster and "default" namespace by default
	I1030 19:52:15.582907  447486 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1030 19:52:15.583009  447486 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1030 19:52:15.584345  447486 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1030 19:52:15.584419  447486 kubeadm.go:310] [preflight] Running pre-flight checks
	I1030 19:52:15.584522  447486 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1030 19:52:15.584659  447486 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1030 19:52:15.584763  447486 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1030 19:52:15.584827  447486 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1030 19:52:15.586931  447486 out.go:235]   - Generating certificates and keys ...
	I1030 19:52:15.587016  447486 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1030 19:52:15.587074  447486 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1030 19:52:15.587145  447486 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1030 19:52:15.587198  447486 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1030 19:52:15.587271  447486 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1030 19:52:15.587339  447486 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1030 19:52:15.587402  447486 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1030 19:52:15.587455  447486 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1030 19:52:15.587517  447486 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1030 19:52:15.587577  447486 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1030 19:52:15.587608  447486 kubeadm.go:310] [certs] Using the existing "sa" key
	I1030 19:52:15.587682  447486 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1030 19:52:15.587759  447486 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1030 19:52:15.587846  447486 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1030 19:52:15.587924  447486 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1030 19:52:15.587988  447486 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1030 19:52:15.588076  447486 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1030 19:52:15.588148  447486 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1030 19:52:15.588180  447486 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1030 19:52:15.588267  447486 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1030 19:52:15.589722  447486 out.go:235]   - Booting up control plane ...
	I1030 19:52:15.589834  447486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1030 19:52:15.589932  447486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1030 19:52:15.590014  447486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1030 19:52:15.590128  447486 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1030 19:52:15.590285  447486 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1030 19:52:15.590336  447486 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1030 19:52:15.590388  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:52:15.590560  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:52:15.590642  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:52:15.590842  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:52:15.590946  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:52:15.591155  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:52:15.591253  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:52:15.591513  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:52:15.591609  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:52:15.591841  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:52:15.591855  447486 kubeadm.go:310] 
	I1030 19:52:15.591900  447486 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1030 19:52:15.591956  447486 kubeadm.go:310] 		timed out waiting for the condition
	I1030 19:52:15.591966  447486 kubeadm.go:310] 
	I1030 19:52:15.592008  447486 kubeadm.go:310] 	This error is likely caused by:
	I1030 19:52:15.592051  447486 kubeadm.go:310] 		- The kubelet is not running
	I1030 19:52:15.592192  447486 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1030 19:52:15.592204  447486 kubeadm.go:310] 
	I1030 19:52:15.592318  447486 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1030 19:52:15.592360  447486 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1030 19:52:15.592391  447486 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1030 19:52:15.592397  447486 kubeadm.go:310] 
	I1030 19:52:15.592511  447486 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1030 19:52:15.592592  447486 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1030 19:52:15.592600  447486 kubeadm.go:310] 
	I1030 19:52:15.592733  447486 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1030 19:52:15.592850  447486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1030 19:52:15.592959  447486 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1030 19:52:15.593059  447486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1030 19:52:15.593138  447486 kubeadm.go:310] 
	W1030 19:52:15.593236  447486 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1030 19:52:15.593289  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1030 19:52:16.049810  447486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 19:52:16.065820  447486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:52:16.076166  447486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:52:16.076192  447486 kubeadm.go:157] found existing configuration files:
	
	I1030 19:52:16.076241  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 19:52:16.085309  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:52:16.085380  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:52:16.094868  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 19:52:16.104343  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:52:16.104395  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:52:16.113939  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 19:52:16.122836  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:52:16.122885  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:52:16.132083  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 19:52:16.141441  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:52:16.141487  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1030 19:52:16.150710  447486 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1030 19:52:16.222070  447486 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1030 19:52:16.222183  447486 kubeadm.go:310] [preflight] Running pre-flight checks
	I1030 19:52:16.366061  447486 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1030 19:52:16.366194  447486 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1030 19:52:16.366352  447486 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1030 19:52:16.541086  447486 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1030 19:52:16.543200  447486 out.go:235]   - Generating certificates and keys ...
	I1030 19:52:16.543303  447486 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1030 19:52:16.543398  447486 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1030 19:52:16.543523  447486 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1030 19:52:16.543625  447486 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1030 19:52:16.543749  447486 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1030 19:52:16.543848  447486 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1030 19:52:16.543942  447486 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1030 19:52:16.544020  447486 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1030 19:52:16.544096  447486 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1030 19:52:16.544193  447486 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1030 19:52:16.544252  447486 kubeadm.go:310] [certs] Using the existing "sa" key
	I1030 19:52:16.544343  447486 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1030 19:52:16.637454  447486 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1030 19:52:16.829430  447486 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1030 19:52:16.985259  447486 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1030 19:52:17.072312  447486 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1030 19:52:17.092511  447486 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1030 19:52:17.093595  447486 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1030 19:52:17.093654  447486 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1030 19:52:17.228039  447486 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1030 19:52:17.229647  447486 out.go:235]   - Booting up control plane ...
	I1030 19:52:17.229766  447486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1030 19:52:17.237333  447486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1030 19:52:17.239644  447486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1030 19:52:17.239774  447486 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1030 19:52:17.241037  447486 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1030 19:52:57.243167  447486 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1030 19:52:57.243769  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:52:57.244072  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:53:02.244240  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:53:02.244563  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:53:12.244991  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:53:12.245293  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:53:32.246428  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:53:32.246697  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:54:12.247834  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:54:12.248150  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:54:12.248173  447486 kubeadm.go:310] 
	I1030 19:54:12.248226  447486 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1030 19:54:12.248308  447486 kubeadm.go:310] 		timed out waiting for the condition
	I1030 19:54:12.248336  447486 kubeadm.go:310] 
	I1030 19:54:12.248386  447486 kubeadm.go:310] 	This error is likely caused by:
	I1030 19:54:12.248449  447486 kubeadm.go:310] 		- The kubelet is not running
	I1030 19:54:12.248598  447486 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1030 19:54:12.248609  447486 kubeadm.go:310] 
	I1030 19:54:12.248747  447486 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1030 19:54:12.248811  447486 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1030 19:54:12.248867  447486 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1030 19:54:12.248876  447486 kubeadm.go:310] 
	I1030 19:54:12.249013  447486 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1030 19:54:12.249111  447486 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1030 19:54:12.249129  447486 kubeadm.go:310] 
	I1030 19:54:12.249280  447486 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1030 19:54:12.249447  447486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1030 19:54:12.249564  447486 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1030 19:54:12.249662  447486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1030 19:54:12.249708  447486 kubeadm.go:310] 
	I1030 19:54:12.249878  447486 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1030 19:54:12.250015  447486 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1030 19:54:12.250208  447486 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1030 19:54:12.250221  447486 kubeadm.go:394] duration metric: took 7m57.874179721s to StartCluster
	I1030 19:54:12.250311  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:54:12.250399  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:54:12.292692  447486 cri.go:89] found id: ""
	I1030 19:54:12.292749  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.292760  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:54:12.292770  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:54:12.292840  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:54:12.329792  447486 cri.go:89] found id: ""
	I1030 19:54:12.329825  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.329835  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:54:12.329843  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:54:12.329905  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:54:12.364661  447486 cri.go:89] found id: ""
	I1030 19:54:12.364693  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.364702  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:54:12.364709  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:54:12.364764  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:54:12.400842  447486 cri.go:89] found id: ""
	I1030 19:54:12.400870  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.400878  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:54:12.400885  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:54:12.400943  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:54:12.440135  447486 cri.go:89] found id: ""
	I1030 19:54:12.440164  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.440172  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:54:12.440178  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:54:12.440228  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:54:12.476365  447486 cri.go:89] found id: ""
	I1030 19:54:12.476403  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.476416  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:54:12.476425  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:54:12.476503  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:54:12.519669  447486 cri.go:89] found id: ""
	I1030 19:54:12.519702  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.519715  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:54:12.519724  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:54:12.519791  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:54:12.554180  447486 cri.go:89] found id: ""
	I1030 19:54:12.554218  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.554230  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:54:12.554244  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:54:12.554261  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:54:12.669617  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:54:12.669660  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:54:12.708361  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:54:12.708392  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:54:12.763103  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:54:12.763145  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:54:12.778676  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:54:12.778712  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:54:12.865694  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1030 19:54:12.865732  447486 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1030 19:54:12.865797  447486 out.go:270] * 
	W1030 19:54:12.865908  447486 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1030 19:54:12.865929  447486 out.go:270] * 
	W1030 19:54:12.867124  447486 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 19:54:12.871111  447486 out.go:201] 
	W1030 19:54:12.872534  447486 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1030 19:54:12.872591  447486 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1030 19:54:12.872616  447486 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1030 19:54:12.874145  447486 out.go:201] 
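
The run above ends with minikube's own remediation hints: inspect the kubelet with systemctl/journalctl and retry the start with --extra-config=kubelet.cgroup-driver=systemd (related issue #4172). Below is a minimal sketch of acting on those hints; the profile name default-k8s-diff-port-768989 is taken from this log, the commands are the ones quoted in the output above, and nothing in this report confirms that the cgroup-driver override resolves this particular failure.

	# Sketch only: kubelet checks and the retry suggested by the output above,
	# run against the profile named in this log. Effectiveness here is unverified.
	minikube -p default-k8s-diff-port-768989 ssh -- sudo systemctl status kubelet
	minikube -p default-k8s-diff-port-768989 ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 100

	# Retry the start with the cgroup driver override suggested above.
	minikube start -p default-k8s-diff-port-768989 --extra-config=kubelet.cgroup-driver=systemd
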
	
	
	==> CRI-O <==
	Oct 30 19:59:08 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 19:59:08.947754572Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318348947720763,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=496f36e8-45b7-4e73-b4f0-3212b15eab89 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 19:59:08 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 19:59:08.948512343Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7bf16ce9-451a-4a2a-afcf-bcf36729de3a name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 19:59:08 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 19:59:08.948583902Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7bf16ce9-451a-4a2a-afcf-bcf36729de3a name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 19:59:08 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 19:59:08.948782019Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034,PodSandboxId:9675a30e34cc5092451d45b848b57299c1581c8f6cc31126924ee1a698aeffe4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730317569588721723,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76805df9-1fbf-468d-a909-3b7c4f889e11,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9feb95ef951b0c048a9dac1a16f3e73c333c80795559f462f312c98fa790d25,PodSandboxId:b819629c91bdf3f182e9186b72a6b70768c85d131814f569b396b6c2b0532dd2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730317552403394466,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 82360fc1-575a-4dc5-86b6-54892c216d65,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2,PodSandboxId:23db14501b34e9e2871e2cdf47ed7566d0314e2c7f1fe448f7cd0fde972f8b84,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730317546502369327,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9w8m8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d285a845-e17e-4b87-837b-167dbd29f090,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6,PodSandboxId:c8cbceb7ff00c3459b78a4b3fcc781e513329f3abc6d66c8c9d95661a8cb0db0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730317538741541291,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tsr5q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60ad5830-1
638-4209-a2b3-ef78b1df3d34,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6,PodSandboxId:9675a30e34cc5092451d45b848b57299c1581c8f6cc31126924ee1a698aeffe4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730317538746409249,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76805df9-1fbf-468d-a909
-3b7c4f889e11,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34,PodSandboxId:3b907e7fb753fd91e80fe5ed64787842fcd779d64a891278222ad62576cb6b9f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730317534856221481,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-768989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: cf44a5e7947a9469929076700ac904d2,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf,PodSandboxId:0c31889e0a4babd9b6af9b4632d89838f8ee40f64ab91be3d2fb32707eb9b28a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730317534870620735,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-768989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 9e9660acdf7d90e392d828e83411e1b5,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5,PodSandboxId:c821469f94d41f3ce2f8baac7d57363e8e0a81d1f827896b1163ddf5fa11fa97,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730317534867443095,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-768989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e81a98548395acf8a88f8e2057e
b223,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f,PodSandboxId:1af13b544ca5a06b1f328cf6f9c19d4314b735fcecfc8cf957007cfff3a5d7d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730317534844606309,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-768989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e12c1bbb22b1fe080d147303dcbea
39,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7bf16ce9-451a-4a2a-afcf-bcf36729de3a name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 19:59:08 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 19:59:08.985562107Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=64518659-275f-4852-9865-a3d6a9852984 name=/runtime.v1.RuntimeService/Version
	Oct 30 19:59:08 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 19:59:08.985649464Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=64518659-275f-4852-9865-a3d6a9852984 name=/runtime.v1.RuntimeService/Version
	Oct 30 19:59:08 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 19:59:08.987863192Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d20a2f56-13db-483c-b6c6-b60921c739b9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 19:59:08 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 19:59:08.988403585Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318348988371961,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d20a2f56-13db-483c-b6c6-b60921c739b9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 19:59:08 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 19:59:08.989140338Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=843f9bcd-fd86-41a1-9b64-ddf3bab4550b name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 19:59:08 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 19:59:08.989193605Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=843f9bcd-fd86-41a1-9b64-ddf3bab4550b name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 19:59:08 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 19:59:08.989370911Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034,PodSandboxId:9675a30e34cc5092451d45b848b57299c1581c8f6cc31126924ee1a698aeffe4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730317569588721723,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76805df9-1fbf-468d-a909-3b7c4f889e11,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9feb95ef951b0c048a9dac1a16f3e73c333c80795559f462f312c98fa790d25,PodSandboxId:b819629c91bdf3f182e9186b72a6b70768c85d131814f569b396b6c2b0532dd2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730317552403394466,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 82360fc1-575a-4dc5-86b6-54892c216d65,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2,PodSandboxId:23db14501b34e9e2871e2cdf47ed7566d0314e2c7f1fe448f7cd0fde972f8b84,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730317546502369327,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9w8m8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d285a845-e17e-4b87-837b-167dbd29f090,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6,PodSandboxId:c8cbceb7ff00c3459b78a4b3fcc781e513329f3abc6d66c8c9d95661a8cb0db0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730317538741541291,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tsr5q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60ad5830-1
638-4209-a2b3-ef78b1df3d34,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6,PodSandboxId:9675a30e34cc5092451d45b848b57299c1581c8f6cc31126924ee1a698aeffe4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730317538746409249,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76805df9-1fbf-468d-a909
-3b7c4f889e11,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34,PodSandboxId:3b907e7fb753fd91e80fe5ed64787842fcd779d64a891278222ad62576cb6b9f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730317534856221481,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-768989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: cf44a5e7947a9469929076700ac904d2,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf,PodSandboxId:0c31889e0a4babd9b6af9b4632d89838f8ee40f64ab91be3d2fb32707eb9b28a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730317534870620735,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-768989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 9e9660acdf7d90e392d828e83411e1b5,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5,PodSandboxId:c821469f94d41f3ce2f8baac7d57363e8e0a81d1f827896b1163ddf5fa11fa97,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730317534867443095,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-768989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e81a98548395acf8a88f8e2057e
b223,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f,PodSandboxId:1af13b544ca5a06b1f328cf6f9c19d4314b735fcecfc8cf957007cfff3a5d7d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730317534844606309,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-768989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e12c1bbb22b1fe080d147303dcbea
39,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=843f9bcd-fd86-41a1-9b64-ddf3bab4550b name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 19:59:09 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 19:59:09.024741499Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9558d7aa-71f7-422d-9e27-8ffef6d36d05 name=/runtime.v1.RuntimeService/Version
	Oct 30 19:59:09 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 19:59:09.024835567Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9558d7aa-71f7-422d-9e27-8ffef6d36d05 name=/runtime.v1.RuntimeService/Version
	Oct 30 19:59:09 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 19:59:09.025988410Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=69307adf-4915-4643-b065-7c04b2389670 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 19:59:09 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 19:59:09.026732024Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318349026703900,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=69307adf-4915-4643-b065-7c04b2389670 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 19:59:09 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 19:59:09.027357510Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=477b3ed1-1a5a-4fa0-957b-43bd3582fd35 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 19:59:09 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 19:59:09.027451702Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=477b3ed1-1a5a-4fa0-957b-43bd3582fd35 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 19:59:09 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 19:59:09.027736133Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034,PodSandboxId:9675a30e34cc5092451d45b848b57299c1581c8f6cc31126924ee1a698aeffe4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730317569588721723,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76805df9-1fbf-468d-a909-3b7c4f889e11,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9feb95ef951b0c048a9dac1a16f3e73c333c80795559f462f312c98fa790d25,PodSandboxId:b819629c91bdf3f182e9186b72a6b70768c85d131814f569b396b6c2b0532dd2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730317552403394466,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 82360fc1-575a-4dc5-86b6-54892c216d65,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2,PodSandboxId:23db14501b34e9e2871e2cdf47ed7566d0314e2c7f1fe448f7cd0fde972f8b84,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730317546502369327,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9w8m8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d285a845-e17e-4b87-837b-167dbd29f090,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6,PodSandboxId:c8cbceb7ff00c3459b78a4b3fcc781e513329f3abc6d66c8c9d95661a8cb0db0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730317538741541291,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tsr5q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60ad5830-1
638-4209-a2b3-ef78b1df3d34,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6,PodSandboxId:9675a30e34cc5092451d45b848b57299c1581c8f6cc31126924ee1a698aeffe4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730317538746409249,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76805df9-1fbf-468d-a909
-3b7c4f889e11,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34,PodSandboxId:3b907e7fb753fd91e80fe5ed64787842fcd779d64a891278222ad62576cb6b9f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730317534856221481,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-768989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: cf44a5e7947a9469929076700ac904d2,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf,PodSandboxId:0c31889e0a4babd9b6af9b4632d89838f8ee40f64ab91be3d2fb32707eb9b28a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730317534870620735,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-768989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 9e9660acdf7d90e392d828e83411e1b5,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5,PodSandboxId:c821469f94d41f3ce2f8baac7d57363e8e0a81d1f827896b1163ddf5fa11fa97,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730317534867443095,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-768989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e81a98548395acf8a88f8e2057e
b223,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f,PodSandboxId:1af13b544ca5a06b1f328cf6f9c19d4314b735fcecfc8cf957007cfff3a5d7d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730317534844606309,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-768989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e12c1bbb22b1fe080d147303dcbea
39,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=477b3ed1-1a5a-4fa0-957b-43bd3582fd35 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 19:59:09 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 19:59:09.063328267Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=604e21b0-dd91-408b-b6e2-966c4954b79a name=/runtime.v1.RuntimeService/Version
	Oct 30 19:59:09 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 19:59:09.063412220Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=604e21b0-dd91-408b-b6e2-966c4954b79a name=/runtime.v1.RuntimeService/Version
	Oct 30 19:59:09 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 19:59:09.065001719Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c2272dea-72c6-4712-a228-477a7dc2aa9f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 19:59:09 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 19:59:09.065597897Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318349065573359,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c2272dea-72c6-4712-a228-477a7dc2aa9f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 19:59:09 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 19:59:09.066267943Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d4a943ed-5fa0-4119-96ee-c6244d57d34a name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 19:59:09 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 19:59:09.066333902Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d4a943ed-5fa0-4119-96ee-c6244d57d34a name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 19:59:09 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 19:59:09.066523981Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034,PodSandboxId:9675a30e34cc5092451d45b848b57299c1581c8f6cc31126924ee1a698aeffe4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730317569588721723,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76805df9-1fbf-468d-a909-3b7c4f889e11,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9feb95ef951b0c048a9dac1a16f3e73c333c80795559f462f312c98fa790d25,PodSandboxId:b819629c91bdf3f182e9186b72a6b70768c85d131814f569b396b6c2b0532dd2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730317552403394466,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 82360fc1-575a-4dc5-86b6-54892c216d65,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2,PodSandboxId:23db14501b34e9e2871e2cdf47ed7566d0314e2c7f1fe448f7cd0fde972f8b84,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730317546502369327,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9w8m8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d285a845-e17e-4b87-837b-167dbd29f090,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6,PodSandboxId:c8cbceb7ff00c3459b78a4b3fcc781e513329f3abc6d66c8c9d95661a8cb0db0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730317538741541291,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tsr5q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60ad5830-1
638-4209-a2b3-ef78b1df3d34,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6,PodSandboxId:9675a30e34cc5092451d45b848b57299c1581c8f6cc31126924ee1a698aeffe4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730317538746409249,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76805df9-1fbf-468d-a909
-3b7c4f889e11,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34,PodSandboxId:3b907e7fb753fd91e80fe5ed64787842fcd779d64a891278222ad62576cb6b9f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730317534856221481,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-768989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: cf44a5e7947a9469929076700ac904d2,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf,PodSandboxId:0c31889e0a4babd9b6af9b4632d89838f8ee40f64ab91be3d2fb32707eb9b28a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730317534870620735,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-768989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 9e9660acdf7d90e392d828e83411e1b5,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5,PodSandboxId:c821469f94d41f3ce2f8baac7d57363e8e0a81d1f827896b1163ddf5fa11fa97,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730317534867443095,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-768989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e81a98548395acf8a88f8e2057e
b223,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f,PodSandboxId:1af13b544ca5a06b1f328cf6f9c19d4314b735fcecfc8cf957007cfff3a5d7d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730317534844606309,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-768989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e12c1bbb22b1fe080d147303dcbea
39,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d4a943ed-5fa0-4119-96ee-c6244d57d34a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	60f936bfa2bb3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   9675a30e34cc5       storage-provisioner
	d9feb95ef951b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   b819629c91bdf       busybox
	87e42814a8c59       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Running             coredns                   1                   23db14501b34e       coredns-7c65d6cfc9-9w8m8
	8bb328b44b95e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   9675a30e34cc5       storage-provisioner
	2ce5d5edb0018       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      13 minutes ago      Running             kube-proxy                1                   c8cbceb7ff00c       kube-proxy-tsr5q
	0b3881e5bd442       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      13 minutes ago      Running             kube-scheduler            1                   0c31889e0a4ba       kube-scheduler-default-k8s-diff-port-768989
	a1c527b45070a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   c821469f94d41       etcd-default-k8s-diff-port-768989
	ef19f5c9edef4       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      13 minutes ago      Running             kube-controller-manager   1                   3b907e7fb753f       kube-controller-manager-default-k8s-diff-port-768989
	549c7d9c0a8b5       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      13 minutes ago      Running             kube-apiserver            1                   1af13b544ca5a       kube-apiserver-default-k8s-diff-port-768989
	
	
	==> coredns [87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:35848 - 31406 "HINFO IN 707585907035877535.584610179630346385. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.012564224s
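
The coredns section above is the kind of per-container output that the crictl commands quoted in the kubeadm message surface. A short sketch, assuming the truncated container ID shown in the section header and the cri-o socket path used elsewhere in this log:

	# Inside the guest (e.g. via 'minikube -p default-k8s-diff-port-768989 ssh'):
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Fetch logs for the coredns container listed above (truncated ID as shown in the header).
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs 87e42814a8c59
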
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-768989
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-768989
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0
	                    minikube.k8s.io/name=default-k8s-diff-port-768989
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_30T19_37_38_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Oct 2024 19:37:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-768989
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Oct 2024 19:59:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Oct 2024 19:56:19 +0000   Wed, 30 Oct 2024 19:37:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Oct 2024 19:56:19 +0000   Wed, 30 Oct 2024 19:37:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Oct 2024 19:56:19 +0000   Wed, 30 Oct 2024 19:37:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Oct 2024 19:56:19 +0000   Wed, 30 Oct 2024 19:45:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.92
	  Hostname:    default-k8s-diff-port-768989
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 59288b73c6724ec2bc5220c45d441063
	  System UUID:                59288b73-c672-4ec2-bc52-20c45d441063
	  Boot ID:                    d059d30a-cab2-4b0e-b3ca-96f6413350b3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-7c65d6cfc9-9w8m8                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-default-k8s-diff-port-768989                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-768989             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-768989    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-tsr5q                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-768989             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-6867b74b74-t85rd                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-768989 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-768989 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-768989 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-768989 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node default-k8s-diff-port-768989 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m                kubelet          Node default-k8s-diff-port-768989 status is now: NodeHasSufficientPID
	  Normal  NodeReady                21m                kubelet          Node default-k8s-diff-port-768989 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-768989 event: Registered Node default-k8s-diff-port-768989 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-768989 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-768989 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-768989 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-768989 event: Registered Node default-k8s-diff-port-768989 in Controller
	
	
	==> dmesg <==
	[Oct30 19:45] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000005] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051060] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040306] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.862411] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.429388] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.472505] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.572335] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.056368] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064755] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.184162] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.124119] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +0.293098] systemd-fstab-generator[706]: Ignoring "noauto" option for root device
	[  +4.221865] systemd-fstab-generator[798]: Ignoring "noauto" option for root device
	[  +1.949113] systemd-fstab-generator[920]: Ignoring "noauto" option for root device
	[  +0.057016] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.512540] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.514554] systemd-fstab-generator[1537]: Ignoring "noauto" option for root device
	[  +3.214803] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.351216] kauditd_printk_skb: 41 callbacks suppressed
	
	
	==> etcd [a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5] <==
	{"level":"info","ts":"2024-10-30T19:45:36.449999Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-30T19:45:36.450516Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-30T19:45:36.450578Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-30T19:45:52.676269Z","caller":"traceutil/trace.go:171","msg":"trace[2124873804] linearizableReadLoop","detail":"{readStateIndex:650; appliedIndex:649; }","duration":"266.788308ms","start":"2024-10-30T19:45:52.409467Z","end":"2024-10-30T19:45:52.676255Z","steps":["trace[2124873804] 'read index received'  (duration: 266.582009ms)","trace[2124873804] 'applied index is now lower than readState.Index'  (duration: 205.735µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-30T19:45:52.676341Z","caller":"traceutil/trace.go:171","msg":"trace[1502996149] transaction","detail":"{read_only:false; response_revision:615; number_of_response:1; }","duration":"282.964685ms","start":"2024-10-30T19:45:52.393358Z","end":"2024-10-30T19:45:52.676323Z","steps":["trace[1502996149] 'process raft request'  (duration: 282.798192ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-30T19:45:52.676515Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"267.012225ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-768989\" ","response":"range_response_count:1 size:6850"}
	{"level":"info","ts":"2024-10-30T19:45:52.676609Z","caller":"traceutil/trace.go:171","msg":"trace[356502950] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-768989; range_end:; response_count:1; response_revision:615; }","duration":"267.137634ms","start":"2024-10-30T19:45:52.409462Z","end":"2024-10-30T19:45:52.676600Z","steps":["trace[356502950] 'agreement among raft nodes before linearized reading'  (duration: 266.848202ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-30T19:45:54.146344Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"439.537651ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-30T19:45:54.147806Z","caller":"traceutil/trace.go:171","msg":"trace[121692673] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:615; }","duration":"441.01219ms","start":"2024-10-30T19:45:53.706778Z","end":"2024-10-30T19:45:54.147790Z","steps":["trace[121692673] 'range keys from in-memory index tree'  (duration: 439.526525ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-30T19:45:54.149493Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"748.062978ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11042143347694633640 > lease_revoke:<id:193d92deeeec5698>","response":"size:29"}
	{"level":"info","ts":"2024-10-30T19:45:54.151205Z","caller":"traceutil/trace.go:171","msg":"trace[1677619198] transaction","detail":"{read_only:false; response_revision:616; number_of_response:1; }","duration":"745.556174ms","start":"2024-10-30T19:45:53.405641Z","end":"2024-10-30T19:45:54.151197Z","steps":["trace[1677619198] 'process raft request'  (duration: 745.431964ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-30T19:45:54.151605Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-30T19:45:53.405622Z","time spent":"745.770073ms","remote":"127.0.0.1:51586","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":699,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/default/busybox.180352a5af25a734\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/busybox.180352a5af25a734\" value_size:632 lease:1818771310839857566 >> failure:<>"}
	{"level":"info","ts":"2024-10-30T19:45:54.151270Z","caller":"traceutil/trace.go:171","msg":"trace[1302764961] linearizableReadLoop","detail":"{readStateIndex:652; appliedIndex:650; }","duration":"1.246099969s","start":"2024-10-30T19:45:52.905163Z","end":"2024-10-30T19:45:54.151263Z","steps":["trace[1302764961] 'read index received'  (duration: 496.186356ms)","trace[1302764961] 'applied index is now lower than readState.Index'  (duration: 749.912501ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-30T19:45:54.151369Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.246197518s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-768989\" ","response":"range_response_count:1 size:6850"}
	{"level":"info","ts":"2024-10-30T19:45:54.151850Z","caller":"traceutil/trace.go:171","msg":"trace[1012351568] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-768989; range_end:; response_count:1; response_revision:616; }","duration":"1.246683399s","start":"2024-10-30T19:45:52.905158Z","end":"2024-10-30T19:45:54.151841Z","steps":["trace[1012351568] 'agreement among raft nodes before linearized reading'  (duration: 1.24612734s)"],"step_count":1}
	{"level":"warn","ts":"2024-10-30T19:45:54.151955Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-30T19:45:52.905040Z","time spent":"1.24690238s","remote":"127.0.0.1:51704","response type":"/etcdserverpb.KV/Range","request count":0,"request size":81,"response count":1,"response size":6874,"request content":"key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-768989\" "}
	{"level":"warn","ts":"2024-10-30T19:45:54.152164Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"925.367126ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-30T19:45:54.152571Z","caller":"traceutil/trace.go:171","msg":"trace[2142638365] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:616; }","duration":"925.775875ms","start":"2024-10-30T19:45:53.226787Z","end":"2024-10-30T19:45:54.152563Z","steps":["trace[2142638365] 'agreement among raft nodes before linearized reading'  (duration: 925.354847ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-30T19:45:54.152749Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-30T19:45:53.226743Z","time spent":"925.996546ms","remote":"127.0.0.1:51488","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-10-30T19:46:11.696225Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"271.633268ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-t85rd\" ","response":"range_response_count:1 size:4394"}
	{"level":"info","ts":"2024-10-30T19:46:11.696400Z","caller":"traceutil/trace.go:171","msg":"trace[135715792] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-6867b74b74-t85rd; range_end:; response_count:1; response_revision:638; }","duration":"271.857721ms","start":"2024-10-30T19:46:11.424526Z","end":"2024-10-30T19:46:11.696384Z","steps":["trace[135715792] 'range keys from in-memory index tree'  (duration: 271.497215ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-30T19:46:43.482520Z","caller":"traceutil/trace.go:171","msg":"trace[2067356973] transaction","detail":"{read_only:false; response_revision:664; number_of_response:1; }","duration":"170.245585ms","start":"2024-10-30T19:46:43.312242Z","end":"2024-10-30T19:46:43.482487Z","steps":["trace[2067356973] 'process raft request'  (duration: 169.819555ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-30T19:55:36.503501Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":862}
	{"level":"info","ts":"2024-10-30T19:55:36.514170Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":862,"took":"9.648473ms","hash":3742871559,"current-db-size-bytes":2813952,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":2813952,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2024-10-30T19:55:36.514259Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3742871559,"revision":862,"compact-revision":-1}
	
	
	==> kernel <==
	 19:59:09 up 13 min,  0 users,  load average: 0.28, 0.15, 0.10
	Linux default-k8s-diff-port-768989 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1030 19:55:38.883972       1 handler_proxy.go:99] no RequestInfo found in the context
	E1030 19:55:38.884004       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1030 19:55:38.885022       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1030 19:55:38.885238       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1030 19:56:38.885783       1 handler_proxy.go:99] no RequestInfo found in the context
	E1030 19:56:38.886041       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1030 19:56:38.886214       1 handler_proxy.go:99] no RequestInfo found in the context
	E1030 19:56:38.886276       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1030 19:56:38.887316       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1030 19:56:38.887396       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1030 19:58:38.888274       1 handler_proxy.go:99] no RequestInfo found in the context
	E1030 19:58:38.888450       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1030 19:58:38.888530       1 handler_proxy.go:99] no RequestInfo found in the context
	E1030 19:58:38.888552       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1030 19:58:38.889580       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1030 19:58:38.889687       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34] <==
	E1030 19:53:41.374302       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 19:53:41.950351       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 19:54:11.380021       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 19:54:11.957903       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 19:54:41.386755       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 19:54:41.966558       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 19:55:11.393256       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 19:55:11.974822       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 19:55:41.399629       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 19:55:41.982175       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 19:56:11.406940       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 19:56:11.989306       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1030 19:56:19.176995       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-768989"
	E1030 19:56:41.413350       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 19:56:41.996705       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1030 19:56:53.387415       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="348.817µs"
	I1030 19:57:04.379481       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="110.511µs"
	E1030 19:57:11.420800       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 19:57:12.006562       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 19:57:41.429812       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 19:57:42.013939       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 19:58:11.436028       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 19:58:12.021430       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 19:58:41.442488       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 19:58:42.030339       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1030 19:45:39.071059       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1030 19:45:39.079402       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.92"]
	E1030 19:45:39.079478       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1030 19:45:39.114435       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1030 19:45:39.114476       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1030 19:45:39.114504       1 server_linux.go:169] "Using iptables Proxier"
	I1030 19:45:39.116747       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1030 19:45:39.116953       1 server.go:483] "Version info" version="v1.31.2"
	I1030 19:45:39.116978       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1030 19:45:39.118385       1 config.go:199] "Starting service config controller"
	I1030 19:45:39.118567       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1030 19:45:39.118643       1 config.go:105] "Starting endpoint slice config controller"
	I1030 19:45:39.118665       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1030 19:45:39.119250       1 config.go:328] "Starting node config controller"
	I1030 19:45:39.119279       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1030 19:45:39.218800       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1030 19:45:39.218862       1 shared_informer.go:320] Caches are synced for service config
	I1030 19:45:39.219401       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf] <==
	I1030 19:45:35.719446       1 serving.go:386] Generated self-signed cert in-memory
	W1030 19:45:37.850131       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1030 19:45:37.850421       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1030 19:45:37.850508       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1030 19:45:37.850533       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1030 19:45:37.880620       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1030 19:45:37.880795       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1030 19:45:37.883348       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1030 19:45:37.883401       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1030 19:45:37.883938       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1030 19:45:37.883966       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1030 19:45:37.984209       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 30 19:57:59 default-k8s-diff-port-768989 kubelet[927]: E1030 19:57:59.366987     927 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-t85rd" podUID="8e162c99-2a94-4340-abe9-f1b312980444"
	Oct 30 19:58:03 default-k8s-diff-port-768989 kubelet[927]: E1030 19:58:03.551194     927 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318283550567408,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 19:58:03 default-k8s-diff-port-768989 kubelet[927]: E1030 19:58:03.551266     927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318283550567408,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 19:58:12 default-k8s-diff-port-768989 kubelet[927]: E1030 19:58:12.366214     927 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-t85rd" podUID="8e162c99-2a94-4340-abe9-f1b312980444"
	Oct 30 19:58:13 default-k8s-diff-port-768989 kubelet[927]: E1030 19:58:13.552791     927 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318293552466338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 19:58:13 default-k8s-diff-port-768989 kubelet[927]: E1030 19:58:13.552815     927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318293552466338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 19:58:23 default-k8s-diff-port-768989 kubelet[927]: E1030 19:58:23.554425     927 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318303554045529,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 19:58:23 default-k8s-diff-port-768989 kubelet[927]: E1030 19:58:23.554793     927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318303554045529,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 19:58:27 default-k8s-diff-port-768989 kubelet[927]: E1030 19:58:27.366275     927 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-t85rd" podUID="8e162c99-2a94-4340-abe9-f1b312980444"
	Oct 30 19:58:33 default-k8s-diff-port-768989 kubelet[927]: E1030 19:58:33.384561     927 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 30 19:58:33 default-k8s-diff-port-768989 kubelet[927]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 30 19:58:33 default-k8s-diff-port-768989 kubelet[927]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 30 19:58:33 default-k8s-diff-port-768989 kubelet[927]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 30 19:58:33 default-k8s-diff-port-768989 kubelet[927]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 30 19:58:33 default-k8s-diff-port-768989 kubelet[927]: E1030 19:58:33.556839     927 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318313556540090,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 19:58:33 default-k8s-diff-port-768989 kubelet[927]: E1030 19:58:33.556862     927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318313556540090,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 19:58:42 default-k8s-diff-port-768989 kubelet[927]: E1030 19:58:42.366507     927 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-t85rd" podUID="8e162c99-2a94-4340-abe9-f1b312980444"
	Oct 30 19:58:43 default-k8s-diff-port-768989 kubelet[927]: E1030 19:58:43.559460     927 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318323558795043,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 19:58:43 default-k8s-diff-port-768989 kubelet[927]: E1030 19:58:43.559504     927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318323558795043,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 19:58:53 default-k8s-diff-port-768989 kubelet[927]: E1030 19:58:53.560949     927 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318333560590371,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 19:58:53 default-k8s-diff-port-768989 kubelet[927]: E1030 19:58:53.561508     927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318333560590371,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 19:58:56 default-k8s-diff-port-768989 kubelet[927]: E1030 19:58:56.366235     927 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-t85rd" podUID="8e162c99-2a94-4340-abe9-f1b312980444"
	Oct 30 19:59:03 default-k8s-diff-port-768989 kubelet[927]: E1030 19:59:03.563913     927 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318343563393535,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 19:59:03 default-k8s-diff-port-768989 kubelet[927]: E1030 19:59:03.564240     927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318343563393535,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 19:59:08 default-k8s-diff-port-768989 kubelet[927]: E1030 19:59:08.366435     927 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-t85rd" podUID="8e162c99-2a94-4340-abe9-f1b312980444"
	
	
	==> storage-provisioner [60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034] <==
	I1030 19:46:09.700784       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1030 19:46:09.716404       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1030 19:46:09.716515       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1030 19:46:27.121344       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1030 19:46:27.121626       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-768989_d5517a10-acd3-49e0-9347-34a26e082a72!
	I1030 19:46:27.124572       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f3841af7-4910-4982-8166-6a6276fded3a", APIVersion:"v1", ResourceVersion:"646", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-768989_d5517a10-acd3-49e0-9347-34a26e082a72 became leader
	I1030 19:46:27.222974       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-768989_d5517a10-acd3-49e0-9347-34a26e082a72!
	
	
	==> storage-provisioner [8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6] <==
	I1030 19:45:38.938047       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1030 19:46:08.944041       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-768989 -n default-k8s-diff-port-768989
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-768989 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-t85rd
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-768989 describe pod metrics-server-6867b74b74-t85rd
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-768989 describe pod metrics-server-6867b74b74-t85rd: exit status 1 (78.654314ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-t85rd" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-768989 describe pod metrics-server-6867b74b74-t85rd: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.32s)

x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.3s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1030 19:51:11.429889  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/bridge-534248/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-042402 -n embed-certs-042402
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-10-30 20:00:04.418814174 +0000 UTC m=+5961.165997721
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-042402 -n embed-certs-042402
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-042402 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-042402 logs -n 25: (2.095363106s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-534248 sudo cat                              | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-534248 sudo                                  | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-534248 sudo                                  | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-534248 sudo                                  | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-534248 sudo find                             | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-534248 sudo crio                             | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-534248                                       | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	| delete  | -p                                                     | disable-driver-mounts-113740 | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | disable-driver-mounts-113740                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-768989 | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:37 UTC |
	|         | default-k8s-diff-port-768989                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-960512             | no-preload-960512            | jenkins | v1.34.0 | 30 Oct 24 19:37 UTC | 30 Oct 24 19:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-960512                                   | no-preload-960512            | jenkins | v1.34.0 | 30 Oct 24 19:37 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-768989  | default-k8s-diff-port-768989 | jenkins | v1.34.0 | 30 Oct 24 19:38 UTC | 30 Oct 24 19:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-768989 | jenkins | v1.34.0 | 30 Oct 24 19:38 UTC |                     |
	|         | default-k8s-diff-port-768989                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-042402            | embed-certs-042402           | jenkins | v1.34.0 | 30 Oct 24 19:38 UTC | 30 Oct 24 19:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-042402                                  | embed-certs-042402           | jenkins | v1.34.0 | 30 Oct 24 19:38 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-516975        | old-k8s-version-516975       | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-960512                  | no-preload-960512            | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-768989       | default-k8s-diff-port-768989 | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-960512                                   | no-preload-960512            | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC | 30 Oct 24 19:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-042402                 | embed-certs-042402           | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-768989 | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC | 30 Oct 24 19:50 UTC |
	|         | default-k8s-diff-port-768989                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-042402                                  | embed-certs-042402           | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC | 30 Oct 24 19:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-516975                              | old-k8s-version-516975       | jenkins | v1.34.0 | 30 Oct 24 19:42 UTC | 30 Oct 24 19:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-516975             | old-k8s-version-516975       | jenkins | v1.34.0 | 30 Oct 24 19:42 UTC | 30 Oct 24 19:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-516975                              | old-k8s-version-516975       | jenkins | v1.34.0 | 30 Oct 24 19:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/30 19:42:11
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1030 19:42:11.799298  447486 out.go:345] Setting OutFile to fd 1 ...
	I1030 19:42:11.799434  447486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 19:42:11.799444  447486 out.go:358] Setting ErrFile to fd 2...
	I1030 19:42:11.799448  447486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 19:42:11.799628  447486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19883-381834/.minikube/bin
	I1030 19:42:11.800193  447486 out.go:352] Setting JSON to false
	I1030 19:42:11.801205  447486 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":12275,"bootTime":1730305057,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1030 19:42:11.801318  447486 start.go:139] virtualization: kvm guest
	I1030 19:42:11.803677  447486 out.go:177] * [old-k8s-version-516975] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1030 19:42:11.805274  447486 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 19:42:11.805300  447486 notify.go:220] Checking for updates...
	I1030 19:42:11.808043  447486 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 19:42:11.809440  447486 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 19:42:11.810604  447486 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 19:42:11.811774  447486 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1030 19:42:11.812958  447486 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 19:42:11.814552  447486 config.go:182] Loaded profile config "old-k8s-version-516975": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1030 19:42:11.814994  447486 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:42:11.815077  447486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:42:11.830315  447486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42249
	I1030 19:42:11.830795  447486 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:42:11.831345  447486 main.go:141] libmachine: Using API Version  1
	I1030 19:42:11.831365  447486 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:42:11.831692  447486 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:42:11.831869  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:42:11.833718  447486 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1030 19:42:11.835019  447486 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 19:42:11.835371  447486 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:42:11.835416  447486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:42:11.850097  447486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33199
	I1030 19:42:11.850532  447486 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:42:11.850964  447486 main.go:141] libmachine: Using API Version  1
	I1030 19:42:11.850978  447486 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:42:11.851321  447486 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:42:11.851541  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:42:11.886920  447486 out.go:177] * Using the kvm2 driver based on existing profile
	I1030 19:42:11.888376  447486 start.go:297] selected driver: kvm2
	I1030 19:42:11.888392  447486 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-516975 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-516975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:42:11.888538  447486 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 19:42:11.889472  447486 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 19:42:11.889560  447486 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19883-381834/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1030 19:42:11.904007  447486 install.go:137] /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1030 19:42:11.904405  447486 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 19:42:11.904443  447486 cni.go:84] Creating CNI manager for ""
	I1030 19:42:11.904494  447486 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:42:11.904549  447486 start.go:340] cluster config:
	{Name:old-k8s-version-516975 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-516975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:42:11.904661  447486 iso.go:125] acquiring lock: {Name:mk4a5115b605a7a5e9069193daa664d721189792 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 19:42:11.907302  447486 out.go:177] * Starting "old-k8s-version-516975" primary control-plane node in "old-k8s-version-516975" cluster
	I1030 19:42:10.622770  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:11.908430  447486 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1030 19:42:11.908474  447486 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1030 19:42:11.908485  447486 cache.go:56] Caching tarball of preloaded images
	I1030 19:42:11.908564  447486 preload.go:172] Found /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1030 19:42:11.908575  447486 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1030 19:42:11.908666  447486 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/config.json ...
	I1030 19:42:11.908832  447486 start.go:360] acquireMachinesLock for old-k8s-version-516975: {Name:mk35e25f53fa8cfadb39ca0ecdccfc2b3fbe845b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 19:42:16.702732  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:19.774825  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:25.854777  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:28.926846  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:35.006934  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:38.078752  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:44.158848  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:47.230843  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:53.310763  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:56.382772  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:02.462818  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:05.534754  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:11.614801  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:14.686762  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:20.766767  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:23.838853  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:29.918782  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:32.990752  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:39.070771  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:42.142716  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:48.222814  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:51.294775  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:57.374780  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:00.446825  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:06.526810  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:09.598813  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:15.678770  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:18.750751  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:24.830814  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:27.902810  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:33.982759  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:37.054791  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:43.134706  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:46.206802  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:52.286830  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:55.358809  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:45:01.438753  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:45:04.510854  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:45:07.515699  446887 start.go:364] duration metric: took 4m29.000646378s to acquireMachinesLock for "default-k8s-diff-port-768989"
	I1030 19:45:07.515764  446887 start.go:96] Skipping create...Using existing machine configuration
	I1030 19:45:07.515773  446887 fix.go:54] fixHost starting: 
	I1030 19:45:07.516191  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:07.516238  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:07.532374  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35177
	I1030 19:45:07.532907  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:07.533433  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:07.533459  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:07.533790  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:07.534016  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:07.534220  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetState
	I1030 19:45:07.535802  446887 fix.go:112] recreateIfNeeded on default-k8s-diff-port-768989: state=Stopped err=<nil>
	I1030 19:45:07.535842  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	W1030 19:45:07.536016  446887 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 19:45:07.537809  446887 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-768989" ...
	I1030 19:45:07.539184  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Start
	I1030 19:45:07.539361  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Ensuring networks are active...
	I1030 19:45:07.540025  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Ensuring network default is active
	I1030 19:45:07.540408  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Ensuring network mk-default-k8s-diff-port-768989 is active
	I1030 19:45:07.540867  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Getting domain xml...
	I1030 19:45:07.541489  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Creating domain...
	I1030 19:45:07.512810  446736 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 19:45:07.512848  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetMachineName
	I1030 19:45:07.513191  446736 buildroot.go:166] provisioning hostname "no-preload-960512"
	I1030 19:45:07.513223  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetMachineName
	I1030 19:45:07.513458  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:45:07.515538  446736 machine.go:96] duration metric: took 4m37.420773403s to provisionDockerMachine
	I1030 19:45:07.515594  446736 fix.go:56] duration metric: took 4m37.443968478s for fixHost
	I1030 19:45:07.515600  446736 start.go:83] releasing machines lock for "no-preload-960512", held for 4m37.443992524s
	W1030 19:45:07.515625  446736 start.go:714] error starting host: provision: host is not running
	W1030 19:45:07.515753  446736 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1030 19:45:07.515763  446736 start.go:729] Will try again in 5 seconds ...
	I1030 19:45:08.756310  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting to get IP...
	I1030 19:45:08.757242  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:08.757624  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:08.757747  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:08.757629  448092 retry.go:31] will retry after 202.103853ms: waiting for machine to come up
	I1030 19:45:08.961147  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:08.961660  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:08.961685  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:08.961606  448092 retry.go:31] will retry after 243.456761ms: waiting for machine to come up
	I1030 19:45:09.207134  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:09.207539  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:09.207582  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:09.207493  448092 retry.go:31] will retry after 375.017051ms: waiting for machine to come up
	I1030 19:45:09.584058  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:09.584428  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:09.584462  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:09.584373  448092 retry.go:31] will retry after 552.476692ms: waiting for machine to come up
	I1030 19:45:10.137989  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:10.138421  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:10.138449  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:10.138358  448092 retry.go:31] will retry after 560.865483ms: waiting for machine to come up
	I1030 19:45:10.700603  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:10.700968  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:10.700996  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:10.700920  448092 retry.go:31] will retry after 680.400693ms: waiting for machine to come up
	I1030 19:45:11.382861  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:11.383336  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:11.383362  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:11.383274  448092 retry.go:31] will retry after 787.136113ms: waiting for machine to come up
	I1030 19:45:12.171550  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:12.171910  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:12.171938  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:12.171853  448092 retry.go:31] will retry after 1.176474969s: waiting for machine to come up
	I1030 19:45:13.349617  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:13.350080  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:13.350114  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:13.350042  448092 retry.go:31] will retry after 1.211573437s: waiting for machine to come up
	I1030 19:45:12.517265  446736 start.go:360] acquireMachinesLock for no-preload-960512: {Name:mk35e25f53fa8cfadb39ca0ecdccfc2b3fbe845b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 19:45:14.563397  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:14.563782  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:14.563805  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:14.563749  448092 retry.go:31] will retry after 1.625938777s: waiting for machine to come up
	I1030 19:45:16.191798  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:16.192226  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:16.192255  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:16.192188  448092 retry.go:31] will retry after 2.442949682s: waiting for machine to come up
	I1030 19:45:18.636342  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:18.636768  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:18.636812  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:18.636748  448092 retry.go:31] will retry after 2.48415211s: waiting for machine to come up
	I1030 19:45:21.124407  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:21.124892  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:21.124919  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:21.124843  448092 retry.go:31] will retry after 3.392637796s: waiting for machine to come up
	I1030 19:45:25.815539  446965 start.go:364] duration metric: took 4m42.694254153s to acquireMachinesLock for "embed-certs-042402"
	I1030 19:45:25.815623  446965 start.go:96] Skipping create...Using existing machine configuration
	I1030 19:45:25.815635  446965 fix.go:54] fixHost starting: 
	I1030 19:45:25.816068  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:25.816232  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:25.833218  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37241
	I1030 19:45:25.833610  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:25.834159  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:45:25.834191  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:25.834567  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:25.834777  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:45:25.834920  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetState
	I1030 19:45:25.836507  446965 fix.go:112] recreateIfNeeded on embed-certs-042402: state=Stopped err=<nil>
	I1030 19:45:25.836532  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	W1030 19:45:25.836711  446965 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 19:45:25.839078  446965 out.go:177] * Restarting existing kvm2 VM for "embed-certs-042402" ...
	I1030 19:45:24.519725  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.520072  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Found IP for machine: 192.168.39.92
	I1030 19:45:24.520091  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Reserving static IP address...
	I1030 19:45:24.520113  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has current primary IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.520507  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-768989", mac: "52:54:00:98:b1:55", ip: "192.168.39.92"} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:24.520521  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Reserved static IP address: 192.168.39.92
	I1030 19:45:24.520535  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | skip adding static IP to network mk-default-k8s-diff-port-768989 - found existing host DHCP lease matching {name: "default-k8s-diff-port-768989", mac: "52:54:00:98:b1:55", ip: "192.168.39.92"}
	I1030 19:45:24.520545  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for SSH to be available...
	I1030 19:45:24.520560  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | Getting to WaitForSSH function...
	I1030 19:45:24.522776  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.523095  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:24.523127  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.523209  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | Using SSH client type: external
	I1030 19:45:24.523229  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa (-rw-------)
	I1030 19:45:24.523262  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.92 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 19:45:24.523283  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | About to run SSH command:
	I1030 19:45:24.523298  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | exit 0
	I1030 19:45:24.646297  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | SSH cmd err, output: <nil>: 
	I1030 19:45:24.646826  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetConfigRaw
	I1030 19:45:24.647589  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetIP
	I1030 19:45:24.650093  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.650532  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:24.650564  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.650790  446887 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/config.json ...
	I1030 19:45:24.650984  446887 machine.go:93] provisionDockerMachine start ...
	I1030 19:45:24.651005  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:24.651232  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:24.653396  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.653751  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:24.653781  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.653889  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:24.654084  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:24.654263  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:24.654449  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:24.654677  446887 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:24.654922  446887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1030 19:45:24.654935  446887 main.go:141] libmachine: About to run SSH command:
	hostname
	I1030 19:45:24.762586  446887 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1030 19:45:24.762621  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetMachineName
	I1030 19:45:24.762898  446887 buildroot.go:166] provisioning hostname "default-k8s-diff-port-768989"
	I1030 19:45:24.762936  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetMachineName
	I1030 19:45:24.763250  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:24.765937  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.766265  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:24.766289  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.766398  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:24.766599  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:24.766762  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:24.766920  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:24.767087  446887 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:24.767257  446887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1030 19:45:24.767269  446887 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-768989 && echo "default-k8s-diff-port-768989" | sudo tee /etc/hostname
	I1030 19:45:24.888742  446887 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-768989
	
	I1030 19:45:24.888771  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:24.891326  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.891638  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:24.891691  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.891804  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:24.892018  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:24.892154  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:24.892281  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:24.892498  446887 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:24.892692  446887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1030 19:45:24.892716  446887 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-768989' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-768989/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-768989' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 19:45:25.012173  446887 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 19:45:25.012214  446887 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 19:45:25.012240  446887 buildroot.go:174] setting up certificates
	I1030 19:45:25.012250  446887 provision.go:84] configureAuth start
	I1030 19:45:25.012280  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetMachineName
	I1030 19:45:25.012598  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetIP
	I1030 19:45:25.015106  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.015430  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.015458  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.015629  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:25.017810  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.018099  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.018136  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.018230  446887 provision.go:143] copyHostCerts
	I1030 19:45:25.018322  446887 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem, removing ...
	I1030 19:45:25.018334  446887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 19:45:25.018401  446887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 19:45:25.018553  446887 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem, removing ...
	I1030 19:45:25.018566  446887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 19:45:25.018634  446887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 19:45:25.018716  446887 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem, removing ...
	I1030 19:45:25.018724  446887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 19:45:25.018748  446887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 19:45:25.018798  446887 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-768989 san=[127.0.0.1 192.168.39.92 default-k8s-diff-port-768989 localhost minikube]
	I1030 19:45:25.188186  446887 provision.go:177] copyRemoteCerts
	I1030 19:45:25.188246  446887 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 19:45:25.188285  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:25.190995  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.191320  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.191344  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.191525  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:25.191718  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.191875  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:25.191991  446887 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa Username:docker}
	I1030 19:45:25.277273  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1030 19:45:25.300302  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1030 19:45:25.322919  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 19:45:25.347214  446887 provision.go:87] duration metric: took 334.947897ms to configureAuth
	I1030 19:45:25.347246  446887 buildroot.go:189] setting minikube options for container-runtime
	I1030 19:45:25.347432  446887 config.go:182] Loaded profile config "default-k8s-diff-port-768989": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:45:25.347510  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:25.349988  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.350294  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.350324  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.350500  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:25.350704  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.350836  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.351015  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:25.351210  446887 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:25.351421  446887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1030 19:45:25.351436  446887 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 19:45:25.576481  446887 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 19:45:25.576509  446887 machine.go:96] duration metric: took 925.509257ms to provisionDockerMachine
	I1030 19:45:25.576525  446887 start.go:293] postStartSetup for "default-k8s-diff-port-768989" (driver="kvm2")
	I1030 19:45:25.576562  446887 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 19:45:25.576589  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:25.576923  446887 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 19:45:25.576951  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:25.579498  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.579825  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.579841  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.579980  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:25.580151  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.580320  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:25.580453  446887 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa Username:docker}
	I1030 19:45:25.665032  446887 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 19:45:25.669402  446887 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 19:45:25.669430  446887 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 19:45:25.669500  446887 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 19:45:25.669573  446887 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 19:45:25.669665  446887 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 19:45:25.679070  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:45:25.703131  446887 start.go:296] duration metric: took 126.586543ms for postStartSetup
	I1030 19:45:25.703194  446887 fix.go:56] duration metric: took 18.187420989s for fixHost
	I1030 19:45:25.703217  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:25.705911  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.706365  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.706396  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.706609  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:25.706800  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.706944  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.707052  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:25.707188  446887 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:25.707428  446887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1030 19:45:25.707443  446887 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 19:45:25.815370  446887 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730317525.786848764
	
	I1030 19:45:25.815406  446887 fix.go:216] guest clock: 1730317525.786848764
	I1030 19:45:25.815414  446887 fix.go:229] Guest: 2024-10-30 19:45:25.786848764 +0000 UTC Remote: 2024-10-30 19:45:25.703198163 +0000 UTC m=+287.327380555 (delta=83.650601ms)
	I1030 19:45:25.815439  446887 fix.go:200] guest clock delta is within tolerance: 83.650601ms
	I1030 19:45:25.815445  446887 start.go:83] releasing machines lock for "default-k8s-diff-port-768989", held for 18.299702226s
	I1030 19:45:25.815467  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:25.815737  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetIP
	I1030 19:45:25.818508  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.818851  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.818889  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.818987  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:25.819477  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:25.819671  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:25.819808  446887 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 19:45:25.819862  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:25.819900  446887 ssh_runner.go:195] Run: cat /version.json
	I1030 19:45:25.819930  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:25.822372  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.822725  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.822754  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.822774  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.822887  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:25.823109  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.823145  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.823168  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.823330  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:25.823429  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:25.823506  446887 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa Username:docker}
	I1030 19:45:25.823605  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.823758  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:25.823880  446887 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa Username:docker}
	I1030 19:45:25.903488  446887 ssh_runner.go:195] Run: systemctl --version
	I1030 19:45:25.931046  446887 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 19:45:26.077178  446887 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 19:45:26.084282  446887 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 19:45:26.084358  446887 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 19:45:26.100869  446887 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 19:45:26.100893  446887 start.go:495] detecting cgroup driver to use...
	I1030 19:45:26.100984  446887 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 19:45:26.117006  446887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 19:45:26.130102  446887 docker.go:217] disabling cri-docker service (if available) ...
	I1030 19:45:26.130184  446887 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 19:45:26.148540  446887 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 19:45:26.163003  446887 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 19:45:26.286433  446887 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 19:45:26.444862  446887 docker.go:233] disabling docker service ...
	I1030 19:45:26.444931  446887 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 19:45:26.460606  446887 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 19:45:26.477159  446887 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 19:45:26.600212  446887 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 19:45:26.725587  446887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 19:45:26.741934  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 19:45:26.761815  446887 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1030 19:45:26.761872  446887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:26.772368  446887 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 19:45:26.772422  446887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:26.784279  446887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:26.795403  446887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:26.806323  446887 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 19:45:26.821929  446887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:26.836574  446887 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:26.857305  446887 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:26.868135  446887 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 19:45:26.878058  446887 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 19:45:26.878138  446887 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 19:45:26.891979  446887 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 19:45:26.902181  446887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:45:27.021858  446887 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1030 19:45:27.118890  446887 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 19:45:27.118985  446887 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 19:45:27.125407  446887 start.go:563] Will wait 60s for crictl version
	I1030 19:45:27.125472  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:45:27.129507  446887 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 19:45:27.176630  446887 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1030 19:45:27.176739  446887 ssh_runner.go:195] Run: crio --version
	I1030 19:45:27.205818  446887 ssh_runner.go:195] Run: crio --version
	I1030 19:45:27.236431  446887 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1030 19:45:25.840689  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Start
	I1030 19:45:25.840860  446965 main.go:141] libmachine: (embed-certs-042402) Ensuring networks are active...
	I1030 19:45:25.841604  446965 main.go:141] libmachine: (embed-certs-042402) Ensuring network default is active
	I1030 19:45:25.841928  446965 main.go:141] libmachine: (embed-certs-042402) Ensuring network mk-embed-certs-042402 is active
	I1030 19:45:25.842443  446965 main.go:141] libmachine: (embed-certs-042402) Getting domain xml...
	I1030 19:45:25.843267  446965 main.go:141] libmachine: (embed-certs-042402) Creating domain...
	I1030 19:45:27.094878  446965 main.go:141] libmachine: (embed-certs-042402) Waiting to get IP...
	I1030 19:45:27.095705  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:27.096101  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:27.096166  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:27.096079  448226 retry.go:31] will retry after 190.217394ms: waiting for machine to come up
	I1030 19:45:27.287473  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:27.287940  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:27.287966  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:27.287899  448226 retry.go:31] will retry after 365.943545ms: waiting for machine to come up
	I1030 19:45:27.655952  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:27.656374  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:27.656425  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:27.656343  448226 retry.go:31] will retry after 345.369581ms: waiting for machine to come up
	I1030 19:45:28.003856  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:28.004367  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:28.004398  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:28.004319  448226 retry.go:31] will retry after 609.6218ms: waiting for machine to come up
	I1030 19:45:27.237629  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetIP
	I1030 19:45:27.240387  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:27.240733  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:27.240779  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:27.240995  446887 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1030 19:45:27.245263  446887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:45:27.261305  446887 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-768989 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-768989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1030 19:45:27.261440  446887 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 19:45:27.261489  446887 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:45:27.301593  446887 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1030 19:45:27.301650  446887 ssh_runner.go:195] Run: which lz4
	I1030 19:45:27.305829  446887 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1030 19:45:27.310384  446887 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1030 19:45:27.310413  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1030 19:45:28.615219  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:28.615769  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:28.615795  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:28.615716  448226 retry.go:31] will retry after 672.090411ms: waiting for machine to come up
	I1030 19:45:29.289646  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:29.290179  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:29.290216  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:29.290105  448226 retry.go:31] will retry after 865.239242ms: waiting for machine to come up
	I1030 19:45:30.157223  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:30.157650  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:30.157679  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:30.157616  448226 retry.go:31] will retry after 833.557181ms: waiting for machine to come up
	I1030 19:45:30.993139  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:30.993663  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:30.993720  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:30.993625  448226 retry.go:31] will retry after 989.333841ms: waiting for machine to come up
	I1030 19:45:31.983978  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:31.984498  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:31.984546  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:31.984443  448226 retry.go:31] will retry after 1.534311856s: waiting for machine to come up
	I1030 19:45:28.730765  446887 crio.go:462] duration metric: took 1.424975563s to copy over tarball
	I1030 19:45:28.730868  446887 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1030 19:45:30.907494  446887 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.1765829s)
	I1030 19:45:30.907536  446887 crio.go:469] duration metric: took 2.176738354s to extract the tarball
	I1030 19:45:30.907546  446887 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1030 19:45:30.944242  446887 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:45:30.986812  446887 crio.go:514] all images are preloaded for cri-o runtime.
	I1030 19:45:30.986839  446887 cache_images.go:84] Images are preloaded, skipping loading
	I1030 19:45:30.986872  446887 kubeadm.go:934] updating node { 192.168.39.92 8444 v1.31.2 crio true true} ...
	I1030 19:45:30.987042  446887 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-768989 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.92
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-768989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1030 19:45:30.987145  446887 ssh_runner.go:195] Run: crio config
	I1030 19:45:31.037466  446887 cni.go:84] Creating CNI manager for ""
	I1030 19:45:31.037496  446887 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:45:31.037511  446887 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1030 19:45:31.037544  446887 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.92 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-768989 NodeName:default-k8s-diff-port-768989 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.92"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.92 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1030 19:45:31.037735  446887 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.92
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-768989"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.92"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.92"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1030 19:45:31.037815  446887 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1030 19:45:31.047808  446887 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 19:45:31.047885  446887 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1030 19:45:31.057074  446887 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1030 19:45:31.073022  446887 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 19:45:31.088919  446887 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I1030 19:45:31.105357  446887 ssh_runner.go:195] Run: grep 192.168.39.92	control-plane.minikube.internal$ /etc/hosts
	I1030 19:45:31.109207  446887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.92	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:45:31.121329  446887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:45:31.234078  446887 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:45:31.251028  446887 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989 for IP: 192.168.39.92
	I1030 19:45:31.251057  446887 certs.go:194] generating shared ca certs ...
	I1030 19:45:31.251080  446887 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:45:31.251287  446887 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 19:45:31.251342  446887 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 19:45:31.251354  446887 certs.go:256] generating profile certs ...
	I1030 19:45:31.251480  446887 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/client.key
	I1030 19:45:31.251567  446887 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/apiserver.key.eeeafde8
	I1030 19:45:31.251620  446887 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/proxy-client.key
	I1030 19:45:31.251788  446887 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem (1338 bytes)
	W1030 19:45:31.251834  446887 certs.go:480] ignoring /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144_empty.pem, impossibly tiny 0 bytes
	I1030 19:45:31.251848  446887 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 19:45:31.251888  446887 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 19:45:31.251931  446887 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 19:45:31.251963  446887 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 19:45:31.252024  446887 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:45:31.253127  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 19:45:31.293822  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 19:45:31.334804  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 19:45:31.366955  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 19:45:31.396042  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1030 19:45:31.428748  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I1030 19:45:31.452866  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 19:45:31.476407  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1030 19:45:31.500375  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem --> /usr/share/ca-certificates/389144.pem (1338 bytes)
	I1030 19:45:31.523909  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /usr/share/ca-certificates/3891442.pem (1708 bytes)
	I1030 19:45:31.547532  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 19:45:31.571163  446887 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1030 19:45:31.587969  446887 ssh_runner.go:195] Run: openssl version
	I1030 19:45:31.593866  446887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 19:45:31.604538  446887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:45:31.609348  446887 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:45:31.609419  446887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:45:31.615446  446887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 19:45:31.626640  446887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/389144.pem && ln -fs /usr/share/ca-certificates/389144.pem /etc/ssl/certs/389144.pem"
	I1030 19:45:31.640948  446887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/389144.pem
	I1030 19:45:31.646702  446887 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 19:45:31.646751  446887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/389144.pem
	I1030 19:45:31.654365  446887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/389144.pem /etc/ssl/certs/51391683.0"
	I1030 19:45:31.668538  446887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3891442.pem && ln -fs /usr/share/ca-certificates/3891442.pem /etc/ssl/certs/3891442.pem"
	I1030 19:45:31.679201  446887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3891442.pem
	I1030 19:45:31.683631  446887 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 19:45:31.683693  446887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3891442.pem
	I1030 19:45:31.689362  446887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3891442.pem /etc/ssl/certs/3ec20f2e.0"
	I1030 19:45:31.699804  446887 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 19:45:31.704445  446887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1030 19:45:31.710558  446887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1030 19:45:31.718563  446887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1030 19:45:31.724745  446887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1030 19:45:31.731125  446887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1030 19:45:31.736828  446887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1030 19:45:31.742434  446887 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-768989 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-768989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:45:31.742604  446887 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1030 19:45:31.742654  446887 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:45:31.779319  446887 cri.go:89] found id: ""
	I1030 19:45:31.779416  446887 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1030 19:45:31.789556  446887 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1030 19:45:31.789576  446887 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1030 19:45:31.789622  446887 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1030 19:45:31.799817  446887 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1030 19:45:31.800824  446887 kubeconfig.go:125] found "default-k8s-diff-port-768989" server: "https://192.168.39.92:8444"
	I1030 19:45:31.803207  446887 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1030 19:45:31.812876  446887 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.92
	I1030 19:45:31.812909  446887 kubeadm.go:1160] stopping kube-system containers ...
	I1030 19:45:31.812924  446887 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1030 19:45:31.812984  446887 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:45:31.858070  446887 cri.go:89] found id: ""
	I1030 19:45:31.858174  446887 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1030 19:45:31.874923  446887 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:45:31.885243  446887 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:45:31.885275  446887 kubeadm.go:157] found existing configuration files:
	
	I1030 19:45:31.885321  446887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1030 19:45:31.894394  446887 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:45:31.894453  446887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:45:31.903760  446887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1030 19:45:31.912344  446887 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:45:31.912410  446887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:45:31.921458  446887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1030 19:45:31.930426  446887 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:45:31.930499  446887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:45:31.940008  446887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1030 19:45:31.949578  446887 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:45:31.949645  446887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1030 19:45:31.959022  446887 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 19:45:31.968457  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:32.069017  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:32.985574  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:33.191887  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:33.273266  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:33.400584  446887 api_server.go:52] waiting for apiserver process to appear ...
	I1030 19:45:33.400686  446887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:33.520596  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:33.521020  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:33.521041  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:33.520992  448226 retry.go:31] will retry after 1.787777673s: waiting for machine to come up
	I1030 19:45:35.310399  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:35.310878  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:35.310906  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:35.310833  448226 retry.go:31] will retry after 2.264310439s: waiting for machine to come up
	I1030 19:45:37.577787  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:37.578276  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:37.578310  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:37.578214  448226 retry.go:31] will retry after 2.384410161s: waiting for machine to come up
	I1030 19:45:33.901397  446887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:34.400978  446887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:34.901476  446887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:35.401772  446887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:35.420824  446887 api_server.go:72] duration metric: took 2.020238714s to wait for apiserver process to appear ...
	I1030 19:45:35.420862  446887 api_server.go:88] waiting for apiserver healthz status ...
	I1030 19:45:35.420889  446887 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8444/healthz ...
	I1030 19:45:37.795897  446887 api_server.go:279] https://192.168.39.92:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1030 19:45:37.795931  446887 api_server.go:103] status: https://192.168.39.92:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1030 19:45:37.795948  446887 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8444/healthz ...
	I1030 19:45:37.848032  446887 api_server.go:279] https://192.168.39.92:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1030 19:45:37.848069  446887 api_server.go:103] status: https://192.168.39.92:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1030 19:45:37.921286  446887 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8444/healthz ...
	I1030 19:45:37.930778  446887 api_server.go:279] https://192.168.39.92:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:45:37.930822  446887 api_server.go:103] status: https://192.168.39.92:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:45:38.421866  446887 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8444/healthz ...
	I1030 19:45:38.429247  446887 api_server.go:279] https://192.168.39.92:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:45:38.429291  446887 api_server.go:103] status: https://192.168.39.92:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:45:38.921655  446887 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8444/healthz ...
	I1030 19:45:38.928650  446887 api_server.go:279] https://192.168.39.92:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:45:38.928680  446887 api_server.go:103] status: https://192.168.39.92:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:45:39.421195  446887 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8444/healthz ...
	I1030 19:45:39.425565  446887 api_server.go:279] https://192.168.39.92:8444/healthz returned 200:
	ok
	I1030 19:45:39.433509  446887 api_server.go:141] control plane version: v1.31.2
	I1030 19:45:39.433543  446887 api_server.go:131] duration metric: took 4.01267362s to wait for apiserver health ...
	I1030 19:45:39.433555  446887 cni.go:84] Creating CNI manager for ""
	I1030 19:45:39.433564  446887 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:45:39.435645  446887 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1030 19:45:39.437042  446887 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1030 19:45:39.456091  446887 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1030 19:45:39.477617  446887 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 19:45:39.485998  446887 system_pods.go:59] 8 kube-system pods found
	I1030 19:45:39.486041  446887 system_pods.go:61] "coredns-7c65d6cfc9-9w8m8" [d285a845-e17e-4b87-837b-167dbd29f090] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1030 19:45:39.486051  446887 system_pods.go:61] "etcd-default-k8s-diff-port-768989" [400eedbb-ea1d-47a7-b342-ad29c8851552] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1030 19:45:39.486061  446887 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-768989" [40389878-99e8-4ae1-899b-a61fc3b7100b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1030 19:45:39.486071  446887 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-768989" [4d4ca5b4-d36d-4f69-9712-b7544c4c9a43] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1030 19:45:39.486082  446887 system_pods.go:61] "kube-proxy-tsr5q" [60ad5830-1638-4209-a2b3-ef78b1df3d34] Running
	I1030 19:45:39.486087  446887 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-768989" [af4c921c-9ea2-4bf3-84c2-8748c3c210bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1030 19:45:39.486092  446887 system_pods.go:61] "metrics-server-6867b74b74-t85rd" [8e162c99-2a94-4340-abe9-f1b312980444] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:45:39.486095  446887 system_pods.go:61] "storage-provisioner" [76805df9-1fbf-468d-a909-3b7c4f889e11] Running
	I1030 19:45:39.486101  446887 system_pods.go:74] duration metric: took 8.467537ms to wait for pod list to return data ...
	I1030 19:45:39.486110  446887 node_conditions.go:102] verifying NodePressure condition ...
	I1030 19:45:39.490771  446887 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 19:45:39.490793  446887 node_conditions.go:123] node cpu capacity is 2
	I1030 19:45:39.490805  446887 node_conditions.go:105] duration metric: took 4.690594ms to run NodePressure ...
	I1030 19:45:39.490821  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:39.752369  446887 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1030 19:45:39.757080  446887 kubeadm.go:739] kubelet initialised
	I1030 19:45:39.757105  446887 kubeadm.go:740] duration metric: took 4.707251ms waiting for restarted kubelet to initialise ...
	I1030 19:45:39.757114  446887 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:45:39.762374  446887 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-9w8m8" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:39.766904  446887 pod_ready.go:98] node "default-k8s-diff-port-768989" hosting pod "coredns-7c65d6cfc9-9w8m8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.766934  446887 pod_ready.go:82] duration metric: took 4.529466ms for pod "coredns-7c65d6cfc9-9w8m8" in "kube-system" namespace to be "Ready" ...
	E1030 19:45:39.766948  446887 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-768989" hosting pod "coredns-7c65d6cfc9-9w8m8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.766958  446887 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:39.771681  446887 pod_ready.go:98] node "default-k8s-diff-port-768989" hosting pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.771705  446887 pod_ready.go:82] duration metric: took 4.73772ms for pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	E1030 19:45:39.771715  446887 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-768989" hosting pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.771722  446887 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:39.776170  446887 pod_ready.go:98] node "default-k8s-diff-port-768989" hosting pod "kube-apiserver-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.776199  446887 pod_ready.go:82] duration metric: took 4.470353ms for pod "kube-apiserver-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	E1030 19:45:39.776211  446887 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-768989" hosting pod "kube-apiserver-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.776220  446887 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:39.881949  446887 pod_ready.go:98] node "default-k8s-diff-port-768989" hosting pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.881988  446887 pod_ready.go:82] duration metric: took 105.756203ms for pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	E1030 19:45:39.882027  446887 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-768989" hosting pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.882042  446887 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-tsr5q" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:40.281665  446887 pod_ready.go:98] node "default-k8s-diff-port-768989" hosting pod "kube-proxy-tsr5q" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:40.281703  446887 pod_ready.go:82] duration metric: took 399.651747ms for pod "kube-proxy-tsr5q" in "kube-system" namespace to be "Ready" ...
	E1030 19:45:40.281716  446887 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-768989" hosting pod "kube-proxy-tsr5q" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:40.281725  446887 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:40.680827  446887 pod_ready.go:98] node "default-k8s-diff-port-768989" hosting pod "kube-scheduler-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:40.680861  446887 pod_ready.go:82] duration metric: took 399.128654ms for pod "kube-scheduler-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	E1030 19:45:40.680873  446887 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-768989" hosting pod "kube-scheduler-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:40.680883  446887 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:41.086176  446887 pod_ready.go:98] node "default-k8s-diff-port-768989" hosting pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:41.086203  446887 pod_ready.go:82] duration metric: took 405.311117ms for pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace to be "Ready" ...
	E1030 19:45:41.086216  446887 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-768989" hosting pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:41.086225  446887 pod_ready.go:39] duration metric: took 1.32910228s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:45:41.086246  446887 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1030 19:45:41.100836  446887 ops.go:34] apiserver oom_adj: -16
	I1030 19:45:41.100871  446887 kubeadm.go:597] duration metric: took 9.31128777s to restartPrimaryControlPlane
	I1030 19:45:41.100887  446887 kubeadm.go:394] duration metric: took 9.358460424s to StartCluster
	I1030 19:45:41.100915  446887 settings.go:142] acquiring lock: {Name:mk3778f95b8c1775512fd8244c173f81fde2f8a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:45:41.101046  446887 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 19:45:41.103578  446887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/kubeconfig: {Name:mkad9ac9beb9742fc1a420e6d4412db8b65962e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:45:41.103910  446887 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.92 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 19:45:41.103995  446887 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1030 19:45:41.104111  446887 config.go:182] Loaded profile config "default-k8s-diff-port-768989": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:45:41.104131  446887 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-768989"
	I1030 19:45:41.104151  446887 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-768989"
	W1030 19:45:41.104159  446887 addons.go:243] addon storage-provisioner should already be in state true
	I1030 19:45:41.104175  446887 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-768989"
	I1030 19:45:41.104198  446887 host.go:66] Checking if "default-k8s-diff-port-768989" exists ...
	I1030 19:45:41.104207  446887 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-768989"
	W1030 19:45:41.104218  446887 addons.go:243] addon metrics-server should already be in state true
	I1030 19:45:41.104153  446887 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-768989"
	I1030 19:45:41.104255  446887 host.go:66] Checking if "default-k8s-diff-port-768989" exists ...
	I1030 19:45:41.104258  446887 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-768989"
	I1030 19:45:41.104672  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:41.104683  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:41.104694  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:41.104718  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:41.104728  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:41.104730  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:41.105606  446887 out.go:177] * Verifying Kubernetes components...
	I1030 19:45:41.107136  446887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:45:41.121415  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45903
	I1030 19:45:41.122053  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:41.122694  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:41.122721  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:41.123073  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:41.123682  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:41.123733  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:41.125497  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36933
	I1030 19:45:41.125546  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43659
	I1030 19:45:41.125878  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:41.125962  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:41.126425  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:41.126445  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:41.126465  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:41.126507  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:41.126840  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:41.126897  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:41.127362  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:41.127392  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:41.127590  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetState
	I1030 19:45:41.131397  446887 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-768989"
	W1030 19:45:41.131424  446887 addons.go:243] addon default-storageclass should already be in state true
	I1030 19:45:41.131457  446887 host.go:66] Checking if "default-k8s-diff-port-768989" exists ...
	I1030 19:45:41.131834  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:41.131877  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:41.143183  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34953
	I1030 19:45:41.143221  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35029
	I1030 19:45:41.143628  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:41.143765  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:41.144231  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:41.144249  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:41.144369  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:41.144392  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:41.144657  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:41.144766  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:41.144879  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetState
	I1030 19:45:41.144926  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetState
	I1030 19:45:41.146739  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:41.146913  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:41.148740  446887 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1030 19:45:41.148794  446887 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:45:41.149853  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33425
	I1030 19:45:41.150250  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:41.150397  446887 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1030 19:45:41.150435  446887 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1030 19:45:41.150462  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:41.150525  446887 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 19:45:41.150545  446887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1030 19:45:41.150562  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:41.150763  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:41.150781  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:41.151168  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:41.152135  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:41.152184  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:41.154133  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:41.154425  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:41.154625  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:41.154654  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:41.154811  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:41.154996  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:41.155033  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:41.155059  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:41.155145  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:41.155214  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:41.155310  446887 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa Username:docker}
	I1030 19:45:41.155345  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:41.155464  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:41.155548  446887 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa Username:docker}
	I1030 19:45:41.168971  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43017
	I1030 19:45:41.169445  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:41.169946  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:41.169969  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:41.170335  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:41.170508  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetState
	I1030 19:45:41.172162  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:41.172378  446887 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1030 19:45:41.172394  446887 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1030 19:45:41.172410  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:41.175214  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:41.175617  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:41.175643  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:41.175795  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:41.175978  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:41.176133  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:41.176301  446887 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa Username:docker}
	I1030 19:45:41.324093  446887 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:45:41.381986  446887 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-768989" to be "Ready" ...
	I1030 19:45:41.439497  446887 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1030 19:45:41.439522  446887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1030 19:45:41.448751  446887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1030 19:45:41.486707  446887 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1030 19:45:41.486736  446887 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1030 19:45:41.514478  446887 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1030 19:45:41.514513  446887 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1030 19:45:41.546821  446887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1030 19:45:41.590509  446887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 19:45:41.879189  446887 main.go:141] libmachine: Making call to close driver server
	I1030 19:45:41.879224  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Close
	I1030 19:45:41.879548  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | Closing plugin on server side
	I1030 19:45:41.879597  446887 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:45:41.879608  446887 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:45:41.879622  446887 main.go:141] libmachine: Making call to close driver server
	I1030 19:45:41.879632  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Close
	I1030 19:45:41.879868  446887 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:45:41.879886  446887 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:45:41.889008  446887 main.go:141] libmachine: Making call to close driver server
	I1030 19:45:41.889024  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Close
	I1030 19:45:41.889273  446887 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:45:41.889290  446887 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:45:42.499223  446887 main.go:141] libmachine: Making call to close driver server
	I1030 19:45:42.499250  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Close
	I1030 19:45:42.499597  446887 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:45:42.499621  446887 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:45:42.499632  446887 main.go:141] libmachine: Making call to close driver server
	I1030 19:45:42.499689  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Close
	I1030 19:45:42.499969  446887 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:45:42.499984  446887 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:45:42.499996  446887 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-768989"
	I1030 19:45:42.598713  446887 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.008157275s)
	I1030 19:45:42.598770  446887 main.go:141] libmachine: Making call to close driver server
	I1030 19:45:42.598782  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Close
	I1030 19:45:42.599088  446887 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:45:42.599109  446887 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:45:42.599117  446887 main.go:141] libmachine: Making call to close driver server
	I1030 19:45:42.599143  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | Closing plugin on server side
	I1030 19:45:42.599201  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Close
	I1030 19:45:42.599447  446887 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:45:42.599461  446887 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:45:42.601840  446887 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I1030 19:45:39.963885  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:39.964308  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:39.964346  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:39.964250  448226 retry.go:31] will retry after 4.32150593s: waiting for machine to come up
	I1030 19:45:42.603197  446887 addons.go:510] duration metric: took 1.499214294s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I1030 19:45:43.386074  446887 node_ready.go:53] node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:45.631177  447486 start.go:364] duration metric: took 3m33.722307877s to acquireMachinesLock for "old-k8s-version-516975"
	I1030 19:45:45.631272  447486 start.go:96] Skipping create...Using existing machine configuration
	I1030 19:45:45.631284  447486 fix.go:54] fixHost starting: 
	I1030 19:45:45.631708  447486 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:45.631767  447486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:45.648654  447486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40707
	I1030 19:45:45.649098  447486 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:45.649552  447486 main.go:141] libmachine: Using API Version  1
	I1030 19:45:45.649574  447486 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:45.649848  447486 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:45.650005  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:45:45.650153  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetState
	I1030 19:45:45.651624  447486 fix.go:112] recreateIfNeeded on old-k8s-version-516975: state=Stopped err=<nil>
	I1030 19:45:45.651661  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	W1030 19:45:45.651805  447486 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 19:45:45.654065  447486 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-516975" ...
	I1030 19:45:45.655382  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .Start
	I1030 19:45:45.655554  447486 main.go:141] libmachine: (old-k8s-version-516975) Ensuring networks are active...
	I1030 19:45:45.656134  447486 main.go:141] libmachine: (old-k8s-version-516975) Ensuring network default is active
	I1030 19:45:45.656518  447486 main.go:141] libmachine: (old-k8s-version-516975) Ensuring network mk-old-k8s-version-516975 is active
	I1030 19:45:45.656885  447486 main.go:141] libmachine: (old-k8s-version-516975) Getting domain xml...
	I1030 19:45:45.657501  447486 main.go:141] libmachine: (old-k8s-version-516975) Creating domain...
	I1030 19:45:44.289530  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.289944  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has current primary IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.289965  446965 main.go:141] libmachine: (embed-certs-042402) Found IP for machine: 192.168.61.235
	I1030 19:45:44.289978  446965 main.go:141] libmachine: (embed-certs-042402) Reserving static IP address...
	I1030 19:45:44.290419  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "embed-certs-042402", mac: "52:54:00:61:aa:58", ip: "192.168.61.235"} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.290450  446965 main.go:141] libmachine: (embed-certs-042402) Reserved static IP address: 192.168.61.235
	I1030 19:45:44.290469  446965 main.go:141] libmachine: (embed-certs-042402) DBG | skip adding static IP to network mk-embed-certs-042402 - found existing host DHCP lease matching {name: "embed-certs-042402", mac: "52:54:00:61:aa:58", ip: "192.168.61.235"}
	I1030 19:45:44.290502  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Getting to WaitForSSH function...
	I1030 19:45:44.290519  446965 main.go:141] libmachine: (embed-certs-042402) Waiting for SSH to be available...
	I1030 19:45:44.292418  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.292684  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.292727  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.292750  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Using SSH client type: external
	I1030 19:45:44.292785  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa (-rw-------)
	I1030 19:45:44.292839  446965 main.go:141] libmachine: (embed-certs-042402) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.235 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 19:45:44.292856  446965 main.go:141] libmachine: (embed-certs-042402) DBG | About to run SSH command:
	I1030 19:45:44.292873  446965 main.go:141] libmachine: (embed-certs-042402) DBG | exit 0
	I1030 19:45:44.414810  446965 main.go:141] libmachine: (embed-certs-042402) DBG | SSH cmd err, output: <nil>: 
	I1030 19:45:44.415211  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetConfigRaw
	I1030 19:45:44.416039  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetIP
	I1030 19:45:44.418830  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.419269  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.419303  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.419529  446965 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/config.json ...
	I1030 19:45:44.419832  446965 machine.go:93] provisionDockerMachine start ...
	I1030 19:45:44.419859  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:45:44.420102  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:44.422359  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.422704  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.422729  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.422878  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:44.423072  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:44.423217  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:44.423355  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:44.423493  446965 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:44.423677  446965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.235 22 <nil> <nil>}
	I1030 19:45:44.423685  446965 main.go:141] libmachine: About to run SSH command:
	hostname
	I1030 19:45:44.527214  446965 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1030 19:45:44.527248  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetMachineName
	I1030 19:45:44.527526  446965 buildroot.go:166] provisioning hostname "embed-certs-042402"
	I1030 19:45:44.527562  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetMachineName
	I1030 19:45:44.527793  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:44.530474  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.530830  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.530856  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.531041  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:44.531243  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:44.531432  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:44.531563  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:44.531736  446965 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:44.531958  446965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.235 22 <nil> <nil>}
	I1030 19:45:44.531979  446965 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-042402 && echo "embed-certs-042402" | sudo tee /etc/hostname
	I1030 19:45:44.656963  446965 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-042402
	
	I1030 19:45:44.656996  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:44.659958  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.660361  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.660397  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.660643  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:44.660842  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:44.661025  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:44.661122  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:44.661295  446965 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:44.661469  446965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.235 22 <nil> <nil>}
	I1030 19:45:44.661484  446965 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-042402' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-042402/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-042402' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 19:45:44.771688  446965 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 19:45:44.771728  446965 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 19:45:44.771755  446965 buildroot.go:174] setting up certificates
	I1030 19:45:44.771766  446965 provision.go:84] configureAuth start
	I1030 19:45:44.771780  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetMachineName
	I1030 19:45:44.772120  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetIP
	I1030 19:45:44.774838  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.775271  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.775298  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.775424  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:44.777432  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.777765  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.777793  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.777910  446965 provision.go:143] copyHostCerts
	I1030 19:45:44.777990  446965 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem, removing ...
	I1030 19:45:44.778006  446965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 19:45:44.778057  446965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 19:45:44.778147  446965 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem, removing ...
	I1030 19:45:44.778155  446965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 19:45:44.778174  446965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 19:45:44.778229  446965 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem, removing ...
	I1030 19:45:44.778237  446965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 19:45:44.778253  446965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 19:45:44.778360  446965 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.embed-certs-042402 san=[127.0.0.1 192.168.61.235 embed-certs-042402 localhost minikube]
	I1030 19:45:45.019172  446965 provision.go:177] copyRemoteCerts
	I1030 19:45:45.019234  446965 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 19:45:45.019265  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:45.022052  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.022402  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:45.022435  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.022590  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:45.022788  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.022969  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:45.023123  446965 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa Username:docker}
	I1030 19:45:45.104733  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 19:45:45.128256  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1030 19:45:45.150758  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1030 19:45:45.173233  446965 provision.go:87] duration metric: took 401.450922ms to configureAuth
	I1030 19:45:45.173268  446965 buildroot.go:189] setting minikube options for container-runtime
	I1030 19:45:45.173465  446965 config.go:182] Loaded profile config "embed-certs-042402": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:45:45.173562  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:45.176259  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.176663  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:45.176698  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.176826  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:45.177025  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.177190  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.177364  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:45.177554  446965 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:45.177724  446965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.235 22 <nil> <nil>}
	I1030 19:45:45.177737  446965 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 19:45:45.396562  446965 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 19:45:45.396593  446965 machine.go:96] duration metric: took 976.740759ms to provisionDockerMachine
	I1030 19:45:45.396606  446965 start.go:293] postStartSetup for "embed-certs-042402" (driver="kvm2")
	I1030 19:45:45.396616  446965 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 19:45:45.396644  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:45:45.397007  446965 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 19:45:45.397048  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:45.399581  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.399930  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:45.399955  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.400045  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:45.400219  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.400373  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:45.400483  446965 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa Username:docker}
	I1030 19:45:45.481722  446965 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 19:45:45.487207  446965 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 19:45:45.487231  446965 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 19:45:45.487304  446965 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 19:45:45.487398  446965 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 19:45:45.487516  446965 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 19:45:45.500340  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:45:45.524930  446965 start.go:296] duration metric: took 128.310254ms for postStartSetup
	I1030 19:45:45.524972  446965 fix.go:56] duration metric: took 19.709339085s for fixHost
	I1030 19:45:45.524993  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:45.527426  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.527751  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:45.527775  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.527931  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:45.528145  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.528326  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.528450  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:45.528591  446965 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:45.528804  446965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.235 22 <nil> <nil>}
	I1030 19:45:45.528815  446965 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 19:45:45.630961  446965 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730317545.604586107
	
	I1030 19:45:45.630997  446965 fix.go:216] guest clock: 1730317545.604586107
	I1030 19:45:45.631020  446965 fix.go:229] Guest: 2024-10-30 19:45:45.604586107 +0000 UTC Remote: 2024-10-30 19:45:45.524975841 +0000 UTC m=+302.540999350 (delta=79.610266ms)
	I1030 19:45:45.631054  446965 fix.go:200] guest clock delta is within tolerance: 79.610266ms
	I1030 19:45:45.631062  446965 start.go:83] releasing machines lock for "embed-certs-042402", held for 19.81546348s
	I1030 19:45:45.631109  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:45:45.631396  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetIP
	I1030 19:45:45.634114  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.634524  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:45.634558  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.634739  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:45:45.635353  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:45:45.635532  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:45:45.635646  446965 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 19:45:45.635692  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:45.635746  446965 ssh_runner.go:195] Run: cat /version.json
	I1030 19:45:45.635775  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:45.638260  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.638639  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:45.638694  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.638718  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.638931  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:45.639108  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:45.639128  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.639160  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.639260  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:45.639371  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:45.639440  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.639509  446965 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa Username:docker}
	I1030 19:45:45.639581  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:45.639723  446965 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa Username:docker}
	I1030 19:45:45.747515  446965 ssh_runner.go:195] Run: systemctl --version
	I1030 19:45:45.754851  446965 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 19:45:45.904471  446965 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 19:45:45.911348  446965 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 19:45:45.911428  446965 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 19:45:45.928273  446965 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 19:45:45.928299  446965 start.go:495] detecting cgroup driver to use...
	I1030 19:45:45.928381  446965 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 19:45:45.949100  446965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 19:45:45.963284  446965 docker.go:217] disabling cri-docker service (if available) ...
	I1030 19:45:45.963362  446965 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 19:45:45.976952  446965 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 19:45:45.991367  446965 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 19:45:46.104670  446965 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 19:45:46.254049  446965 docker.go:233] disabling docker service ...
	I1030 19:45:46.254130  446965 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 19:45:46.273226  446965 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 19:45:46.290211  446965 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 19:45:46.491658  446965 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 19:45:46.637447  446965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 19:45:46.654517  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 19:45:46.679786  446965 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1030 19:45:46.679879  446965 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:46.695487  446965 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 19:45:46.695570  446965 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:46.708974  446965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:46.724847  446965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:46.736912  446965 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 19:45:46.749015  446965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:46.761190  446965 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:46.780198  446965 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:46.790865  446965 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 19:45:46.800950  446965 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 19:45:46.801029  446965 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 19:45:46.814792  446965 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 19:45:46.825490  446965 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:45:46.952367  446965 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1030 19:45:47.054874  446965 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 19:45:47.054962  446965 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 19:45:47.061036  446965 start.go:563] Will wait 60s for crictl version
	I1030 19:45:47.061105  446965 ssh_runner.go:195] Run: which crictl
	I1030 19:45:47.064917  446965 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 19:45:47.101690  446965 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1030 19:45:47.101796  446965 ssh_runner.go:195] Run: crio --version
	I1030 19:45:47.131286  446965 ssh_runner.go:195] Run: crio --version
	I1030 19:45:47.166314  446965 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1030 19:45:47.167861  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetIP
	I1030 19:45:47.171097  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:47.171438  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:47.171466  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:47.171737  446965 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1030 19:45:47.177796  446965 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:45:47.191930  446965 kubeadm.go:883] updating cluster {Name:embed-certs-042402 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-042402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.235 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1030 19:45:47.192090  446965 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 19:45:47.192149  446965 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:45:47.231586  446965 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1030 19:45:47.231672  446965 ssh_runner.go:195] Run: which lz4
	I1030 19:45:47.236190  446965 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1030 19:45:47.240803  446965 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1030 19:45:47.240888  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1030 19:45:45.386683  446887 node_ready.go:53] node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:47.386771  446887 node_ready.go:53] node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:48.387313  446887 node_ready.go:49] node "default-k8s-diff-port-768989" has status "Ready":"True"
	I1030 19:45:48.387344  446887 node_ready.go:38] duration metric: took 7.005318984s for node "default-k8s-diff-port-768989" to be "Ready" ...
	I1030 19:45:48.387359  446887 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:45:48.395198  446887 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9w8m8" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:48.401276  446887 pod_ready.go:93] pod "coredns-7c65d6cfc9-9w8m8" in "kube-system" namespace has status "Ready":"True"
	I1030 19:45:48.401306  446887 pod_ready.go:82] duration metric: took 6.071305ms for pod "coredns-7c65d6cfc9-9w8m8" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:48.401321  446887 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:47.003397  447486 main.go:141] libmachine: (old-k8s-version-516975) Waiting to get IP...
	I1030 19:45:47.004281  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:47.004710  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:47.004787  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:47.004695  448432 retry.go:31] will retry after 234.659459ms: waiting for machine to come up
	I1030 19:45:47.241308  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:47.241838  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:47.241863  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:47.241802  448432 retry.go:31] will retry after 350.804975ms: waiting for machine to come up
	I1030 19:45:47.594533  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:47.595106  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:47.595139  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:47.595044  448432 retry.go:31] will retry after 448.637889ms: waiting for machine to come up
	I1030 19:45:48.045858  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:48.046358  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:48.046386  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:48.046315  448432 retry.go:31] will retry after 543.947609ms: waiting for machine to come up
	I1030 19:45:48.592474  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:48.592908  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:48.592937  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:48.592875  448432 retry.go:31] will retry after 744.106735ms: waiting for machine to come up
	I1030 19:45:49.338345  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:49.338833  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:49.338857  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:49.338795  448432 retry.go:31] will retry after 927.743369ms: waiting for machine to come up
	I1030 19:45:50.267844  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:50.268359  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:50.268390  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:50.268324  448432 retry.go:31] will retry after 829.540351ms: waiting for machine to come up
	I1030 19:45:51.099379  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:51.099863  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:51.099893  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:51.099820  448432 retry.go:31] will retry after 898.768304ms: waiting for machine to come up
	I1030 19:45:48.672337  446965 crio.go:462] duration metric: took 1.436158626s to copy over tarball
	I1030 19:45:48.672439  446965 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1030 19:45:50.859055  446965 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.186572123s)
	I1030 19:45:50.859101  446965 crio.go:469] duration metric: took 2.186725028s to extract the tarball
	I1030 19:45:50.859113  446965 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1030 19:45:50.896570  446965 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:45:50.946526  446965 crio.go:514] all images are preloaded for cri-o runtime.
	I1030 19:45:50.946558  446965 cache_images.go:84] Images are preloaded, skipping loading
	I1030 19:45:50.946567  446965 kubeadm.go:934] updating node { 192.168.61.235 8443 v1.31.2 crio true true} ...
	I1030 19:45:50.946668  446965 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-042402 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.235
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-042402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1030 19:45:50.946748  446965 ssh_runner.go:195] Run: crio config
	I1030 19:45:50.992305  446965 cni.go:84] Creating CNI manager for ""
	I1030 19:45:50.992337  446965 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:45:50.992348  446965 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1030 19:45:50.992374  446965 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.235 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-042402 NodeName:embed-certs-042402 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.235"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.235 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1030 19:45:50.992530  446965 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.235
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-042402"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.235"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.235"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1030 19:45:50.992616  446965 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1030 19:45:51.002586  446965 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 19:45:51.002668  446965 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1030 19:45:51.012058  446965 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1030 19:45:51.028645  446965 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 19:45:51.044912  446965 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
	I1030 19:45:51.060991  446965 ssh_runner.go:195] Run: grep 192.168.61.235	control-plane.minikube.internal$ /etc/hosts
	I1030 19:45:51.064808  446965 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.235	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:45:51.076790  446965 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:45:51.205861  446965 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:45:51.224763  446965 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402 for IP: 192.168.61.235
	I1030 19:45:51.224791  446965 certs.go:194] generating shared ca certs ...
	I1030 19:45:51.224812  446965 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:45:51.224986  446965 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 19:45:51.225046  446965 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 19:45:51.225059  446965 certs.go:256] generating profile certs ...
	I1030 19:45:51.225175  446965 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/client.key
	I1030 19:45:51.225256  446965 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/apiserver.key.f6f7691e
	I1030 19:45:51.225314  446965 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/proxy-client.key
	I1030 19:45:51.225469  446965 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem (1338 bytes)
	W1030 19:45:51.225518  446965 certs.go:480] ignoring /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144_empty.pem, impossibly tiny 0 bytes
	I1030 19:45:51.225540  446965 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 19:45:51.225574  446965 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 19:45:51.225612  446965 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 19:45:51.225651  446965 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 19:45:51.225714  446965 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:45:51.226718  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 19:45:51.278345  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 19:45:51.308707  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 19:45:51.349986  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 19:45:51.382176  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1030 19:45:51.426538  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1030 19:45:51.457131  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 19:45:51.481165  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1030 19:45:51.505285  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /usr/share/ca-certificates/3891442.pem (1708 bytes)
	I1030 19:45:51.533986  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 19:45:51.562660  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem --> /usr/share/ca-certificates/389144.pem (1338 bytes)
	I1030 19:45:51.586002  446965 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1030 19:45:51.602544  446965 ssh_runner.go:195] Run: openssl version
	I1030 19:45:51.608479  446965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 19:45:51.620650  446965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:45:51.625243  446965 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:45:51.625294  446965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:45:51.631138  446965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 19:45:51.643167  446965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/389144.pem && ln -fs /usr/share/ca-certificates/389144.pem /etc/ssl/certs/389144.pem"
	I1030 19:45:51.655128  446965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/389144.pem
	I1030 19:45:51.659528  446965 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 19:45:51.659600  446965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/389144.pem
	I1030 19:45:51.665370  446965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/389144.pem /etc/ssl/certs/51391683.0"
	I1030 19:45:51.676314  446965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3891442.pem && ln -fs /usr/share/ca-certificates/3891442.pem /etc/ssl/certs/3891442.pem"
	I1030 19:45:51.687386  446965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3891442.pem
	I1030 19:45:51.692170  446965 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 19:45:51.692228  446965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3891442.pem
	I1030 19:45:51.697897  446965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3891442.pem /etc/ssl/certs/3ec20f2e.0"
	I1030 19:45:51.709561  446965 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 19:45:51.715357  446965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1030 19:45:51.723291  446965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1030 19:45:51.731362  446965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1030 19:45:51.739724  446965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1030 19:45:51.747383  446965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1030 19:45:51.753472  446965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1030 19:45:51.759462  446965 kubeadm.go:392] StartCluster: {Name:embed-certs-042402 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-042402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.235 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:45:51.759605  446965 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1030 19:45:51.759702  446965 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:45:51.806863  446965 cri.go:89] found id: ""
	I1030 19:45:51.806956  446965 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1030 19:45:51.818195  446965 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1030 19:45:51.818218  446965 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1030 19:45:51.818274  446965 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1030 19:45:51.828762  446965 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1030 19:45:51.830149  446965 kubeconfig.go:125] found "embed-certs-042402" server: "https://192.168.61.235:8443"
	I1030 19:45:51.832269  446965 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1030 19:45:51.842769  446965 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.235
	I1030 19:45:51.842808  446965 kubeadm.go:1160] stopping kube-system containers ...
	I1030 19:45:51.842823  446965 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1030 19:45:51.842889  446965 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:45:51.887128  446965 cri.go:89] found id: ""
	I1030 19:45:51.887209  446965 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1030 19:45:51.911918  446965 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:45:51.922685  446965 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:45:51.922714  446965 kubeadm.go:157] found existing configuration files:
	
	I1030 19:45:51.922770  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 19:45:51.935548  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:45:51.935620  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:45:51.948635  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 19:45:51.961647  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:45:51.961745  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:45:51.975880  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 19:45:51.986852  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:45:51.986922  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:45:52.001290  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 19:45:52.015249  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:45:52.015333  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1030 19:45:52.026657  446965 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 19:45:52.038560  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:52.167697  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:50.408274  446887 pod_ready.go:103] pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace has status "Ready":"False"
	I1030 19:45:51.407818  446887 pod_ready.go:93] pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace has status "Ready":"True"
	I1030 19:45:51.407850  446887 pod_ready.go:82] duration metric: took 3.006520689s for pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:51.407865  446887 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:51.413452  446887 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-768989" in "kube-system" namespace has status "Ready":"True"
	I1030 19:45:51.413481  446887 pod_ready.go:82] duration metric: took 5.607077ms for pod "kube-apiserver-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:51.413495  446887 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:52.000678  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:52.001196  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:52.001235  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:52.001148  448432 retry.go:31] will retry after 1.750749509s: waiting for machine to come up
	I1030 19:45:53.753607  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:53.754013  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:53.754038  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:53.753950  448432 retry.go:31] will retry after 1.537350682s: waiting for machine to come up
	I1030 19:45:55.293910  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:55.294396  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:55.294427  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:55.294336  448432 retry.go:31] will retry after 2.151521323s: waiting for machine to come up
	I1030 19:45:53.477258  446965 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.309509141s)
	I1030 19:45:53.477309  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:53.696850  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:53.768419  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:53.863913  446965 api_server.go:52] waiting for apiserver process to appear ...
	I1030 19:45:53.864018  446965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:54.364235  446965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:54.864820  446965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:54.887333  446965 api_server.go:72] duration metric: took 1.023419155s to wait for apiserver process to appear ...
	I1030 19:45:54.887363  446965 api_server.go:88] waiting for apiserver healthz status ...
	I1030 19:45:54.887399  446965 api_server.go:253] Checking apiserver healthz at https://192.168.61.235:8443/healthz ...
	I1030 19:45:54.887929  446965 api_server.go:269] stopped: https://192.168.61.235:8443/healthz: Get "https://192.168.61.235:8443/healthz": dial tcp 192.168.61.235:8443: connect: connection refused
	I1030 19:45:55.388396  446965 api_server.go:253] Checking apiserver healthz at https://192.168.61.235:8443/healthz ...
	I1030 19:45:57.610916  446965 api_server.go:279] https://192.168.61.235:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1030 19:45:57.610951  446965 api_server.go:103] status: https://192.168.61.235:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1030 19:45:57.610972  446965 api_server.go:253] Checking apiserver healthz at https://192.168.61.235:8443/healthz ...
	I1030 19:45:57.745722  446965 api_server.go:279] https://192.168.61.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:45:57.745782  446965 api_server.go:103] status: https://192.168.61.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:45:57.887887  446965 api_server.go:253] Checking apiserver healthz at https://192.168.61.235:8443/healthz ...
	I1030 19:45:57.895296  446965 api_server.go:279] https://192.168.61.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:45:57.895352  446965 api_server.go:103] status: https://192.168.61.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:45:54.167893  446887 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace has status "Ready":"False"
	I1030 19:45:54.920921  446887 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace has status "Ready":"True"
	I1030 19:45:54.920954  446887 pod_ready.go:82] duration metric: took 3.507449937s for pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:54.920974  446887 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tsr5q" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:54.927123  446887 pod_ready.go:93] pod "kube-proxy-tsr5q" in "kube-system" namespace has status "Ready":"True"
	I1030 19:45:54.927150  446887 pod_ready.go:82] duration metric: took 6.167749ms for pod "kube-proxy-tsr5q" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:54.927164  446887 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:54.932513  446887 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-768989" in "kube-system" namespace has status "Ready":"True"
	I1030 19:45:54.932540  446887 pod_ready.go:82] duration metric: took 5.367579ms for pod "kube-scheduler-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:54.932557  446887 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:56.939174  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
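The pod_ready.go lines above all follow the same pattern: poll a pod in the kube-system namespace until its Ready condition flips to "True" or a 6m0s budget runs out. The Go sketch below shows roughly what such a wait looks like with client-go; it is illustrative only (not minikube's pod_ready.go), and the kubeconfig path and 2-second poll interval are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is "True".
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForPodReady polls the pod until it is Ready or the timeout expires.
func waitForPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && podReady(pod) {
			return nil
		}
		time.Sleep(2 * time.Second) // assumed poll interval
	}
	return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
}

func main() {
	// kubeconfig path is illustrative, not taken from the log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPodReady(cs, "kube-system", "metrics-server-6867b74b74-t85rd", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}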
	I1030 19:45:58.388076  446965 api_server.go:253] Checking apiserver healthz at https://192.168.61.235:8443/healthz ...
	I1030 19:45:58.393192  446965 api_server.go:279] https://192.168.61.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:45:58.393235  446965 api_server.go:103] status: https://192.168.61.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:45:58.887710  446965 api_server.go:253] Checking apiserver healthz at https://192.168.61.235:8443/healthz ...
	I1030 19:45:58.891923  446965 api_server.go:279] https://192.168.61.235:8443/healthz returned 200:
	ok
	I1030 19:45:58.897783  446965 api_server.go:141] control plane version: v1.31.2
	I1030 19:45:58.897816  446965 api_server.go:131] duration metric: took 4.010443495s to wait for apiserver health ...
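The healthz wait that just completed boils down to repeatedly GETing https://192.168.61.235:8443/healthz and accepting only a 200 "ok" (403 from the anonymous probe and 500 from unfinished poststarthooks are retried). A minimal Go loop for that kind of probe is sketched below; it is not minikube's api_server.go implementation, and the client timeout, poll interval and overall deadline are assumptions.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it answers 200 or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The probe is anonymous, so skip server cert verification
		// (consistent with the "system:anonymous" 403 responses above).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver answered "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // the log polls roughly every half second
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.235:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}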
	I1030 19:45:58.897836  446965 cni.go:84] Creating CNI manager for ""
	I1030 19:45:58.897844  446965 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:45:58.899669  446965 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1030 19:45:57.447894  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:57.448365  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:57.448392  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:57.448320  448432 retry.go:31] will retry after 2.439938206s: waiting for machine to come up
	I1030 19:45:59.889685  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:59.890166  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:59.890205  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:59.890113  448432 retry.go:31] will retry after 3.836080386s: waiting for machine to come up
	I1030 19:45:58.901122  446965 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1030 19:45:58.924765  446965 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1030 19:45:58.946342  446965 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 19:45:58.956378  446965 system_pods.go:59] 8 kube-system pods found
	I1030 19:45:58.956412  446965 system_pods.go:61] "coredns-7c65d6cfc9-tv6kc" [d752975e-e126-4d22-9b35-b9f57d1170b1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1030 19:45:58.956419  446965 system_pods.go:61] "etcd-embed-certs-042402" [fa9b90f6-82b2-448a-ad86-9cbba45a4c2d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1030 19:45:58.956427  446965 system_pods.go:61] "kube-apiserver-embed-certs-042402" [48af3136-74d9-4062-bb9a-e48dafd311a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1030 19:45:58.956436  446965 system_pods.go:61] "kube-controller-manager-embed-certs-042402" [0ae60724-6634-464a-af2f-e08148fb3eaf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1030 19:45:58.956445  446965 system_pods.go:61] "kube-proxy-qwjr9" [309ee447-8d52-49e7-a805-2b7c0af2a3bc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1030 19:45:58.956450  446965 system_pods.go:61] "kube-scheduler-embed-certs-042402" [f82ff11e-8305-4d05-b370-fd89693e5ad1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1030 19:45:58.956454  446965 system_pods.go:61] "metrics-server-6867b74b74-4x9t6" [1160789d-9462-4d1d-9f84-5ded8394bd4e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:45:58.956459  446965 system_pods.go:61] "storage-provisioner" [d1559440-b14a-4c2a-a52e-ba39afb01f94] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1030 19:45:58.956465  446965 system_pods.go:74] duration metric: took 10.103898ms to wait for pod list to return data ...
	I1030 19:45:58.956473  446965 node_conditions.go:102] verifying NodePressure condition ...
	I1030 19:45:58.960150  446965 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 19:45:58.960182  446965 node_conditions.go:123] node cpu capacity is 2
	I1030 19:45:58.960195  446965 node_conditions.go:105] duration metric: took 3.712942ms to run NodePressure ...
	I1030 19:45:58.960219  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:59.284558  446965 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1030 19:45:59.289073  446965 kubeadm.go:739] kubelet initialised
	I1030 19:45:59.289095  446965 kubeadm.go:740] duration metric: took 4.508144ms waiting for restarted kubelet to initialise ...
	I1030 19:45:59.289104  446965 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:45:59.293538  446965 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-tv6kc" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:01.298780  446965 pod_ready.go:103] pod "coredns-7c65d6cfc9-tv6kc" in "kube-system" namespace has status "Ready":"False"
	I1030 19:45:58.940597  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:01.439118  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:05.011617  446736 start.go:364] duration metric: took 52.494265895s to acquireMachinesLock for "no-preload-960512"
	I1030 19:46:05.011674  446736 start.go:96] Skipping create...Using existing machine configuration
	I1030 19:46:05.011683  446736 fix.go:54] fixHost starting: 
	I1030 19:46:05.012022  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:05.012087  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:05.029067  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46027
	I1030 19:46:05.029484  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:05.030010  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:05.030039  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:05.030461  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:05.030690  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:05.030854  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetState
	I1030 19:46:05.032380  446736 fix.go:112] recreateIfNeeded on no-preload-960512: state=Stopped err=<nil>
	I1030 19:46:05.032408  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	W1030 19:46:05.032566  446736 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 19:46:05.035693  446736 out.go:177] * Restarting existing kvm2 VM for "no-preload-960512" ...
	I1030 19:46:03.727617  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.728028  447486 main.go:141] libmachine: (old-k8s-version-516975) Found IP for machine: 192.168.50.250
	I1030 19:46:03.728046  447486 main.go:141] libmachine: (old-k8s-version-516975) Reserving static IP address...
	I1030 19:46:03.728062  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has current primary IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.728565  447486 main.go:141] libmachine: (old-k8s-version-516975) Reserved static IP address: 192.168.50.250
	I1030 19:46:03.728600  447486 main.go:141] libmachine: (old-k8s-version-516975) Waiting for SSH to be available...
	I1030 19:46:03.728616  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "old-k8s-version-516975", mac: "52:54:00:46:32:46", ip: "192.168.50.250"} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:03.728639  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | skip adding static IP to network mk-old-k8s-version-516975 - found existing host DHCP lease matching {name: "old-k8s-version-516975", mac: "52:54:00:46:32:46", ip: "192.168.50.250"}
	I1030 19:46:03.728657  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | Getting to WaitForSSH function...
	I1030 19:46:03.730754  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.731085  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:03.731121  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.731145  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | Using SSH client type: external
	I1030 19:46:03.731212  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa (-rw-------)
	I1030 19:46:03.731252  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.250 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 19:46:03.731275  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | About to run SSH command:
	I1030 19:46:03.731289  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | exit 0
	I1030 19:46:03.862423  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | SSH cmd err, output: <nil>: 
	I1030 19:46:03.862832  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetConfigRaw
	I1030 19:46:03.863519  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetIP
	I1030 19:46:03.865977  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.866262  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:03.866297  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.866512  447486 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/config.json ...
	I1030 19:46:03.866755  447486 machine.go:93] provisionDockerMachine start ...
	I1030 19:46:03.866783  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:46:03.866994  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:03.869079  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.869384  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:03.869410  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.869603  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:03.869787  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:03.869949  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:03.870102  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:03.870243  447486 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:03.870468  447486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I1030 19:46:03.870481  447486 main.go:141] libmachine: About to run SSH command:
	hostname
	I1030 19:46:03.982986  447486 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1030 19:46:03.983018  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetMachineName
	I1030 19:46:03.983285  447486 buildroot.go:166] provisioning hostname "old-k8s-version-516975"
	I1030 19:46:03.983319  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetMachineName
	I1030 19:46:03.983502  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:03.986203  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.986576  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:03.986615  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.986765  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:03.986983  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:03.987126  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:03.987258  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:03.987419  447486 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:03.987696  447486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I1030 19:46:03.987719  447486 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-516975 && echo "old-k8s-version-516975" | sudo tee /etc/hostname
	I1030 19:46:04.112692  447486 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-516975
	
	I1030 19:46:04.112719  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:04.115948  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.116283  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.116309  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.116482  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:04.116667  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.116842  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.116966  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:04.117104  447486 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:04.117275  447486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I1030 19:46:04.117290  447486 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-516975' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-516975/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-516975' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 19:46:04.235988  447486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
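The provisioning steps above (checking the hostname, setting it, patching /etc/hosts) are plain shell commands executed over SSH with the machine's private key. The sketch below shows the general shape of such a call using golang.org/x/crypto/ssh; it is illustrative only, not libmachine's implementation, and the helper name runOverSSH is made up. Address, user, key path and the "hostname" command are taken from the log lines above.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH runs a single command on the remote host and returns its combined output.
func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("192.168.50.250:22", "docker",
		"/home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa",
		"hostname")
	fmt.Println(out, err)
}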
	I1030 19:46:04.236032  447486 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 19:46:04.236098  447486 buildroot.go:174] setting up certificates
	I1030 19:46:04.236111  447486 provision.go:84] configureAuth start
	I1030 19:46:04.236124  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetMachineName
	I1030 19:46:04.236500  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetIP
	I1030 19:46:04.239328  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.239707  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.239739  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.240009  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:04.242118  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.242440  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.242505  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.242683  447486 provision.go:143] copyHostCerts
	I1030 19:46:04.242766  447486 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem, removing ...
	I1030 19:46:04.242787  447486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 19:46:04.242847  447486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 19:46:04.242972  447486 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem, removing ...
	I1030 19:46:04.242986  447486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 19:46:04.243011  447486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 19:46:04.243072  447486 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem, removing ...
	I1030 19:46:04.243079  447486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 19:46:04.243095  447486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 19:46:04.243153  447486 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-516975 san=[127.0.0.1 192.168.50.250 localhost minikube old-k8s-version-516975]
	I1030 19:46:04.355003  447486 provision.go:177] copyRemoteCerts
	I1030 19:46:04.355061  447486 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 19:46:04.355092  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:04.357788  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.358153  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.358191  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.358397  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:04.358630  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.358809  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:04.358970  447486 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa Username:docker}
	I1030 19:46:04.446614  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 19:46:04.473708  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1030 19:46:04.497721  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1030 19:46:04.521806  447486 provision.go:87] duration metric: took 285.682041ms to configureAuth
	I1030 19:46:04.521836  447486 buildroot.go:189] setting minikube options for container-runtime
	I1030 19:46:04.521999  447486 config.go:182] Loaded profile config "old-k8s-version-516975": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1030 19:46:04.522072  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:04.524616  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.525034  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.525065  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.525282  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:04.525452  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.525616  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.525745  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:04.525916  447486 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:04.526129  447486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I1030 19:46:04.526145  447486 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 19:46:04.766663  447486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 19:46:04.766697  447486 machine.go:96] duration metric: took 899.924211ms to provisionDockerMachine
	I1030 19:46:04.766709  447486 start.go:293] postStartSetup for "old-k8s-version-516975" (driver="kvm2")
	I1030 19:46:04.766720  447486 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 19:46:04.766745  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:46:04.767081  447486 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 19:46:04.767114  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:04.769995  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.770401  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.770428  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.770580  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:04.770762  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.770973  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:04.771132  447486 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa Username:docker}
	I1030 19:46:04.858006  447486 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 19:46:04.862295  447486 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 19:46:04.862324  447486 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 19:46:04.862387  447486 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 19:46:04.862475  447486 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 19:46:04.862612  447486 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 19:46:04.872541  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:46:04.896306  447486 start.go:296] duration metric: took 129.577956ms for postStartSetup
	I1030 19:46:04.896360  447486 fix.go:56] duration metric: took 19.265077419s for fixHost
	I1030 19:46:04.896383  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:04.899009  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.899397  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.899429  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.899538  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:04.899739  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.899906  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.900101  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:04.900271  447486 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:04.900510  447486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I1030 19:46:04.900525  447486 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 19:46:05.011439  447486 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730317564.967936408
	
	I1030 19:46:05.011464  447486 fix.go:216] guest clock: 1730317564.967936408
	I1030 19:46:05.011472  447486 fix.go:229] Guest: 2024-10-30 19:46:04.967936408 +0000 UTC Remote: 2024-10-30 19:46:04.896364572 +0000 UTC m=+233.135558535 (delta=71.571836ms)
	I1030 19:46:05.011516  447486 fix.go:200] guest clock delta is within tolerance: 71.571836ms
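The clock check above runs `date +%s.%N` on the guest and compares the result against the host's wall clock, yielding the 71.571836ms delta that is then judged against a tolerance. A rough Go version of that arithmetic follows; the guestClockDelta helper name and the one-second tolerance are assumptions, not minikube's fix.go values.

package main

import (
	"fmt"
	"strconv"
	"time"
)

// guestClockDelta parses the guest's `date +%s.%N` output and returns guest minus host.
func guestClockDelta(guestOutput string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOutput, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Values taken from the log lines above.
	delta, err := guestClockDelta("1730317564.967936408", time.Unix(0, 1730317564896364572))
	if err != nil {
		panic(err)
	}
	const tolerance = time.Second // assumed tolerance
	if delta < tolerance && delta > -tolerance {
		fmt.Printf("guest clock delta %s is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %s exceeds tolerance, clock would need adjusting\n", delta)
	}
}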
	I1030 19:46:05.011525  447486 start.go:83] releasing machines lock for "old-k8s-version-516975", held for 19.380292064s
	I1030 19:46:05.011552  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:46:05.011853  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetIP
	I1030 19:46:05.014722  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:05.015072  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:05.015100  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:05.015225  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:46:05.015808  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:46:05.016002  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:46:05.016107  447486 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 19:46:05.016155  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:05.016265  447486 ssh_runner.go:195] Run: cat /version.json
	I1030 19:46:05.016296  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:05.018976  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:05.019189  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:05.019326  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:05.019370  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:05.019517  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:05.019604  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:05.019632  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:05.019708  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:05.019830  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:05.019918  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:05.019995  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:05.020077  447486 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa Username:docker}
	I1030 19:46:05.020157  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:05.020295  447486 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa Username:docker}
	I1030 19:46:05.100852  447486 ssh_runner.go:195] Run: systemctl --version
	I1030 19:46:05.127673  447486 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 19:46:05.279889  447486 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 19:46:05.285900  447486 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 19:46:05.285976  447486 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 19:46:05.304763  447486 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 19:46:05.304791  447486 start.go:495] detecting cgroup driver to use...
	I1030 19:46:05.304862  447486 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 19:46:05.325729  447486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 19:46:05.343047  447486 docker.go:217] disabling cri-docker service (if available) ...
	I1030 19:46:05.343128  447486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 19:46:05.358748  447486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 19:46:05.374769  447486 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 19:46:05.492589  447486 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 19:46:05.639943  447486 docker.go:233] disabling docker service ...
	I1030 19:46:05.640039  447486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 19:46:05.655449  447486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 19:46:05.669688  447486 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 19:46:05.814658  447486 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 19:46:05.957944  447486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 19:46:05.972122  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 19:46:05.990577  447486 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1030 19:46:05.990653  447486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:06.000834  447486 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 19:46:06.000907  447486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:06.011678  447486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:06.022051  447486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:06.032515  447486 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 19:46:06.043296  447486 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 19:46:06.053123  447486 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 19:46:06.053170  447486 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 19:46:06.067625  447486 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 19:46:06.081306  447486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:46:06.221181  447486 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1030 19:46:06.321848  447486 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 19:46:06.321926  447486 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 19:46:06.329697  447486 start.go:563] Will wait 60s for crictl version
	I1030 19:46:06.329757  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:06.333980  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 19:46:06.381198  447486 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1030 19:46:06.381290  447486 ssh_runner.go:195] Run: crio --version
	I1030 19:46:06.410365  447486 ssh_runner.go:195] Run: crio --version
	I1030 19:46:06.442329  447486 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1030 19:46:06.443471  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetIP
	I1030 19:46:06.446233  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:06.446621  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:06.446653  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:06.446822  447486 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1030 19:46:06.451216  447486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:46:06.464477  447486 kubeadm.go:883] updating cluster {Name:old-k8s-version-516975 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-516975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1030 19:46:06.464607  447486 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1030 19:46:06.464668  447486 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:46:06.513123  447486 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1030 19:46:06.513205  447486 ssh_runner.go:195] Run: which lz4
	I1030 19:46:06.517252  447486 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1030 19:46:06.521358  447486 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1030 19:46:06.521384  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1030 19:46:03.300213  446965 pod_ready.go:103] pod "coredns-7c65d6cfc9-tv6kc" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:05.301139  446965 pod_ready.go:103] pod "coredns-7c65d6cfc9-tv6kc" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:07.303015  446965 pod_ready.go:103] pod "coredns-7c65d6cfc9-tv6kc" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:03.939240  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:05.940212  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:07.942062  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:05.037179  446736 main.go:141] libmachine: (no-preload-960512) Calling .Start
	I1030 19:46:05.037388  446736 main.go:141] libmachine: (no-preload-960512) Ensuring networks are active...
	I1030 19:46:05.038384  446736 main.go:141] libmachine: (no-preload-960512) Ensuring network default is active
	I1030 19:46:05.038793  446736 main.go:141] libmachine: (no-preload-960512) Ensuring network mk-no-preload-960512 is active
	I1030 19:46:05.039208  446736 main.go:141] libmachine: (no-preload-960512) Getting domain xml...
	I1030 19:46:05.040083  446736 main.go:141] libmachine: (no-preload-960512) Creating domain...
	I1030 19:46:06.366674  446736 main.go:141] libmachine: (no-preload-960512) Waiting to get IP...
	I1030 19:46:06.367568  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:06.368016  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:06.368083  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:06.367984  448568 retry.go:31] will retry after 216.900908ms: waiting for machine to come up
	I1030 19:46:06.586638  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:06.587182  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:06.587213  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:06.587121  448568 retry.go:31] will retry after 319.082011ms: waiting for machine to come up
	I1030 19:46:06.907974  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:06.908650  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:06.908683  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:06.908581  448568 retry.go:31] will retry after 418.339306ms: waiting for machine to come up
	I1030 19:46:07.328241  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:07.329035  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:07.329065  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:07.328988  448568 retry.go:31] will retry after 523.624135ms: waiting for machine to come up
	I1030 19:46:07.855234  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:07.855944  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:07.855970  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:07.855849  448568 retry.go:31] will retry after 556.06146ms: waiting for machine to come up
	I1030 19:46:08.413474  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:08.414059  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:08.414098  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:08.413947  448568 retry.go:31] will retry after 713.043389ms: waiting for machine to come up
	I1030 19:46:09.128274  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:09.128737  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:09.128762  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:09.128689  448568 retry.go:31] will retry after 1.096111238s: waiting for machine to come up
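	The retry.go lines above show the driver polling libvirt for the domain's DHCP lease and sleeping a little longer after each failed attempt (217ms, 319ms, 418ms, 524ms, ...). A minimal sketch of that poll-with-growing-backoff loop follows; lookupIP is an illustrative stand-in for the libvirt query, not minikube's retry package:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP stands in for asking libvirt for the domain's DHCP lease; here it
	// is a placeholder that keeps failing so the backoff behaviour is visible.
	func lookupIP() (string, error) {
		return "", errors.New("unable to find current IP address of domain")
	}

	// waitForIP retries lookupIP until it succeeds or the deadline passes,
	// stretching the pause between attempts roughly like the log above.
	func waitForIP(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		backoff := 200 * time.Millisecond
		for attempt := 1; time.Now().Before(deadline); attempt++ {
			ip, err := lookupIP()
			if err == nil {
				return ip, nil
			}
			wait := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
			fmt.Printf("attempt %d failed, will retry after %v: %v\n", attempt, wait, err)
			time.Sleep(wait)
			backoff += 100 * time.Millisecond // grow the base pause each round
		}
		return "", fmt.Errorf("timed out after %v waiting for machine to come up", timeout)
	}

	func main() {
		if _, err := waitForIP(2 * time.Second); err != nil {
			fmt.Println(err)
		}
	}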
	I1030 19:46:08.144772  447486 crio.go:462] duration metric: took 1.627547543s to copy over tarball
	I1030 19:46:08.144845  447486 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1030 19:46:11.104192  447486 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.959302647s)
	I1030 19:46:11.104228  447486 crio.go:469] duration metric: took 2.959426051s to extract the tarball
	I1030 19:46:11.104240  447486 ssh_runner.go:146] rm: /preloaded.tar.lz4
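	Because no preloaded images were found in the runtime, the preload tarball is copied to the VM, unpacked into /var with lz4-aware tar, and then removed. The sketch below reproduces that check-then-extract flow by shelling out to the same tar invocation; the paths are placeholders and in the real run everything happens over SSH rather than locally:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// extractPreload unpacks an lz4-compressed image tarball into destDir, mirroring:
	// tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	func extractPreload(tarball, destDir string) error {
		if _, err := os.Stat(tarball); err != nil {
			return fmt.Errorf("existence check for %s: %w", tarball, err)
		}
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", destDir, "-xf", tarball)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("extracting %s: %w", tarball, err)
		}
		return os.Remove(tarball) // free the space once the layers are unpacked
	}

	func main() {
		// Placeholder paths; in the log the tarball is scp'd to /preloaded.tar.lz4 first.
		if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}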
	I1030 19:46:11.146584  447486 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:46:11.183766  447486 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1030 19:46:11.183797  447486 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1030 19:46:11.183889  447486 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:11.183917  447486 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.183932  447486 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:11.183968  447486 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.184087  447486 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.183972  447486 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1030 19:46:11.183969  447486 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.183928  447486 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:11.185976  447486 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:11.186001  447486 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1030 19:46:11.186043  447486 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.186053  447486 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.186046  447486 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:11.185977  447486 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:11.186108  447486 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.186150  447486 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.348134  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.391191  447486 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1030 19:46:11.391327  447486 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.391399  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.396693  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.400062  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.406656  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1030 19:46:11.410534  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.410590  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.441896  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:11.460400  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:11.482465  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.554431  447486 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1030 19:46:11.554480  447486 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.554549  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.610376  447486 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1030 19:46:11.610424  447486 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1030 19:46:11.610471  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.616060  447486 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1030 19:46:11.616104  447486 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.616153  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.616177  447486 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1030 19:46:11.616217  447486 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.616282  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.617473  447486 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1030 19:46:11.617502  447486 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:11.617535  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.652124  447486 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1030 19:46:11.652185  447486 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:11.652228  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.652233  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.652237  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.652331  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1030 19:46:11.652376  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.652433  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.652483  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:11.798844  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1030 19:46:11.798859  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1030 19:46:11.798873  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.798949  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:11.799075  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.799179  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.799182  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:08.303450  446965 pod_ready.go:93] pod "coredns-7c65d6cfc9-tv6kc" in "kube-system" namespace has status "Ready":"True"
	I1030 19:46:08.303482  446965 pod_ready.go:82] duration metric: took 9.009918893s for pod "coredns-7c65d6cfc9-tv6kc" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:08.303498  446965 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:08.312186  446965 pod_ready.go:93] pod "etcd-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:46:08.312213  446965 pod_ready.go:82] duration metric: took 8.706192ms for pod "etcd-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:08.312228  446965 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:10.320161  446965 pod_ready.go:103] pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:10.439107  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:12.439663  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:10.226842  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:10.227315  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:10.227346  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:10.227261  448568 retry.go:31] will retry after 1.165335625s: waiting for machine to come up
	I1030 19:46:11.394231  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:11.394817  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:11.394851  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:11.394763  448568 retry.go:31] will retry after 1.292571083s: waiting for machine to come up
	I1030 19:46:12.688486  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:12.688919  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:12.688965  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:12.688862  448568 retry.go:31] will retry after 1.97645889s: waiting for machine to come up
	I1030 19:46:14.667783  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:14.668245  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:14.668278  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:14.668200  448568 retry.go:31] will retry after 2.020488863s: waiting for machine to come up
	I1030 19:46:11.942258  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:11.942265  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1030 19:46:11.942365  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.942352  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.942421  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.946933  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:12.064951  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1030 19:46:12.067930  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:12.067990  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1030 19:46:12.068057  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1030 19:46:12.068078  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1030 19:46:12.083122  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1030 19:46:12.107265  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1030 19:46:13.402970  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:13.551979  447486 cache_images.go:92] duration metric: took 2.368158873s to LoadCachedImages
	W1030 19:46:13.552080  447486 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
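	Each cached image above is checked by asking the runtime for its stored ID and comparing it with the expected hash; on a mismatch the image is flagged "needs transfer", removed with crictl, and reloaded from the local cache directory. A rough sketch of that comparison step, wrapping the same podman command seen in the log (the expected hash shown is the coredns value quoted above):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// runtimeImageID asks the container runtime for the stored ID of an image,
	// the same probe as: sudo podman image inspect --format {{.Id}} <image>
	func runtimeImageID(image string) (string, error) {
		out, err := exec.Command("sudo", "podman", "image", "inspect",
			"--format", "{{.Id}}", image).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	// needsTransfer reports whether the image must be reloaded from the cache,
	// i.e. it is missing from the runtime or stored under a different hash.
	func needsTransfer(image, wantID string) bool {
		got, err := runtimeImageID(image)
		return err != nil || got != wantID
	}

	func main() {
		image := "registry.k8s.io/coredns:1.7.0"
		want := "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16"
		if needsTransfer(image, want) {
			fmt.Printf("%q needs transfer: not present at expected hash in container runtime\n", image)
		}
	}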
	I1030 19:46:13.552096  447486 kubeadm.go:934] updating node { 192.168.50.250 8443 v1.20.0 crio true true} ...
	I1030 19:46:13.552211  447486 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-516975 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-516975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1030 19:46:13.552276  447486 ssh_runner.go:195] Run: crio config
	I1030 19:46:13.605982  447486 cni.go:84] Creating CNI manager for ""
	I1030 19:46:13.606008  447486 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:46:13.606020  447486 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1030 19:46:13.606049  447486 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.250 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-516975 NodeName:old-k8s-version-516975 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.250"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.250 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1030 19:46:13.606223  447486 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.250
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-516975"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.250
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.250"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
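	The generated kubeadm config above stacks four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) whose networking fields have to agree: the ClusterConfiguration podSubnet and the KubeProxyConfiguration clusterCIDR are both 10.244.0.0/16 here. A small sketch of such a sanity check, using only the standard library and a deliberately naive line scan rather than a YAML parser (the file path is a placeholder):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// fieldValue scans a rendered kubeadm.yaml for `key: value` or `key: "value"`
	// and returns the unquoted value of the first match. This is a naive scan,
	// not a YAML parser, and only handles flat scalar fields.
	func fieldValue(config, key string) string {
		for _, line := range strings.Split(config, "\n") {
			trimmed := strings.TrimSpace(line)
			if strings.HasPrefix(trimmed, key+":") {
				v := strings.TrimSpace(strings.TrimPrefix(trimmed, key+":"))
				return strings.Trim(v, `"`)
			}
		}
		return ""
	}

	func main() {
		data, err := os.ReadFile("kubeadm.yaml") // path is a placeholder
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		podSubnet := fieldValue(string(data), "podSubnet")
		clusterCIDR := fieldValue(string(data), "clusterCIDR")
		if podSubnet != clusterCIDR {
			fmt.Printf("mismatch: podSubnet=%q clusterCIDR=%q\n", podSubnet, clusterCIDR)
			os.Exit(1)
		}
		fmt.Println("pod CIDR consistent:", podSubnet)
	}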
	
	I1030 19:46:13.606299  447486 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1030 19:46:13.616954  447486 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 19:46:13.617034  447486 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1030 19:46:13.627440  447486 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1030 19:46:13.644821  447486 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 19:46:13.662070  447486 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1030 19:46:13.679198  447486 ssh_runner.go:195] Run: grep 192.168.50.250	control-plane.minikube.internal$ /etc/hosts
	I1030 19:46:13.682992  447486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.250	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:46:13.697879  447486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:46:13.819975  447486 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:46:13.838669  447486 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975 for IP: 192.168.50.250
	I1030 19:46:13.838695  447486 certs.go:194] generating shared ca certs ...
	I1030 19:46:13.838716  447486 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:46:13.838888  447486 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 19:46:13.838946  447486 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 19:46:13.838962  447486 certs.go:256] generating profile certs ...
	I1030 19:46:13.839064  447486 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/client.key
	I1030 19:46:13.839149  447486 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/apiserver.key.685bdf3e
	I1030 19:46:13.839208  447486 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/proxy-client.key
	I1030 19:46:13.839375  447486 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem (1338 bytes)
	W1030 19:46:13.839429  447486 certs.go:480] ignoring /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144_empty.pem, impossibly tiny 0 bytes
	I1030 19:46:13.839442  447486 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 19:46:13.839476  447486 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 19:46:13.839509  447486 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 19:46:13.839545  447486 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 19:46:13.839609  447486 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:46:13.840381  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 19:46:13.868947  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 19:46:13.923848  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 19:46:13.973167  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 19:46:14.009333  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1030 19:46:14.042397  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1030 19:46:14.073927  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 19:46:14.109209  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1030 19:46:14.135708  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /usr/share/ca-certificates/3891442.pem (1708 bytes)
	I1030 19:46:14.162145  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 19:46:14.186176  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem --> /usr/share/ca-certificates/389144.pem (1338 bytes)
	I1030 19:46:14.210362  447486 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1030 19:46:14.228727  447486 ssh_runner.go:195] Run: openssl version
	I1030 19:46:14.234436  447486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/389144.pem && ln -fs /usr/share/ca-certificates/389144.pem /etc/ssl/certs/389144.pem"
	I1030 19:46:14.245497  447486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/389144.pem
	I1030 19:46:14.250026  447486 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 19:46:14.250077  447486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/389144.pem
	I1030 19:46:14.255727  447486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/389144.pem /etc/ssl/certs/51391683.0"
	I1030 19:46:14.266674  447486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3891442.pem && ln -fs /usr/share/ca-certificates/3891442.pem /etc/ssl/certs/3891442.pem"
	I1030 19:46:14.277813  447486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3891442.pem
	I1030 19:46:14.282378  447486 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 19:46:14.282435  447486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3891442.pem
	I1030 19:46:14.288338  447486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3891442.pem /etc/ssl/certs/3ec20f2e.0"
	I1030 19:46:14.300057  447486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 19:46:14.312295  447486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:46:14.317488  447486 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:46:14.317555  447486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:46:14.323518  447486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
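	For each CA bundle the run computes the OpenSSL subject hash and links the certificate as /etc/ssl/certs/<hash>.0 so TLS clients can find it. The sketch below wraps the same two steps (openssl x509 -hash -noout plus a forced symlink); it assumes the openssl binary is on PATH and write access to the target directory, and the paths are placeholders:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCACert hashes certPath with `openssl x509 -hash -noout -in ...` and
	// symlinks it as <hash>.0 inside certsDir, the layout OpenSSL uses to look
	// up trusted CAs.
	func installCACert(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // replace any stale link, like ln -fs
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}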
	I1030 19:46:14.335182  447486 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 19:46:14.339998  447486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1030 19:46:14.346145  447486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1030 19:46:14.352474  447486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1030 19:46:14.358687  447486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1030 19:46:14.364275  447486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1030 19:46:14.370038  447486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
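	Each control-plane certificate is then verified to still be valid for at least another day (openssl x509 -checkend 86400, i.e. 24h). The same check can be done natively with crypto/x509, as in the sketch below; the certificate path is a placeholder:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"errors"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path will expire
	// before now+window, the equivalent of `openssl x509 -checkend <seconds>`.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, errors.New("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		// 86400 seconds = 24h, matching the -checkend 86400 calls in the log.
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if soon {
			fmt.Println("certificate expires within 24h, regeneration needed")
		}
	}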
	I1030 19:46:14.376051  447486 kubeadm.go:392] StartCluster: {Name:old-k8s-version-516975 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-516975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:46:14.376144  447486 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1030 19:46:14.376187  447486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:46:14.423395  447486 cri.go:89] found id: ""
	I1030 19:46:14.423477  447486 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1030 19:46:14.435404  447486 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1030 19:46:14.435485  447486 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1030 19:46:14.435558  447486 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1030 19:46:14.448035  447486 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1030 19:46:14.448911  447486 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-516975" does not appear in /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 19:46:14.449557  447486 kubeconfig.go:62] /home/jenkins/minikube-integration/19883-381834/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-516975" cluster setting kubeconfig missing "old-k8s-version-516975" context setting]
	I1030 19:46:14.450419  447486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/kubeconfig: {Name:mkad9ac9beb9742fc1a420e6d4412db8b65962e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:46:14.452252  447486 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1030 19:46:14.462634  447486 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.250
	I1030 19:46:14.462676  447486 kubeadm.go:1160] stopping kube-system containers ...
	I1030 19:46:14.462693  447486 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1030 19:46:14.462750  447486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:46:14.508286  447486 cri.go:89] found id: ""
	I1030 19:46:14.508380  447486 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1030 19:46:14.527996  447486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:46:14.539011  447486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:46:14.539037  447486 kubeadm.go:157] found existing configuration files:
	
	I1030 19:46:14.539094  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 19:46:14.550159  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:46:14.550243  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:46:14.561350  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 19:46:14.571353  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:46:14.571430  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:46:14.584480  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 19:46:14.598307  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:46:14.598400  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:46:14.611632  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 19:46:14.621644  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:46:14.621705  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1030 19:46:14.632161  447486 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 19:46:14.642295  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:14.783130  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:15.694839  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:15.923329  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:16.052124  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:16.143607  447486 api_server.go:52] waiting for apiserver process to appear ...
	I1030 19:46:16.143710  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:16.643943  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
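	After the kubeadm init phases, the restart path waits for the kube-apiserver process to appear, re-running pgrep every 500ms as seen above and in the later log lines. A sketch of that polling loop using the same pgrep invocation (the timeout value is illustrative):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
		"time"
	)

	// waitForAPIServer polls `sudo pgrep -xnf kube-apiserver.*minikube.*` every
	// 500ms until it returns a PID or the timeout elapses.
	func waitForAPIServer(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil {
				return strings.TrimSpace(string(out)), nil // pgrep exits 0 once a match exists
			}
			time.Sleep(500 * time.Millisecond)
		}
		return "", fmt.Errorf("apiserver process never appeared within %v", timeout)
	}

	func main() {
		pid, err := waitForAPIServer(4 * time.Minute) // illustrative timeout
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("kube-apiserver pid:", pid)
	}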
	I1030 19:46:13.245727  446965 pod_ready.go:103] pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:13.702440  446965 pod_ready.go:93] pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:46:13.702472  446965 pod_ready.go:82] duration metric: took 5.390235543s for pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:13.702497  446965 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:13.948519  446965 pod_ready.go:93] pod "kube-controller-manager-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:46:13.948549  446965 pod_ready.go:82] duration metric: took 246.042214ms for pod "kube-controller-manager-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:13.948565  446965 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-qwjr9" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:13.958077  446965 pod_ready.go:93] pod "kube-proxy-qwjr9" in "kube-system" namespace has status "Ready":"True"
	I1030 19:46:13.958108  446965 pod_ready.go:82] duration metric: took 9.534813ms for pod "kube-proxy-qwjr9" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:13.958122  446965 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:13.974906  446965 pod_ready.go:93] pod "kube-scheduler-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:46:13.974931  446965 pod_ready.go:82] duration metric: took 16.800547ms for pod "kube-scheduler-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:13.974944  446965 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:15.982433  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:17.983261  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:14.440176  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:16.939769  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:16.690435  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:16.690908  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:16.690997  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:16.690904  448568 retry.go:31] will retry after 2.729556206s: waiting for machine to come up
	I1030 19:46:19.423740  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:19.424246  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:19.424271  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:19.424195  448568 retry.go:31] will retry after 2.822049517s: waiting for machine to come up
	I1030 19:46:17.144678  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:17.644772  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:18.144037  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:18.644437  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:19.144273  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:19.643801  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:20.144200  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:20.644764  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:21.143898  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:21.643960  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:20.481213  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:22.981619  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:19.438946  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:21.938706  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:22.247395  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:22.247840  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:22.247869  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:22.247813  448568 retry.go:31] will retry after 5.243633747s: waiting for machine to come up
	I1030 19:46:22.144625  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:22.644446  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:23.144207  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:23.644001  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:24.143787  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:24.644166  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:25.144397  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:25.644654  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:26.144214  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:26.644275  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:25.482032  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:27.981111  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:23.940402  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:26.439369  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:27.494630  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.495107  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has current primary IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.495146  446736 main.go:141] libmachine: (no-preload-960512) Found IP for machine: 192.168.72.132
	I1030 19:46:27.495159  446736 main.go:141] libmachine: (no-preload-960512) Reserving static IP address...
	I1030 19:46:27.495588  446736 main.go:141] libmachine: (no-preload-960512) Reserved static IP address: 192.168.72.132
	I1030 19:46:27.495612  446736 main.go:141] libmachine: (no-preload-960512) Waiting for SSH to be available...
	I1030 19:46:27.495634  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "no-preload-960512", mac: "52:54:00:71:5b:b2", ip: "192.168.72.132"} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:27.495664  446736 main.go:141] libmachine: (no-preload-960512) DBG | skip adding static IP to network mk-no-preload-960512 - found existing host DHCP lease matching {name: "no-preload-960512", mac: "52:54:00:71:5b:b2", ip: "192.168.72.132"}
	I1030 19:46:27.495678  446736 main.go:141] libmachine: (no-preload-960512) DBG | Getting to WaitForSSH function...
	I1030 19:46:27.497679  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.498051  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:27.498083  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.498231  446736 main.go:141] libmachine: (no-preload-960512) DBG | Using SSH client type: external
	I1030 19:46:27.498273  446736 main.go:141] libmachine: (no-preload-960512) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa (-rw-------)
	I1030 19:46:27.498316  446736 main.go:141] libmachine: (no-preload-960512) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.132 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 19:46:27.498344  446736 main.go:141] libmachine: (no-preload-960512) DBG | About to run SSH command:
	I1030 19:46:27.498355  446736 main.go:141] libmachine: (no-preload-960512) DBG | exit 0
	I1030 19:46:27.626476  446736 main.go:141] libmachine: (no-preload-960512) DBG | SSH cmd err, output: <nil>: 
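	The WaitForSSH step above shells out to the system ssh client with host-key checking disabled and simply runs `exit 0`; a nil error means the machine is reachable. The sketch below reproduces that probe with os/exec and the option set quoted in the log; the user, address, and key path are placeholders:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// sshReady runs `exit 0` on the target via the external ssh binary with the
	// non-interactive options from the log; a nil error means SSH is available.
	func sshReady(user, addr, keyPath string) error {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "PasswordAuthentication=no",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			fmt.Sprintf("%s@%s", user, addr),
			"exit 0",
		}
		return exec.Command("ssh", args...).Run()
	}

	func main() {
		// Placeholder credentials; the log uses the per-machine id_rsa under .minikube.
		for {
			if err := sshReady("docker", "192.168.72.132", "/path/to/id_rsa"); err == nil {
				fmt.Println("SSH is available")
				return
			}
			time.Sleep(2 * time.Second)
		}
	}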
	I1030 19:46:27.626850  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetConfigRaw
	I1030 19:46:27.627519  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetIP
	I1030 19:46:27.629913  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.630288  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:27.630326  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.630561  446736 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/config.json ...
	I1030 19:46:27.630778  446736 machine.go:93] provisionDockerMachine start ...
	I1030 19:46:27.630801  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:27.631021  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:27.633457  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.633849  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:27.633880  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.634032  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:27.634200  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:27.634393  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:27.634564  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:27.634741  446736 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:27.634940  446736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I1030 19:46:27.634952  446736 main.go:141] libmachine: About to run SSH command:
	hostname
	I1030 19:46:27.743135  446736 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1030 19:46:27.743167  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetMachineName
	I1030 19:46:27.743475  446736 buildroot.go:166] provisioning hostname "no-preload-960512"
	I1030 19:46:27.743516  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetMachineName
	I1030 19:46:27.743717  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:27.746369  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.746726  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:27.746758  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.746928  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:27.747114  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:27.747266  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:27.747380  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:27.747509  446736 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:27.747740  446736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I1030 19:46:27.747759  446736 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-960512 && echo "no-preload-960512" | sudo tee /etc/hostname
	I1030 19:46:27.872871  446736 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-960512
	
	I1030 19:46:27.872899  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:27.875533  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.875867  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:27.875908  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.876072  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:27.876274  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:27.876546  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:27.876690  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:27.876851  446736 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:27.877082  446736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I1030 19:46:27.877099  446736 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-960512' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-960512/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-960512' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 19:46:27.999551  446736 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 19:46:27.999617  446736 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 19:46:27.999654  446736 buildroot.go:174] setting up certificates
	I1030 19:46:27.999667  446736 provision.go:84] configureAuth start
	I1030 19:46:27.999689  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetMachineName
	I1030 19:46:27.999998  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetIP
	I1030 19:46:28.002874  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.003285  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.003317  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.003474  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:28.005987  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.006376  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.006418  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.006545  446736 provision.go:143] copyHostCerts
	I1030 19:46:28.006620  446736 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem, removing ...
	I1030 19:46:28.006639  446736 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 19:46:28.006707  446736 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 19:46:28.006846  446736 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem, removing ...
	I1030 19:46:28.006859  446736 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 19:46:28.006898  446736 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 19:46:28.006983  446736 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem, removing ...
	I1030 19:46:28.006993  446736 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 19:46:28.007023  446736 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 19:46:28.007102  446736 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.no-preload-960512 san=[127.0.0.1 192.168.72.132 localhost minikube no-preload-960512]
	I1030 19:46:28.317424  446736 provision.go:177] copyRemoteCerts
	I1030 19:46:28.317502  446736 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 19:46:28.317537  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:28.320089  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.320387  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.320419  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.320564  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:28.320776  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.320963  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:28.321116  446736 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa Username:docker}
	I1030 19:46:28.409344  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1030 19:46:28.434874  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 19:46:28.459903  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1030 19:46:28.486949  446736 provision.go:87] duration metric: took 487.261556ms to configureAuth
	I1030 19:46:28.486981  446736 buildroot.go:189] setting minikube options for container-runtime
	I1030 19:46:28.487219  446736 config.go:182] Loaded profile config "no-preload-960512": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:46:28.487322  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:28.489873  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.490180  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.490223  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.490349  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:28.490561  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.490719  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.490827  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:28.491003  446736 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:28.491199  446736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I1030 19:46:28.491216  446736 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 19:46:28.727045  446736 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 19:46:28.727081  446736 machine.go:96] duration metric: took 1.096287528s to provisionDockerMachine
	I1030 19:46:28.727095  446736 start.go:293] postStartSetup for "no-preload-960512" (driver="kvm2")
	I1030 19:46:28.727106  446736 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 19:46:28.727125  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:28.727460  446736 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 19:46:28.727490  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:28.730071  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.730445  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.730479  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.730652  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:28.730858  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.731010  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:28.731197  446736 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa Username:docker}
	I1030 19:46:28.817529  446736 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 19:46:28.822263  446736 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 19:46:28.822299  446736 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 19:46:28.822394  446736 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 19:46:28.822517  446736 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 19:46:28.822647  446736 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 19:46:28.832488  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:46:28.858165  446736 start.go:296] duration metric: took 131.055053ms for postStartSetup
	I1030 19:46:28.858211  446736 fix.go:56] duration metric: took 23.84652817s for fixHost
	I1030 19:46:28.858235  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:28.861136  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.861480  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.861513  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.861819  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:28.862059  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.862224  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.862373  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:28.862582  446736 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:28.862786  446736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I1030 19:46:28.862797  446736 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 19:46:28.975448  446736 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730317588.951806388
	
	I1030 19:46:28.975479  446736 fix.go:216] guest clock: 1730317588.951806388
	I1030 19:46:28.975489  446736 fix.go:229] Guest: 2024-10-30 19:46:28.951806388 +0000 UTC Remote: 2024-10-30 19:46:28.858215114 +0000 UTC m=+358.930371017 (delta=93.591274ms)
	I1030 19:46:28.975521  446736 fix.go:200] guest clock delta is within tolerance: 93.591274ms
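	The tolerance check above is plain arithmetic on the two timestamps: guest 1730317588.951806388 minus remote 1730317588.858215114 is 0.093591274 s, i.e. the 93.591274ms delta that was logged, which is small enough that the guest clock is left untouched.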
	I1030 19:46:28.975529  446736 start.go:83] releasing machines lock for "no-preload-960512", held for 23.963879546s
	I1030 19:46:28.975555  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:28.975849  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetIP
	I1030 19:46:28.978813  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.979310  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.979341  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.979608  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:28.980197  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:28.980429  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:28.980522  446736 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 19:46:28.980567  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:28.980682  446736 ssh_runner.go:195] Run: cat /version.json
	I1030 19:46:28.980710  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:28.984058  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.984208  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.984410  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.984435  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.984582  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:28.984613  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.984636  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.984782  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.984798  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:28.984966  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:28.984974  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.985121  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:28.985187  446736 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa Username:docker}
	I1030 19:46:28.985260  446736 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa Username:docker}
	I1030 19:46:29.063734  446736 ssh_runner.go:195] Run: systemctl --version
	I1030 19:46:29.087821  446736 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 19:46:29.236289  446736 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 19:46:29.242997  446736 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 19:46:29.243088  446736 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 19:46:29.260802  446736 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 19:46:29.260836  446736 start.go:495] detecting cgroup driver to use...
	I1030 19:46:29.260930  446736 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 19:46:29.279572  446736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 19:46:29.293359  446736 docker.go:217] disabling cri-docker service (if available) ...
	I1030 19:46:29.293423  446736 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 19:46:29.306417  446736 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 19:46:29.319617  446736 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 19:46:29.440023  446736 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 19:46:29.585541  446736 docker.go:233] disabling docker service ...
	I1030 19:46:29.585630  446736 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 19:46:29.600459  446736 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 19:46:29.613611  446736 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 19:46:29.752666  446736 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 19:46:29.880152  446736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 19:46:29.893912  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 19:46:29.913099  446736 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1030 19:46:29.913160  446736 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:29.923800  446736 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 19:46:29.923882  446736 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:29.934880  446736 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:29.946088  446736 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:29.956644  446736 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 19:46:29.967199  446736 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:29.978863  446736 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:29.996225  446736 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
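	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with the pause image pinned to registry.k8s.io/pause:3.10, cgroup_manager set to "cgroupfs", conmon_cgroup set to "pod", and net.ipv4.ip_unprivileged_port_start=0 added under default_sysctls. A manual spot-check on the node would look roughly like this (illustrative only, not a command the test itself runs):
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf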
	I1030 19:46:30.006604  446736 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 19:46:30.015954  446736 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 19:46:30.016017  446736 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 19:46:30.029194  446736 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 19:46:30.041316  446736 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:46:30.161438  446736 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1030 19:46:30.257137  446736 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 19:46:30.257209  446736 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 19:46:30.261981  446736 start.go:563] Will wait 60s for crictl version
	I1030 19:46:30.262052  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:30.266275  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 19:46:30.305128  446736 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1030 19:46:30.305228  446736 ssh_runner.go:195] Run: crio --version
	I1030 19:46:30.335445  446736 ssh_runner.go:195] Run: crio --version
	I1030 19:46:30.367026  446736 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
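	The runtime checks leading up to the line above amount to waiting for the CRI-O socket and then querying the runtime; done by hand on the guest it would look roughly like this (a sketch of the equivalent commands visible in the log, not minikube's exact code path):
	sudo stat /var/run/crio/crio.sock    # socket must exist before crictl is queried
	sudo /usr/bin/crictl version         # reports RuntimeName: cri-o, RuntimeVersion: 1.29.1
	crio --version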
	I1030 19:46:27.143768  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:27.644294  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:28.143819  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:28.643783  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:29.144405  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:29.643941  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:30.144680  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:30.644787  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:31.143873  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:31.643857  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:29.982162  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:32.480878  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:28.939126  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:30.939780  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:30.368355  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetIP
	I1030 19:46:30.371260  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:30.371651  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:30.371679  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:30.371922  446736 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1030 19:46:30.376282  446736 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
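	The bash one-liner above refreshes the host.minikube.internal entry: grep -v drops any stale line from /etc/hosts, the new "192.168.72.1	host.minikube.internal" mapping is appended, and the temp file is copied back over /etc/hosts with sudo, so the guest can always resolve the host-side gateway by name.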
	I1030 19:46:30.389078  446736 kubeadm.go:883] updating cluster {Name:no-preload-960512 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-960512 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.132 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1030 19:46:30.389193  446736 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 19:46:30.389228  446736 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:46:30.423375  446736 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1030 19:46:30.423402  446736 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1030 19:46:30.423508  446736 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:30.423511  446736 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1030 19:46:30.423562  446736 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1030 19:46:30.423578  446736 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1030 19:46:30.423595  446736 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1030 19:46:30.423536  446736 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1030 19:46:30.423511  446736 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1030 19:46:30.423634  446736 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1030 19:46:30.424979  446736 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1030 19:46:30.424988  446736 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1030 19:46:30.424996  446736 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1030 19:46:30.424987  446736 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:30.425021  446736 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1030 19:46:30.425036  446736 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1030 19:46:30.425029  446736 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1030 19:46:30.425061  446736 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1030 19:46:30.612665  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1030 19:46:30.618602  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1030 19:46:30.636563  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1030 19:46:30.680808  446736 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1030 19:46:30.680858  446736 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1030 19:46:30.680911  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:30.749318  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1030 19:46:30.750405  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1030 19:46:30.751514  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1030 19:46:30.752746  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1030 19:46:30.768614  446736 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1030 19:46:30.768663  446736 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1030 19:46:30.768714  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:30.768723  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1030 19:46:30.881778  446736 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1030 19:46:30.881811  446736 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1030 19:46:30.881821  446736 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1030 19:46:30.881844  446736 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1030 19:46:30.881862  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:30.881883  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:30.884827  446736 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1030 19:46:30.884861  446736 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1030 19:46:30.884901  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:30.891812  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1030 19:46:30.891882  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1030 19:46:30.891907  446736 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1030 19:46:30.891940  446736 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1030 19:46:30.891981  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:30.891986  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1030 19:46:30.892142  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1030 19:46:30.893781  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1030 19:46:30.992346  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1030 19:46:30.992372  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1030 19:46:30.992404  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1030 19:46:30.995602  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1030 19:46:30.995730  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1030 19:46:30.995786  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1030 19:46:31.123892  446736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1030 19:46:31.123996  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1030 19:46:31.124018  446736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1030 19:46:31.132177  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1030 19:46:31.132209  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1030 19:46:31.132311  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1030 19:46:31.132335  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1030 19:46:31.220011  446736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1030 19:46:31.220043  446736 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1030 19:46:31.220100  446736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1030 19:46:31.220224  446736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1030 19:46:31.220329  446736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1030 19:46:31.262583  446736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1030 19:46:31.262685  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1030 19:46:31.262698  446736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1030 19:46:31.269015  446736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1030 19:46:31.269117  446736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1030 19:46:31.269710  446736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1030 19:46:31.269793  446736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1030 19:46:32.667341  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:33.216743  446736 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (1.99661544s)
	I1030 19:46:33.216787  446736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1030 19:46:33.216787  446736 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.996433716s)
	I1030 19:46:33.216820  446736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1030 19:46:33.216829  446736 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1030 19:46:33.216840  446736 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (1.95412356s)
	I1030 19:46:33.216872  446736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1030 19:46:33.216884  446736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1030 19:46:33.216925  446736 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2: (1.954216284s)
	I1030 19:46:33.216964  446736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1030 19:46:33.216989  446736 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (1.947854262s)
	I1030 19:46:33.217014  446736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1030 19:46:33.217027  446736 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2: (1.947220506s)
	I1030 19:46:33.217040  446736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1030 19:46:33.217059  446736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1030 19:46:33.217140  446736 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1030 19:46:33.217178  446736 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:33.217222  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:32.144229  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:32.644079  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:33.144028  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:33.643950  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:34.143888  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:34.643861  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:35.144210  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:35.644677  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:36.144712  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:36.644549  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:34.481488  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:36.982429  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:33.438659  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:35.439072  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:37.440028  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:35.577178  446736 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (2.360267806s)
	I1030 19:46:35.577219  446736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1030 19:46:35.577227  446736 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2: (2.360144583s)
	I1030 19:46:35.577243  446736 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1030 19:46:35.577252  446736 ssh_runner.go:235] Completed: which crictl: (2.360017291s)
	I1030 19:46:35.577259  446736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1030 19:46:35.577305  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:35.577309  446736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1030 19:46:35.615490  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:39.492071  446736 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.914649003s)
	I1030 19:46:39.492116  446736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1030 19:46:39.492142  446736 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.876615301s)
	I1030 19:46:39.492211  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:39.492148  446736 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1030 19:46:39.492295  446736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1030 19:46:39.535258  446736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1030 19:46:39.535417  446736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1030 19:46:37.144681  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:37.643833  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:38.143783  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:38.644359  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:39.144745  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:39.644625  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:40.144535  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:40.643881  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:41.144754  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:41.644070  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:39.302627  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:41.480981  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:39.940272  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:42.439827  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:41.566095  446736 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.073767908s)
	I1030 19:46:41.566140  446736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1030 19:46:41.566167  446736 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1030 19:46:41.566169  446736 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.030723752s)
	I1030 19:46:41.566210  446736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1030 19:46:41.566224  446736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1030 19:46:43.628473  446736 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.06223599s)
	I1030 19:46:43.628500  446736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1030 19:46:43.628525  446736 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1030 19:46:43.628570  446736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1030 19:46:42.144672  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:42.644533  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:43.144320  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:43.644574  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:44.144465  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:44.644428  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:45.143785  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:45.643767  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:46.144467  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:46.644496  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:43.481495  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:45.481844  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:47.982318  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:44.940061  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:47.439131  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:45.079808  446736 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.451207821s)
	I1030 19:46:45.079843  446736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1030 19:46:45.079870  446736 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1030 19:46:45.079918  446736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1030 19:46:46.026472  446736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1030 19:46:46.026538  446736 cache_images.go:123] Successfully loaded all cached images
	I1030 19:46:46.026547  446736 cache_images.go:92] duration metric: took 15.603128567s to LoadCachedImages
	I1030 19:46:46.026562  446736 kubeadm.go:934] updating node { 192.168.72.132 8443 v1.31.2 crio true true} ...
	I1030 19:46:46.026722  446736 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-960512 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.132
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-960512 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1030 19:46:46.026819  446736 ssh_runner.go:195] Run: crio config
	I1030 19:46:46.080342  446736 cni.go:84] Creating CNI manager for ""
	I1030 19:46:46.080367  446736 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:46:46.080376  446736 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1030 19:46:46.080399  446736 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.132 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-960512 NodeName:no-preload-960512 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.132"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.132 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1030 19:46:46.080574  446736 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.132
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-960512"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.132"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.132"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1030 19:46:46.080645  446736 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1030 19:46:46.091323  446736 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 19:46:46.091400  446736 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1030 19:46:46.100320  446736 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1030 19:46:46.117369  446736 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 19:46:46.133667  446736 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1030 19:46:46.157251  446736 ssh_runner.go:195] Run: grep 192.168.72.132	control-plane.minikube.internal$ /etc/hosts
	I1030 19:46:46.161543  446736 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.132	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:46:46.173451  446736 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:46:46.303532  446736 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:46:46.321855  446736 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512 for IP: 192.168.72.132
	I1030 19:46:46.321883  446736 certs.go:194] generating shared ca certs ...
	I1030 19:46:46.321905  446736 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:46:46.322108  446736 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 19:46:46.322171  446736 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 19:46:46.322189  446736 certs.go:256] generating profile certs ...
	I1030 19:46:46.322294  446736 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/client.key
	I1030 19:46:46.322381  446736 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/apiserver.key.378d6029
	I1030 19:46:46.322436  446736 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/proxy-client.key
	I1030 19:46:46.322609  446736 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem (1338 bytes)
	W1030 19:46:46.322649  446736 certs.go:480] ignoring /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144_empty.pem, impossibly tiny 0 bytes
	I1030 19:46:46.322661  446736 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 19:46:46.322692  446736 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 19:46:46.322727  446736 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 19:46:46.322756  446736 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 19:46:46.322812  446736 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:46:46.323679  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 19:46:46.362339  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 19:46:46.396270  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 19:46:46.443482  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 19:46:46.468142  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1030 19:46:46.507418  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1030 19:46:46.534091  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 19:46:46.557105  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1030 19:46:46.579880  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 19:46:46.602665  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem --> /usr/share/ca-certificates/389144.pem (1338 bytes)
	I1030 19:46:46.625853  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /usr/share/ca-certificates/3891442.pem (1708 bytes)
	I1030 19:46:46.651685  446736 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1030 19:46:46.670898  446736 ssh_runner.go:195] Run: openssl version
	I1030 19:46:46.677083  446736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3891442.pem && ln -fs /usr/share/ca-certificates/3891442.pem /etc/ssl/certs/3891442.pem"
	I1030 19:46:46.688814  446736 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3891442.pem
	I1030 19:46:46.693349  446736 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 19:46:46.693399  446736 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3891442.pem
	I1030 19:46:46.699221  446736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3891442.pem /etc/ssl/certs/3ec20f2e.0"
	I1030 19:46:46.710200  446736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 19:46:46.721001  446736 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:46:46.725283  446736 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:46:46.725343  446736 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:46:46.730798  446736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 19:46:46.741915  446736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/389144.pem && ln -fs /usr/share/ca-certificates/389144.pem /etc/ssl/certs/389144.pem"
	I1030 19:46:46.752767  446736 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/389144.pem
	I1030 19:46:46.757109  446736 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 19:46:46.757150  446736 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/389144.pem
	I1030 19:46:46.762844  446736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/389144.pem /etc/ssl/certs/51391683.0"
	I1030 19:46:46.773796  446736 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 19:46:46.778156  446736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1030 19:46:46.784099  446736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1030 19:46:46.789960  446736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1030 19:46:46.796056  446736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1030 19:46:46.801880  446736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1030 19:46:46.807680  446736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1030 19:46:46.813574  446736 kubeadm.go:392] StartCluster: {Name:no-preload-960512 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-960512 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.132 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:46:46.813694  446736 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1030 19:46:46.813735  446736 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:46:46.856225  446736 cri.go:89] found id: ""
	I1030 19:46:46.856309  446736 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1030 19:46:46.866696  446736 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1030 19:46:46.866721  446736 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1030 19:46:46.866774  446736 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1030 19:46:46.876622  446736 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1030 19:46:46.877777  446736 kubeconfig.go:125] found "no-preload-960512" server: "https://192.168.72.132:8443"
	I1030 19:46:46.880116  446736 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1030 19:46:46.889710  446736 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.132
	I1030 19:46:46.889743  446736 kubeadm.go:1160] stopping kube-system containers ...
	I1030 19:46:46.889761  446736 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1030 19:46:46.889837  446736 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:46:46.927109  446736 cri.go:89] found id: ""
	I1030 19:46:46.927177  446736 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1030 19:46:46.944519  446736 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:46:46.954607  446736 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:46:46.954626  446736 kubeadm.go:157] found existing configuration files:
	
	I1030 19:46:46.954669  446736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 19:46:46.963987  446736 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:46:46.964086  446736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:46:46.973787  446736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 19:46:46.983447  446736 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:46:46.983496  446736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:46:46.993101  446736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 19:46:47.003713  446736 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:46:47.003773  446736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:46:47.013162  446736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 19:46:47.022411  446736 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:46:47.022479  446736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1030 19:46:47.031878  446736 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 19:46:47.041616  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:47.156846  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:48.637250  446736 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.480364831s)
	I1030 19:46:48.637284  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:48.836676  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:48.908664  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:48.987298  446736 api_server.go:52] waiting for apiserver process to appear ...
	I1030 19:46:48.987411  446736 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:49.488330  446736 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:47.143932  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:47.644228  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:48.144124  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:48.643923  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:49.144466  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:49.643968  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:50.144811  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:50.643785  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:51.144372  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:51.644019  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:49.983127  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:52.482250  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:49.939257  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:52.439840  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:49.988463  446736 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:50.024092  446736 api_server.go:72] duration metric: took 1.036791371s to wait for apiserver process to appear ...
	I1030 19:46:50.024127  446736 api_server.go:88] waiting for apiserver healthz status ...
	I1030 19:46:50.024155  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:50.024711  446736 api_server.go:269] stopped: https://192.168.72.132:8443/healthz: Get "https://192.168.72.132:8443/healthz": dial tcp 192.168.72.132:8443: connect: connection refused
	I1030 19:46:50.524543  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:52.757497  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1030 19:46:52.757537  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1030 19:46:52.757558  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:52.847598  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:46:52.847638  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:46:53.024885  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:53.030717  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:46:53.030749  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:46:53.524384  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:53.531420  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:46:53.531459  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:46:54.025006  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:54.030512  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:46:54.030545  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:46:54.525157  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:54.529426  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:46:54.529453  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:46:55.025276  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:55.029608  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:46:55.029634  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:46:55.525041  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:55.529303  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:46:55.529339  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:46:56.024906  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:56.029520  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 200:
	ok
	I1030 19:46:56.035579  446736 api_server.go:141] control plane version: v1.31.2
	I1030 19:46:56.035609  446736 api_server.go:131] duration metric: took 6.011468992s to wait for apiserver health ...
	I1030 19:46:56.035619  446736 cni.go:84] Creating CNI manager for ""
	I1030 19:46:56.035625  446736 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:46:56.037524  446736 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1030 19:46:52.144732  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:52.644528  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:53.144074  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:53.643889  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:54.143976  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:54.644535  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:55.144783  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:55.644114  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:56.144728  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:56.643846  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:56.038963  446736 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1030 19:46:56.050330  446736 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1030 19:46:56.069509  446736 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 19:46:56.079237  446736 system_pods.go:59] 8 kube-system pods found
	I1030 19:46:56.079268  446736 system_pods.go:61] "coredns-7c65d6cfc9-6cdl4" [95a0a01a-b0ea-4e0c-ac44-b7f7a486e4e0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1030 19:46:56.079275  446736 system_pods.go:61] "etcd-no-preload-960512" [481cc4bc-fe2e-48fd-a486-574daf879096] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1030 19:46:56.079283  446736 system_pods.go:61] "kube-apiserver-no-preload-960512" [3662715f-dd2b-4522-aed3-3857e1104b8c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1030 19:46:56.079288  446736 system_pods.go:61] "kube-controller-manager-no-preload-960512" [cb6045a8-1703-4693-90bf-81c7c990cb38] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1030 19:46:56.079294  446736 system_pods.go:61] "kube-proxy-fxqqc" [58db3fab-21e3-41b7-99f9-46ba3081db97] Running
	I1030 19:46:56.079299  446736 system_pods.go:61] "kube-scheduler-no-preload-960512" [6e293c25-ba96-4213-b107-236a1b828918] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1030 19:46:56.079304  446736 system_pods.go:61] "metrics-server-6867b74b74-72bb5" [7734d879-b974-42fd-9610-7e81ee6cbc13] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:46:56.079307  446736 system_pods.go:61] "storage-provisioner" [d4637a77-26ab-4013-a705-08317c00dd3b] Running
	I1030 19:46:56.079313  446736 system_pods.go:74] duration metric: took 9.785027ms to wait for pod list to return data ...
	I1030 19:46:56.079327  446736 node_conditions.go:102] verifying NodePressure condition ...
	I1030 19:46:56.082617  446736 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 19:46:56.082644  446736 node_conditions.go:123] node cpu capacity is 2
	I1030 19:46:56.082658  446736 node_conditions.go:105] duration metric: took 3.325744ms to run NodePressure ...
	I1030 19:46:56.082680  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:56.353123  446736 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1030 19:46:56.357714  446736 kubeadm.go:739] kubelet initialised
	I1030 19:46:56.357740  446736 kubeadm.go:740] duration metric: took 4.581883ms waiting for restarted kubelet to initialise ...
	I1030 19:46:56.357755  446736 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:46:56.362687  446736 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:56.367124  446736 pod_ready.go:98] node "no-preload-960512" hosting pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.367153  446736 pod_ready.go:82] duration metric: took 4.443081ms for pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace to be "Ready" ...
	E1030 19:46:56.367165  446736 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-960512" hosting pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.367180  446736 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:56.371747  446736 pod_ready.go:98] node "no-preload-960512" hosting pod "etcd-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.371774  446736 pod_ready.go:82] duration metric: took 4.580967ms for pod "etcd-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	E1030 19:46:56.371785  446736 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-960512" hosting pod "etcd-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.371794  446736 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:56.375687  446736 pod_ready.go:98] node "no-preload-960512" hosting pod "kube-apiserver-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.375704  446736 pod_ready.go:82] duration metric: took 3.901023ms for pod "kube-apiserver-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	E1030 19:46:56.375712  446736 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-960512" hosting pod "kube-apiserver-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.375718  446736 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:56.472995  446736 pod_ready.go:98] node "no-preload-960512" hosting pod "kube-controller-manager-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.473036  446736 pod_ready.go:82] duration metric: took 97.300344ms for pod "kube-controller-manager-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	E1030 19:46:56.473047  446736 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-960512" hosting pod "kube-controller-manager-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.473056  446736 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-fxqqc" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:56.873717  446736 pod_ready.go:98] node "no-preload-960512" hosting pod "kube-proxy-fxqqc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.873749  446736 pod_ready.go:82] duration metric: took 400.680615ms for pod "kube-proxy-fxqqc" in "kube-system" namespace to be "Ready" ...
	E1030 19:46:56.873759  446736 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-960512" hosting pod "kube-proxy-fxqqc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.873765  446736 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:57.273361  446736 pod_ready.go:98] node "no-preload-960512" hosting pod "kube-scheduler-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:57.273392  446736 pod_ready.go:82] duration metric: took 399.61983ms for pod "kube-scheduler-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	E1030 19:46:57.273405  446736 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-960512" hosting pod "kube-scheduler-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:57.273415  446736 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:57.674201  446736 pod_ready.go:98] node "no-preload-960512" hosting pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:57.674236  446736 pod_ready.go:82] duration metric: took 400.809663ms for pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace to be "Ready" ...
	E1030 19:46:57.674251  446736 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-960512" hosting pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:57.674260  446736 pod_ready.go:39] duration metric: took 1.31649331s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:46:57.674285  446736 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1030 19:46:57.687464  446736 ops.go:34] apiserver oom_adj: -16
	I1030 19:46:57.687489  446736 kubeadm.go:597] duration metric: took 10.820761471s to restartPrimaryControlPlane
	I1030 19:46:57.687498  446736 kubeadm.go:394] duration metric: took 10.873934509s to StartCluster
	I1030 19:46:57.687514  446736 settings.go:142] acquiring lock: {Name:mk3778f95b8c1775512fd8244c173f81fde2f8a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:46:57.687586  446736 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 19:46:57.689255  446736 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/kubeconfig: {Name:mkad9ac9beb9742fc1a420e6d4412db8b65962e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:46:57.689496  446736 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.132 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 19:46:57.689574  446736 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1030 19:46:57.689683  446736 addons.go:69] Setting storage-provisioner=true in profile "no-preload-960512"
	I1030 19:46:57.689706  446736 addons.go:234] Setting addon storage-provisioner=true in "no-preload-960512"
	I1030 19:46:57.689708  446736 addons.go:69] Setting metrics-server=true in profile "no-preload-960512"
	W1030 19:46:57.689719  446736 addons.go:243] addon storage-provisioner should already be in state true
	I1030 19:46:57.689727  446736 addons.go:234] Setting addon metrics-server=true in "no-preload-960512"
	W1030 19:46:57.689737  446736 addons.go:243] addon metrics-server should already be in state true
	I1030 19:46:57.689755  446736 host.go:66] Checking if "no-preload-960512" exists ...
	I1030 19:46:57.689791  446736 config.go:182] Loaded profile config "no-preload-960512": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:46:57.689761  446736 host.go:66] Checking if "no-preload-960512" exists ...
	I1030 19:46:57.689707  446736 addons.go:69] Setting default-storageclass=true in profile "no-preload-960512"
	I1030 19:46:57.689912  446736 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-960512"
	I1030 19:46:57.690245  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:57.690258  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:57.690264  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:57.690297  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:57.690303  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:57.690322  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:57.691365  446736 out.go:177] * Verifying Kubernetes components...
	I1030 19:46:57.692941  446736 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:46:57.727794  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43995
	I1030 19:46:57.727877  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44207
	I1030 19:46:57.728127  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45127
	I1030 19:46:57.728276  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:57.728414  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:57.728517  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:57.728861  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:57.728879  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:57.729032  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:57.729053  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:57.729056  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:57.729064  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:57.729350  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:57.729429  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:57.729452  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:57.730008  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:57.730051  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:57.730124  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:57.730362  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetState
	I1030 19:46:57.731104  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:57.734295  446736 addons.go:234] Setting addon default-storageclass=true in "no-preload-960512"
	W1030 19:46:57.734316  446736 addons.go:243] addon default-storageclass should already be in state true
	I1030 19:46:57.734349  446736 host.go:66] Checking if "no-preload-960512" exists ...
	I1030 19:46:57.734742  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:57.734810  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:57.747185  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35811
	I1030 19:46:57.747680  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:57.748340  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:57.748360  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:57.748795  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:57.749029  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetState
	I1030 19:46:57.749722  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34849
	I1030 19:46:57.750318  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:57.754616  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45369
	I1030 19:46:57.754666  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:57.755024  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:57.755052  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:57.755555  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:57.755672  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:57.757159  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetState
	I1030 19:46:57.757166  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:57.757184  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:57.757504  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:57.757804  446736 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:57.758045  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:57.758089  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:57.759001  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:57.759300  446736 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 19:46:57.759313  446736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1030 19:46:57.759327  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:57.762134  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:57.762557  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:57.762582  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:57.762740  446736 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1030 19:46:54.485910  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:56.981415  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:54.939168  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:56.940263  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:57.762828  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:57.763037  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:57.763192  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:57.763344  446736 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa Username:docker}
	I1030 19:46:57.763936  446736 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1030 19:46:57.763953  446736 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1030 19:46:57.763970  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:57.766410  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:57.766771  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:57.766795  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:57.767034  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:57.767212  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:57.767385  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:57.767522  446736 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa Username:docker}
	I1030 19:46:57.776037  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45193
	I1030 19:46:57.776386  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:57.776846  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:57.776864  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:57.777184  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:57.777339  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetState
	I1030 19:46:57.778829  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:57.779118  446736 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1030 19:46:57.779138  446736 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1030 19:46:57.779156  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:57.781325  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:57.781590  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:57.781615  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:57.781755  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:57.781895  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:57.781995  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:57.782088  446736 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa Username:docker}
	I1030 19:46:57.895549  446736 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:46:57.913030  446736 node_ready.go:35] waiting up to 6m0s for node "no-preload-960512" to be "Ready" ...
	I1030 19:46:58.008228  446736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 19:46:58.009206  446736 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1030 19:46:58.009222  446736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1030 19:46:58.034347  446736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1030 19:46:58.036620  446736 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1030 19:46:58.036646  446736 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1030 19:46:58.140489  446736 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1030 19:46:58.140522  446736 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1030 19:46:58.181145  446736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1030 19:46:59.403246  446736 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.368855241s)
	I1030 19:46:59.403317  446736 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.395049308s)
	I1030 19:46:59.403331  446736 main.go:141] libmachine: Making call to close driver server
	I1030 19:46:59.403340  446736 main.go:141] libmachine: (no-preload-960512) Calling .Close
	I1030 19:46:59.403356  446736 main.go:141] libmachine: Making call to close driver server
	I1030 19:46:59.403369  446736 main.go:141] libmachine: (no-preload-960512) Calling .Close
	I1030 19:46:59.403657  446736 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:46:59.403673  446736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:46:59.403681  446736 main.go:141] libmachine: Making call to close driver server
	I1030 19:46:59.403688  446736 main.go:141] libmachine: (no-preload-960512) Calling .Close
	I1030 19:46:59.403766  446736 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:46:59.403770  446736 main.go:141] libmachine: (no-preload-960512) DBG | Closing plugin on server side
	I1030 19:46:59.403778  446736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:46:59.403790  446736 main.go:141] libmachine: Making call to close driver server
	I1030 19:46:59.403796  446736 main.go:141] libmachine: (no-preload-960512) Calling .Close
	I1030 19:46:59.403939  446736 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:46:59.403954  446736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:46:59.404023  446736 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:46:59.404059  446736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:46:59.404071  446736 main.go:141] libmachine: (no-preload-960512) DBG | Closing plugin on server side
	I1030 19:46:59.411114  446736 main.go:141] libmachine: Making call to close driver server
	I1030 19:46:59.411136  446736 main.go:141] libmachine: (no-preload-960512) Calling .Close
	I1030 19:46:59.411365  446736 main.go:141] libmachine: (no-preload-960512) DBG | Closing plugin on server side
	I1030 19:46:59.411421  446736 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:46:59.411437  446736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:46:59.513065  446736 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.33186887s)
	I1030 19:46:59.513150  446736 main.go:141] libmachine: Making call to close driver server
	I1030 19:46:59.513168  446736 main.go:141] libmachine: (no-preload-960512) Calling .Close
	I1030 19:46:59.513455  446736 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:46:59.513481  446736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:46:59.513486  446736 main.go:141] libmachine: (no-preload-960512) DBG | Closing plugin on server side
	I1030 19:46:59.513491  446736 main.go:141] libmachine: Making call to close driver server
	I1030 19:46:59.513537  446736 main.go:141] libmachine: (no-preload-960512) Calling .Close
	I1030 19:46:59.513769  446736 main.go:141] libmachine: (no-preload-960512) DBG | Closing plugin on server side
	I1030 19:46:59.513797  446736 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:46:59.513809  446736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:46:59.513826  446736 addons.go:475] Verifying addon metrics-server=true in "no-preload-960512"
	I1030 19:46:59.516354  446736 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1030 19:46:59.517886  446736 addons.go:510] duration metric: took 1.828322965s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1030 19:46:59.916839  446736 node_ready.go:53] node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:57.143829  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:57.644245  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:58.144327  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:58.644684  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:59.144712  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:59.644799  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:00.144222  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:00.644111  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:01.144268  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:01.644631  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:58.982694  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:00.984014  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:59.439638  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:01.939460  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:02.416750  446736 node_ready.go:53] node "no-preload-960512" has status "Ready":"False"
	I1030 19:47:03.416443  446736 node_ready.go:49] node "no-preload-960512" has status "Ready":"True"
	I1030 19:47:03.416469  446736 node_ready.go:38] duration metric: took 5.503404181s for node "no-preload-960512" to be "Ready" ...
	I1030 19:47:03.416479  446736 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:47:03.422219  446736 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:02.143881  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:02.644208  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:03.144411  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:03.643948  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:04.144028  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:04.644179  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:05.144791  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:05.643983  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:06.143859  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:06.644436  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:03.481239  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:05.481271  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:07.482108  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:04.439288  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:06.439454  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:05.428589  446736 pod_ready.go:103] pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:07.430975  446736 pod_ready.go:103] pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:09.928214  446736 pod_ready.go:103] pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:07.144765  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:07.644280  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:08.144381  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:08.644099  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:09.144129  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:09.643864  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:10.144105  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:10.643752  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:11.144135  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:11.644172  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:09.982150  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:12.481265  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:08.939357  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:10.940087  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:10.430572  446736 pod_ready.go:93] pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace has status "Ready":"True"
	I1030 19:47:10.430598  446736 pod_ready.go:82] duration metric: took 7.008352985s for pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.430610  446736 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.436673  446736 pod_ready.go:93] pod "etcd-no-preload-960512" in "kube-system" namespace has status "Ready":"True"
	I1030 19:47:10.436699  446736 pod_ready.go:82] duration metric: took 6.082545ms for pod "etcd-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.436711  446736 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.442262  446736 pod_ready.go:93] pod "kube-apiserver-no-preload-960512" in "kube-system" namespace has status "Ready":"True"
	I1030 19:47:10.442282  446736 pod_ready.go:82] duration metric: took 5.563816ms for pod "kube-apiserver-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.442292  446736 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.446170  446736 pod_ready.go:93] pod "kube-controller-manager-no-preload-960512" in "kube-system" namespace has status "Ready":"True"
	I1030 19:47:10.446189  446736 pod_ready.go:82] duration metric: took 3.890123ms for pod "kube-controller-manager-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.446198  446736 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fxqqc" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.450190  446736 pod_ready.go:93] pod "kube-proxy-fxqqc" in "kube-system" namespace has status "Ready":"True"
	I1030 19:47:10.450216  446736 pod_ready.go:82] duration metric: took 4.011125ms for pod "kube-proxy-fxqqc" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.450226  446736 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.826537  446736 pod_ready.go:93] pod "kube-scheduler-no-preload-960512" in "kube-system" namespace has status "Ready":"True"
	I1030 19:47:10.826572  446736 pod_ready.go:82] duration metric: took 376.338504ms for pod "kube-scheduler-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.826587  446736 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:12.834756  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:12.144391  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:12.644441  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:13.143916  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:13.644779  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:14.144680  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:14.644634  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:15.144050  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:15.644738  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:16.143957  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:16.144037  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:16.184282  447486 cri.go:89] found id: ""
	I1030 19:47:16.184310  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.184320  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:16.184327  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:16.184403  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:16.225359  447486 cri.go:89] found id: ""
	I1030 19:47:16.225388  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.225397  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:16.225403  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:16.225471  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:16.260591  447486 cri.go:89] found id: ""
	I1030 19:47:16.260625  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.260635  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:16.260641  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:16.260695  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:16.299562  447486 cri.go:89] found id: ""
	I1030 19:47:16.299591  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.299602  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:16.299609  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:16.299682  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:16.334753  447486 cri.go:89] found id: ""
	I1030 19:47:16.334781  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.334789  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:16.334795  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:16.334877  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:16.371588  447486 cri.go:89] found id: ""
	I1030 19:47:16.371619  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.371628  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:16.371634  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:16.371689  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:16.406668  447486 cri.go:89] found id: ""
	I1030 19:47:16.406699  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.406710  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:16.406718  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:16.406786  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:16.443050  447486 cri.go:89] found id: ""
	I1030 19:47:16.443081  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.443096  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:16.443109  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:16.443125  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:16.492898  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:16.492936  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:16.506310  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:16.506343  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:16.637629  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:16.637660  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:16.637677  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:16.709581  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:16.709621  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:14.481660  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:16.981807  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:13.438777  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:15.439457  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:17.939606  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:15.335280  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:17.833216  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:19.833320  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:19.253501  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:19.267200  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:19.267276  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:19.303608  447486 cri.go:89] found id: ""
	I1030 19:47:19.303641  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.303651  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:19.303658  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:19.303711  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:19.341311  447486 cri.go:89] found id: ""
	I1030 19:47:19.341343  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.341354  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:19.341363  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:19.341427  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:19.376949  447486 cri.go:89] found id: ""
	I1030 19:47:19.376977  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.376987  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:19.376996  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:19.377075  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:19.414164  447486 cri.go:89] found id: ""
	I1030 19:47:19.414197  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.414209  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:19.414218  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:19.414308  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:19.450637  447486 cri.go:89] found id: ""
	I1030 19:47:19.450671  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.450683  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:19.450692  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:19.450761  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:19.485315  447486 cri.go:89] found id: ""
	I1030 19:47:19.485345  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.485355  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:19.485364  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:19.485427  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:19.519873  447486 cri.go:89] found id: ""
	I1030 19:47:19.519901  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.519911  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:19.519919  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:19.519982  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:19.555168  447486 cri.go:89] found id: ""
	I1030 19:47:19.555198  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.555211  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:19.555223  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:19.555239  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:19.607227  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:19.607265  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:19.621465  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:19.621498  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:19.700837  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:19.700869  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:19.700882  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:19.774428  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:19.774468  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:18.982345  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:21.482165  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:19.940122  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:22.439405  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:22.333449  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:24.833942  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:22.314410  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:22.327998  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:22.328083  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:22.365583  447486 cri.go:89] found id: ""
	I1030 19:47:22.365611  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.365622  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:22.365631  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:22.365694  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:22.398964  447486 cri.go:89] found id: ""
	I1030 19:47:22.398996  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.399008  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:22.399016  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:22.399092  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:22.435132  447486 cri.go:89] found id: ""
	I1030 19:47:22.435169  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.435181  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:22.435188  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:22.435252  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:22.471510  447486 cri.go:89] found id: ""
	I1030 19:47:22.471544  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.471557  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:22.471574  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:22.471630  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:22.509611  447486 cri.go:89] found id: ""
	I1030 19:47:22.509639  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.509647  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:22.509653  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:22.509707  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:22.546502  447486 cri.go:89] found id: ""
	I1030 19:47:22.546539  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.546552  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:22.546560  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:22.546630  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:22.584560  447486 cri.go:89] found id: ""
	I1030 19:47:22.584593  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.584605  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:22.584613  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:22.584676  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:22.621421  447486 cri.go:89] found id: ""
	I1030 19:47:22.621461  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.621474  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:22.621486  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:22.621505  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:22.634998  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:22.635038  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:22.711002  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:22.711028  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:22.711047  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:22.790673  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:22.790712  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:22.831804  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:22.831851  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:25.386915  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:25.399854  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:25.399954  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:25.438346  447486 cri.go:89] found id: ""
	I1030 19:47:25.438381  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.438406  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:25.438416  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:25.438500  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:25.474888  447486 cri.go:89] found id: ""
	I1030 19:47:25.474915  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.474924  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:25.474931  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:25.474994  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:25.511925  447486 cri.go:89] found id: ""
	I1030 19:47:25.511955  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.511966  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:25.511973  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:25.512038  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:25.551027  447486 cri.go:89] found id: ""
	I1030 19:47:25.551058  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.551067  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:25.551073  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:25.551144  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:25.584736  447486 cri.go:89] found id: ""
	I1030 19:47:25.584764  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.584773  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:25.584779  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:25.584847  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:25.632765  447486 cri.go:89] found id: ""
	I1030 19:47:25.632798  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.632810  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:25.632818  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:25.632893  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:25.682501  447486 cri.go:89] found id: ""
	I1030 19:47:25.682528  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.682536  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:25.682543  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:25.682591  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:25.728306  447486 cri.go:89] found id: ""
	I1030 19:47:25.728340  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.728352  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:25.728365  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:25.728397  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:25.781908  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:25.781944  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:25.795864  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:25.795899  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:25.868350  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:25.868378  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:25.868392  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:25.944244  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:25.944277  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
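	The cycle above (and every cycle that follows in this run) repeats the same probe: one `sudo crictl ps -a --quiet --name=<component>` per control-plane component, with an empty result recorded as "No container was found matching". As a reading aid, here is a minimal Go sketch of that probe loop; the component names and the crictl invocation are copied from the log, while the wrapper program itself is hypothetical and not minikube source.

	// probe_containers.go - illustrative only; mirrors the per-component
	// crictl probes recorded above. Not minikube code.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			// Same invocation the log shows ssh_runner issuing on the node.
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("%-24s probe failed: %v\n", name, err)
				continue
			}
			if strings.TrimSpace(string(out)) == "" {
				fmt.Printf("%-24s no container found\n", name)
			} else {
				fmt.Printf("%-24s container id(s): %v\n", name, strings.Fields(string(out)))
			}
		}
	}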
	I1030 19:47:23.981016  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:25.982186  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:24.942113  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:27.438568  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:27.333623  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:29.334460  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:28.488216  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:28.501481  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:28.501558  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:28.536808  447486 cri.go:89] found id: ""
	I1030 19:47:28.536838  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.536849  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:28.536857  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:28.536923  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:28.571819  447486 cri.go:89] found id: ""
	I1030 19:47:28.571855  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.571867  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:28.571885  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:28.571966  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:28.605532  447486 cri.go:89] found id: ""
	I1030 19:47:28.605571  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.605582  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:28.605610  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:28.605682  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:28.642108  447486 cri.go:89] found id: ""
	I1030 19:47:28.642140  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.642152  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:28.642159  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:28.642234  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:28.680036  447486 cri.go:89] found id: ""
	I1030 19:47:28.680065  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.680078  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:28.680086  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:28.680151  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:28.716135  447486 cri.go:89] found id: ""
	I1030 19:47:28.716162  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.716171  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:28.716177  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:28.716238  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:28.752364  447486 cri.go:89] found id: ""
	I1030 19:47:28.752398  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.752406  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:28.752413  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:28.752478  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:28.788396  447486 cri.go:89] found id: ""
	I1030 19:47:28.788434  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.788447  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:28.788461  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:28.788476  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:28.841560  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:28.841595  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:28.856134  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:28.856178  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:28.930463  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:28.930507  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:28.930527  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:29.013743  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:29.013795  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:31.557942  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:31.573562  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:31.573654  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:31.625349  447486 cri.go:89] found id: ""
	I1030 19:47:31.625378  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.625386  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:31.625392  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:31.625442  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:31.689536  447486 cri.go:89] found id: ""
	I1030 19:47:31.689566  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.689574  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:31.689581  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:31.689632  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:31.723758  447486 cri.go:89] found id: ""
	I1030 19:47:31.723794  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.723806  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:31.723814  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:31.723890  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:31.762671  447486 cri.go:89] found id: ""
	I1030 19:47:31.762698  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.762707  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:31.762713  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:31.762761  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:31.797658  447486 cri.go:89] found id: ""
	I1030 19:47:31.797686  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.797694  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:31.797702  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:31.797792  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:28.481158  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:30.981477  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:32.981593  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:29.439072  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:31.940019  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:31.833540  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:34.334678  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:31.832186  447486 cri.go:89] found id: ""
	I1030 19:47:31.832217  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.832228  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:31.832236  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:31.832298  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:31.866820  447486 cri.go:89] found id: ""
	I1030 19:47:31.866853  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.866866  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:31.866875  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:31.866937  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:31.901888  447486 cri.go:89] found id: ""
	I1030 19:47:31.901913  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.901922  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:31.901932  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:31.901944  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:31.992343  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:31.992380  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:32.030519  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:32.030559  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:32.084442  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:32.084478  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:32.098919  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:32.098954  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:32.171034  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
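	Every "describe nodes" attempt in this run fails the same way: kubectl cannot reach the apiserver that the kubeconfig points at (localhost:8443), so the command exits with "connection refused". Below is a hedged sketch of how one could poll that endpoint until something answers; the host and port come from the log, the /healthz path and everything else are assumptions made only for illustration.

	// wait_apiserver.go - hypothetical helper, not minikube code: polls the
	// endpoint that "kubectl describe nodes" needs, to show why the command
	// above keeps failing with "connection refused" on :8443.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Skip TLS verification: we only care whether anything is listening.
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://localhost:8443/healthz")
			if err == nil {
				resp.Body.Close()
				fmt.Println("apiserver answered:", resp.Status)
				return
			}
			fmt.Println("apiserver not reachable yet:", err)
			time.Sleep(5 * time.Second)
		}
		fmt.Println("gave up: nothing listening on localhost:8443")
	}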
	I1030 19:47:34.671243  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:34.685879  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:34.685972  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:34.720657  447486 cri.go:89] found id: ""
	I1030 19:47:34.720686  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.720694  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:34.720700  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:34.720757  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:34.759571  447486 cri.go:89] found id: ""
	I1030 19:47:34.759602  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.759615  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:34.759624  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:34.759685  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:34.795273  447486 cri.go:89] found id: ""
	I1030 19:47:34.795313  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.795322  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:34.795329  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:34.795450  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:34.828999  447486 cri.go:89] found id: ""
	I1030 19:47:34.829035  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.829047  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:34.829054  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:34.829158  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:34.865620  447486 cri.go:89] found id: ""
	I1030 19:47:34.865661  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.865674  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:34.865682  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:34.865753  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:34.900768  447486 cri.go:89] found id: ""
	I1030 19:47:34.900801  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.900812  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:34.900820  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:34.900889  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:34.945023  447486 cri.go:89] found id: ""
	I1030 19:47:34.945048  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.945057  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:34.945063  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:34.945118  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:34.980458  447486 cri.go:89] found id: ""
	I1030 19:47:34.980483  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.980492  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:34.980501  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:34.980514  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:35.052570  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:35.052597  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:35.052613  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:35.133825  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:35.133869  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:35.176016  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:35.176063  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:35.228866  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:35.228903  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:34.982702  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:37.481103  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:34.438712  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:36.938856  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:36.837275  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:39.332612  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:37.743408  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:37.757472  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:37.757547  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:37.794818  447486 cri.go:89] found id: ""
	I1030 19:47:37.794847  447486 logs.go:282] 0 containers: []
	W1030 19:47:37.794856  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:37.794862  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:37.794928  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:37.830025  447486 cri.go:89] found id: ""
	I1030 19:47:37.830064  447486 logs.go:282] 0 containers: []
	W1030 19:47:37.830077  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:37.830086  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:37.830150  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:37.864862  447486 cri.go:89] found id: ""
	I1030 19:47:37.864893  447486 logs.go:282] 0 containers: []
	W1030 19:47:37.864902  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:37.864908  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:37.864958  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:37.901650  447486 cri.go:89] found id: ""
	I1030 19:47:37.901699  447486 logs.go:282] 0 containers: []
	W1030 19:47:37.901713  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:37.901722  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:37.901780  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:37.935824  447486 cri.go:89] found id: ""
	I1030 19:47:37.935854  447486 logs.go:282] 0 containers: []
	W1030 19:47:37.935862  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:37.935868  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:37.935930  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:37.972774  447486 cri.go:89] found id: ""
	I1030 19:47:37.972805  447486 logs.go:282] 0 containers: []
	W1030 19:47:37.972813  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:37.972819  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:37.972868  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:38.007815  447486 cri.go:89] found id: ""
	I1030 19:47:38.007845  447486 logs.go:282] 0 containers: []
	W1030 19:47:38.007856  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:38.007864  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:38.007931  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:38.042525  447486 cri.go:89] found id: ""
	I1030 19:47:38.042559  447486 logs.go:282] 0 containers: []
	W1030 19:47:38.042571  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:38.042584  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:38.042600  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:38.122022  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:38.122048  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:38.122065  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:38.200534  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:38.200575  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:38.240118  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:38.240154  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:38.291936  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:38.291976  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
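	Between container probes, each cycle gathers kubelet, dmesg, and CRI-O logs with the fixed journalctl/dmesg invocations visible verbatim above. A minimal sketch that runs the same three commands locally and prints their output follows; the shell commands are quoted from the log, the Go wrapper is an assumption and not how minikube executes them over SSH.

	// gather_logs.go - hypothetical local equivalent of the log-gathering step.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmds := map[string]string{
			"kubelet": "sudo journalctl -u kubelet -n 400",
			"dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			"CRI-O":   "sudo journalctl -u crio -n 400",
		}
		for name, cmd := range cmds {
			out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
			fmt.Printf("==> %s (%s)\n", name, cmd)
			if err != nil {
				fmt.Println("command failed:", err)
			}
			fmt.Println(string(out))
		}
	}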
	I1030 19:47:40.806105  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:40.821268  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:40.821343  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:40.857151  447486 cri.go:89] found id: ""
	I1030 19:47:40.857186  447486 logs.go:282] 0 containers: []
	W1030 19:47:40.857198  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:40.857207  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:40.857266  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:40.893603  447486 cri.go:89] found id: ""
	I1030 19:47:40.893639  447486 logs.go:282] 0 containers: []
	W1030 19:47:40.893648  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:40.893654  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:40.893720  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:40.935294  447486 cri.go:89] found id: ""
	I1030 19:47:40.935330  447486 logs.go:282] 0 containers: []
	W1030 19:47:40.935342  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:40.935349  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:40.935418  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:40.971509  447486 cri.go:89] found id: ""
	I1030 19:47:40.971536  447486 logs.go:282] 0 containers: []
	W1030 19:47:40.971544  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:40.971550  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:40.971610  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:41.009895  447486 cri.go:89] found id: ""
	I1030 19:47:41.009932  447486 logs.go:282] 0 containers: []
	W1030 19:47:41.009941  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:41.009948  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:41.010008  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:41.045170  447486 cri.go:89] found id: ""
	I1030 19:47:41.045208  447486 logs.go:282] 0 containers: []
	W1030 19:47:41.045221  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:41.045229  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:41.045288  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:41.077654  447486 cri.go:89] found id: ""
	I1030 19:47:41.077684  447486 logs.go:282] 0 containers: []
	W1030 19:47:41.077695  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:41.077704  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:41.077771  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:41.111509  447486 cri.go:89] found id: ""
	I1030 19:47:41.111543  447486 logs.go:282] 0 containers: []
	W1030 19:47:41.111552  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:41.111562  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:41.111574  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:41.164939  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:41.164976  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:41.178512  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:41.178589  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:41.258783  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:41.258813  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:41.258832  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:41.338192  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:41.338230  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:39.481210  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:41.481439  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:38.938987  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:40.941386  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:41.333705  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:43.833502  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:43.878155  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:43.892376  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:43.892452  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:43.930556  447486 cri.go:89] found id: ""
	I1030 19:47:43.930594  447486 logs.go:282] 0 containers: []
	W1030 19:47:43.930606  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:43.930614  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:43.930679  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:43.970588  447486 cri.go:89] found id: ""
	I1030 19:47:43.970619  447486 logs.go:282] 0 containers: []
	W1030 19:47:43.970630  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:43.970638  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:43.970706  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:44.005467  447486 cri.go:89] found id: ""
	I1030 19:47:44.005497  447486 logs.go:282] 0 containers: []
	W1030 19:47:44.005506  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:44.005512  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:44.005573  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:44.039126  447486 cri.go:89] found id: ""
	I1030 19:47:44.039164  447486 logs.go:282] 0 containers: []
	W1030 19:47:44.039173  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:44.039179  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:44.039239  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:44.072961  447486 cri.go:89] found id: ""
	I1030 19:47:44.072994  447486 logs.go:282] 0 containers: []
	W1030 19:47:44.073006  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:44.073014  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:44.073109  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:44.105864  447486 cri.go:89] found id: ""
	I1030 19:47:44.105891  447486 logs.go:282] 0 containers: []
	W1030 19:47:44.105900  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:44.105907  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:44.105956  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:44.138198  447486 cri.go:89] found id: ""
	I1030 19:47:44.138240  447486 logs.go:282] 0 containers: []
	W1030 19:47:44.138250  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:44.138264  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:44.138331  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:44.172529  447486 cri.go:89] found id: ""
	I1030 19:47:44.172558  447486 logs.go:282] 0 containers: []
	W1030 19:47:44.172567  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:44.172577  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:44.172594  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:44.248215  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:44.248254  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:44.286169  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:44.286202  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:44.341129  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:44.341167  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:44.354570  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:44.354597  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:44.427790  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:43.481483  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:45.482271  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:47.981312  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:43.440759  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:45.938783  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:47.940512  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:46.332448  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:48.333216  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:46.928728  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:46.943068  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:46.943154  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:46.978385  447486 cri.go:89] found id: ""
	I1030 19:47:46.978416  447486 logs.go:282] 0 containers: []
	W1030 19:47:46.978428  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:46.978436  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:46.978522  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:47.020413  447486 cri.go:89] found id: ""
	I1030 19:47:47.020457  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.020469  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:47.020476  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:47.020547  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:47.061492  447486 cri.go:89] found id: ""
	I1030 19:47:47.061526  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.061538  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:47.061547  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:47.061611  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:47.097621  447486 cri.go:89] found id: ""
	I1030 19:47:47.097659  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.097670  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:47.097679  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:47.097739  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:47.131740  447486 cri.go:89] found id: ""
	I1030 19:47:47.131769  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.131779  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:47.131785  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:47.131856  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:47.167623  447486 cri.go:89] found id: ""
	I1030 19:47:47.167661  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.167674  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:47.167682  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:47.167746  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:47.202299  447486 cri.go:89] found id: ""
	I1030 19:47:47.202328  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.202337  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:47.202344  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:47.202401  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:47.236652  447486 cri.go:89] found id: ""
	I1030 19:47:47.236686  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.236695  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:47.236704  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:47.236716  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:47.289700  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:47.289740  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:47.304929  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:47.304964  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:47.374811  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:47.374842  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:47.374858  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:47.449161  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:47.449196  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:49.989730  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:50.002741  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:50.002821  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:50.037602  447486 cri.go:89] found id: ""
	I1030 19:47:50.037636  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.037647  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:50.037655  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:50.037724  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:50.071346  447486 cri.go:89] found id: ""
	I1030 19:47:50.071383  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.071395  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:50.071405  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:50.071473  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:50.106657  447486 cri.go:89] found id: ""
	I1030 19:47:50.106698  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.106711  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:50.106719  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:50.106783  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:50.140974  447486 cri.go:89] found id: ""
	I1030 19:47:50.141012  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.141025  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:50.141032  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:50.141105  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:50.177715  447486 cri.go:89] found id: ""
	I1030 19:47:50.177748  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.177756  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:50.177763  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:50.177824  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:50.212234  447486 cri.go:89] found id: ""
	I1030 19:47:50.212263  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.212272  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:50.212278  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:50.212337  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:50.250791  447486 cri.go:89] found id: ""
	I1030 19:47:50.250826  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.250835  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:50.250842  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:50.250908  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:50.288575  447486 cri.go:89] found id: ""
	I1030 19:47:50.288607  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.288615  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:50.288628  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:50.288643  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:50.358015  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:50.358039  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:50.358054  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:50.433194  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:50.433235  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:50.473485  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:50.473519  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:50.523581  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:50.523618  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:49.981614  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:51.982079  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:50.439717  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:52.940170  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:50.333498  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:52.832848  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:54.833689  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
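	The pod_ready.go lines interleaved through this log come from separate test processes, each waiting for its metrics-server pod to report the Ready condition as True. A minimal sketch of an equivalent check is shown below, expressed here as a kubectl JSONPath query rather than minikube's own client code; the pod name and namespace are copied from the log, the query mechanism is an assumption for illustration.

	// pod_ready_check.go - illustrative sketch of the readiness check the
	// pod_ready.go lines record. Not minikube code.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		pod := "metrics-server-6867b74b74-72bb5" // one of the pods polled above
		out, err := exec.Command(
			"kubectl", "get", "pod", pod, "-n", "kube-system",
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`,
		).Output()
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		ready := strings.TrimSpace(string(out)) == "True"
		fmt.Printf("pod %s Ready=%v\n", pod, ready)
	}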
	I1030 19:47:53.038393  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:53.052835  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:53.052910  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:53.088797  447486 cri.go:89] found id: ""
	I1030 19:47:53.088828  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.088837  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:53.088843  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:53.088897  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:53.124627  447486 cri.go:89] found id: ""
	I1030 19:47:53.124659  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.124668  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:53.124674  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:53.124724  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:53.159127  447486 cri.go:89] found id: ""
	I1030 19:47:53.159163  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.159175  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:53.159183  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:53.159244  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:53.191770  447486 cri.go:89] found id: ""
	I1030 19:47:53.191801  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.191810  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:53.191817  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:53.191885  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:53.227727  447486 cri.go:89] found id: ""
	I1030 19:47:53.227761  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.227774  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:53.227781  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:53.227842  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:53.262937  447486 cri.go:89] found id: ""
	I1030 19:47:53.262969  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.262981  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:53.262989  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:53.263060  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:53.296070  447486 cri.go:89] found id: ""
	I1030 19:47:53.296113  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.296124  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:53.296133  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:53.296197  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:53.332628  447486 cri.go:89] found id: ""
	I1030 19:47:53.332663  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.332674  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:53.332687  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:53.332702  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:53.385004  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:53.385046  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:53.400139  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:53.400185  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:53.477792  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:53.477826  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:53.477858  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:53.553145  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:53.553186  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:56.094454  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:56.107827  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:56.107900  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:56.141701  447486 cri.go:89] found id: ""
	I1030 19:47:56.141739  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.141751  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:56.141763  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:56.141831  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:56.179973  447486 cri.go:89] found id: ""
	I1030 19:47:56.180003  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.180016  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:56.180023  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:56.180099  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:56.220456  447486 cri.go:89] found id: ""
	I1030 19:47:56.220486  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.220496  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:56.220503  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:56.220578  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:56.259699  447486 cri.go:89] found id: ""
	I1030 19:47:56.259727  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.259736  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:56.259741  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:56.259791  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:56.302726  447486 cri.go:89] found id: ""
	I1030 19:47:56.302762  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.302775  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:56.302783  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:56.302850  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:56.339791  447486 cri.go:89] found id: ""
	I1030 19:47:56.339819  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.339828  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:56.339834  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:56.339889  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:56.381291  447486 cri.go:89] found id: ""
	I1030 19:47:56.381325  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.381337  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:56.381345  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:56.381401  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:56.417150  447486 cri.go:89] found id: ""
	I1030 19:47:56.417182  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.417194  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:56.417207  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:56.417227  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:56.466963  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:56.467005  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:56.481528  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:56.481557  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:56.554843  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:56.554872  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:56.554887  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:56.635798  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:56.635846  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
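	The cycle above shows the probe minikube repeats while waiting for the control plane: for each expected component it runs `sudo crictl ps -a --quiet --name=<component>` and, finding no container IDs, logs `No container was found matching "<component>"`. The following is a minimal, illustrative Go sketch of that same probe, not minikube's actual cri/ssh_runner code; the component names and the crictl invocation are taken from the log, everything else (helper name, local execution instead of SSH) is an assumption.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs mirrors the probe in the log: `sudo crictl ps -a --quiet --name=<name>`.
	// It returns the container IDs printed by crictl (one per line), if any.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		// Component list copied from the diagnostic cycle in the log above.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, c := range components {
			ids, err := listContainerIDs(c)
			if err != nil || len(ids) == 0 {
				// This is the state the failing test is stuck in: no containers exist yet.
				fmt.Printf("no container was found matching %q\n", c)
				continue
			}
			fmt.Printf("%s: %v\n", c, ids)
		}
	}

	On a healthy node the same probe prints one or more container IDs per component; in the log above every probe returns an empty list, which is why the gathering loop keeps repeating.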
	I1030 19:47:54.480601  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:56.481475  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:55.439618  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:57.940438  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:57.333560  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:59.337314  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:59.179829  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:59.193083  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:59.193160  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:59.231253  447486 cri.go:89] found id: ""
	I1030 19:47:59.231288  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.231302  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:59.231311  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:59.231382  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:59.265982  447486 cri.go:89] found id: ""
	I1030 19:47:59.266013  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.266022  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:59.266028  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:59.266090  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:59.303724  447486 cri.go:89] found id: ""
	I1030 19:47:59.303761  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.303773  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:59.303781  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:59.303848  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:59.342137  447486 cri.go:89] found id: ""
	I1030 19:47:59.342163  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.342172  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:59.342180  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:59.342246  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:59.382652  447486 cri.go:89] found id: ""
	I1030 19:47:59.382684  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.382693  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:59.382700  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:59.382761  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:59.422428  447486 cri.go:89] found id: ""
	I1030 19:47:59.422454  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.422463  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:59.422469  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:59.422539  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:59.464047  447486 cri.go:89] found id: ""
	I1030 19:47:59.464079  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.464089  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:59.464095  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:59.464146  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:59.500658  447486 cri.go:89] found id: ""
	I1030 19:47:59.500693  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.500705  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:59.500716  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:59.500732  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:59.554634  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:59.554679  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:59.567956  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:59.567986  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:59.646305  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:59.646332  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:59.646349  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:59.730008  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:59.730052  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:58.486516  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:00.982184  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:00.439220  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:02.439945  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:01.832883  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:03.834027  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:02.274141  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:02.287246  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:02.287320  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:02.322166  447486 cri.go:89] found id: ""
	I1030 19:48:02.322320  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.322336  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:02.322346  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:02.322421  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:02.358101  447486 cri.go:89] found id: ""
	I1030 19:48:02.358131  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.358140  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:02.358146  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:02.358209  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:02.394812  447486 cri.go:89] found id: ""
	I1030 19:48:02.394898  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.394915  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:02.394924  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:02.394990  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:02.429128  447486 cri.go:89] found id: ""
	I1030 19:48:02.429165  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.429177  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:02.429186  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:02.429358  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:02.465878  447486 cri.go:89] found id: ""
	I1030 19:48:02.465907  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.465915  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:02.465921  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:02.465973  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:02.502758  447486 cri.go:89] found id: ""
	I1030 19:48:02.502794  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.502805  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:02.502813  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:02.502879  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:02.540111  447486 cri.go:89] found id: ""
	I1030 19:48:02.540142  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.540152  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:02.540158  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:02.540222  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:02.574728  447486 cri.go:89] found id: ""
	I1030 19:48:02.574762  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.574774  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:02.574787  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:02.574804  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:02.613333  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:02.613374  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:02.664970  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:02.665013  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:02.679594  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:02.679626  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:02.744184  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:02.744208  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:02.744222  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:05.326826  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:05.340166  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:05.340232  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:05.376742  447486 cri.go:89] found id: ""
	I1030 19:48:05.376774  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.376789  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:05.376795  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:05.376865  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:05.413981  447486 cri.go:89] found id: ""
	I1030 19:48:05.414026  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.414039  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:05.414047  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:05.414121  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:05.449811  447486 cri.go:89] found id: ""
	I1030 19:48:05.449842  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.449854  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:05.449862  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:05.449925  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:05.502576  447486 cri.go:89] found id: ""
	I1030 19:48:05.502610  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.502622  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:05.502630  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:05.502721  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:05.536747  447486 cri.go:89] found id: ""
	I1030 19:48:05.536778  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.536787  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:05.536793  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:05.536857  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:05.570308  447486 cri.go:89] found id: ""
	I1030 19:48:05.570335  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.570344  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:05.570353  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:05.570420  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:05.605006  447486 cri.go:89] found id: ""
	I1030 19:48:05.605037  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.605048  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:05.605054  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:05.605109  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:05.638651  447486 cri.go:89] found id: ""
	I1030 19:48:05.638681  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.638693  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:05.638705  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:05.638720  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:05.690734  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:05.690769  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:05.704561  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:05.704588  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:05.779426  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:05.779448  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:05.779471  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:05.866320  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:05.866355  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:03.481614  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:05.482428  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:07.981875  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:04.939485  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:07.438925  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:06.334094  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:08.834525  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:08.409454  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:08.423687  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:08.423767  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:08.463554  447486 cri.go:89] found id: ""
	I1030 19:48:08.463581  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.463591  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:08.463597  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:08.463654  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:08.500159  447486 cri.go:89] found id: ""
	I1030 19:48:08.500186  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.500195  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:08.500200  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:08.500253  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:08.535670  447486 cri.go:89] found id: ""
	I1030 19:48:08.535701  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.535710  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:08.535717  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:08.535785  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:08.572921  447486 cri.go:89] found id: ""
	I1030 19:48:08.572958  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.572968  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:08.572975  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:08.573052  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:08.610873  447486 cri.go:89] found id: ""
	I1030 19:48:08.610908  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.610918  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:08.610924  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:08.610978  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:08.645430  447486 cri.go:89] found id: ""
	I1030 19:48:08.645458  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.645466  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:08.645475  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:08.645528  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:08.681212  447486 cri.go:89] found id: ""
	I1030 19:48:08.681246  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.681258  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:08.681266  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:08.681332  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:08.716619  447486 cri.go:89] found id: ""
	I1030 19:48:08.716651  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.716661  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:08.716671  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:08.716682  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:08.794090  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:08.794134  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:08.833209  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:08.833251  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:08.884781  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:08.884817  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:08.898556  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:08.898586  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:08.967713  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
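	Each "Gathering logs for describe nodes" step fails the same way: the kubectl binary can run, but nothing is listening on localhost:8443, so it exits with status 1 and "The connection to the server localhost:8443 was refused". A minimal sketch of that step, assuming the same binary and kubeconfig paths as in the log (error handling is simplified relative to minikube's logs.go):

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	func main() {
		// Same invocation as in the log; paths copied verbatim from the output above.
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.20.0/kubectl",
			"describe", "nodes", "--kubeconfig=/var/lib/minikube/kubeconfig")
		var stdout, stderr bytes.Buffer
		cmd.Stdout = &stdout
		cmd.Stderr = &stderr
		if err := cmd.Run(); err != nil {
			// With no apiserver on localhost:8443 this reproduces the
			// "connection ... was refused" failure recorded in the log.
			fmt.Printf("failed describe nodes: %v\nstderr: %s", err, stderr.String())
			return
		}
		fmt.Print(stdout.String())
	}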
	I1030 19:48:11.468230  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:11.482593  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:11.482660  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:11.518191  447486 cri.go:89] found id: ""
	I1030 19:48:11.518225  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.518235  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:11.518242  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:11.518295  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:11.557199  447486 cri.go:89] found id: ""
	I1030 19:48:11.557229  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.557237  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:11.557252  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:11.557323  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:11.595605  447486 cri.go:89] found id: ""
	I1030 19:48:11.595638  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.595650  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:11.595664  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:11.595732  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:11.634253  447486 cri.go:89] found id: ""
	I1030 19:48:11.634281  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.634295  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:11.634301  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:11.634358  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:11.671138  447486 cri.go:89] found id: ""
	I1030 19:48:11.671167  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.671176  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:11.671183  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:11.671238  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:11.707202  447486 cri.go:89] found id: ""
	I1030 19:48:11.707228  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.707237  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:11.707243  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:11.707302  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:11.745514  447486 cri.go:89] found id: ""
	I1030 19:48:11.745549  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.745561  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:11.745570  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:11.745640  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:11.781403  447486 cri.go:89] found id: ""
	I1030 19:48:11.781438  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.781449  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:11.781458  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:11.781471  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:10.486349  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:12.980881  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:09.440261  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:11.938439  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:11.332911  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:13.334382  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:11.832934  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:11.832972  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:11.853498  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:11.853545  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:11.949365  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:11.949389  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:11.949405  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:12.033776  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:12.033823  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:14.579536  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:14.593497  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:14.593579  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:14.627853  447486 cri.go:89] found id: ""
	I1030 19:48:14.627886  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.627895  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:14.627902  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:14.627953  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:14.662356  447486 cri.go:89] found id: ""
	I1030 19:48:14.662386  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.662398  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:14.662406  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:14.662481  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:14.699334  447486 cri.go:89] found id: ""
	I1030 19:48:14.699370  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.699382  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:14.699390  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:14.699457  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:14.733884  447486 cri.go:89] found id: ""
	I1030 19:48:14.733924  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.733937  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:14.733946  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:14.734025  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:14.775208  447486 cri.go:89] found id: ""
	I1030 19:48:14.775240  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.775249  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:14.775256  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:14.775315  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:14.809663  447486 cri.go:89] found id: ""
	I1030 19:48:14.809695  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.809704  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:14.809711  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:14.809778  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:14.844963  447486 cri.go:89] found id: ""
	I1030 19:48:14.844996  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.845006  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:14.845014  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:14.845084  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:14.881236  447486 cri.go:89] found id: ""
	I1030 19:48:14.881273  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.881283  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:14.881293  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:14.881305  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:14.933792  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:14.933830  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:14.948038  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:14.948065  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:15.023497  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:15.023519  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:15.023532  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:15.105682  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:15.105741  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:14.980949  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:16.981063  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:13.940399  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:16.438545  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:15.834158  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:18.332452  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:17.646238  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:17.665366  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:17.665455  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:17.707729  447486 cri.go:89] found id: ""
	I1030 19:48:17.707783  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.707796  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:17.707805  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:17.707883  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:17.759922  447486 cri.go:89] found id: ""
	I1030 19:48:17.759959  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.759972  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:17.759980  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:17.760049  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:17.807635  447486 cri.go:89] found id: ""
	I1030 19:48:17.807671  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.807683  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:17.807695  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:17.807770  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:17.844205  447486 cri.go:89] found id: ""
	I1030 19:48:17.844236  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.844247  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:17.844255  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:17.844326  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:17.879079  447486 cri.go:89] found id: ""
	I1030 19:48:17.879113  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.879125  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:17.879134  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:17.879202  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:17.916548  447486 cri.go:89] found id: ""
	I1030 19:48:17.916584  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.916594  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:17.916601  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:17.916654  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:17.950597  447486 cri.go:89] found id: ""
	I1030 19:48:17.950626  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.950635  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:17.950640  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:17.950695  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:17.985924  447486 cri.go:89] found id: ""
	I1030 19:48:17.985957  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.985968  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:17.985980  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:17.985996  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:18.066211  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:18.066250  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:18.107228  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:18.107279  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:18.157508  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:18.157543  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:18.172208  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:18.172243  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:18.248100  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:20.748681  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:20.763369  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:20.763445  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:20.804288  447486 cri.go:89] found id: ""
	I1030 19:48:20.804323  447486 logs.go:282] 0 containers: []
	W1030 19:48:20.804336  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:20.804343  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:20.804410  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:20.838925  447486 cri.go:89] found id: ""
	I1030 19:48:20.838964  447486 logs.go:282] 0 containers: []
	W1030 19:48:20.838973  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:20.838979  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:20.839030  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:20.873560  447486 cri.go:89] found id: ""
	I1030 19:48:20.873596  447486 logs.go:282] 0 containers: []
	W1030 19:48:20.873608  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:20.873617  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:20.873681  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:20.908670  447486 cri.go:89] found id: ""
	I1030 19:48:20.908705  447486 logs.go:282] 0 containers: []
	W1030 19:48:20.908716  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:20.908723  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:20.908791  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:20.945901  447486 cri.go:89] found id: ""
	I1030 19:48:20.945929  447486 logs.go:282] 0 containers: []
	W1030 19:48:20.945937  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:20.945943  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:20.945991  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:20.980184  447486 cri.go:89] found id: ""
	I1030 19:48:20.980216  447486 logs.go:282] 0 containers: []
	W1030 19:48:20.980227  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:20.980236  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:20.980299  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:21.024243  447486 cri.go:89] found id: ""
	I1030 19:48:21.024272  447486 logs.go:282] 0 containers: []
	W1030 19:48:21.024284  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:21.024293  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:21.024366  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:21.063315  447486 cri.go:89] found id: ""
	I1030 19:48:21.063348  447486 logs.go:282] 0 containers: []
	W1030 19:48:21.063358  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:21.063370  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:21.063387  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:21.130434  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:21.130463  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:21.130480  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:21.209067  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:21.209107  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:21.251005  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:21.251035  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:21.303365  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:21.303402  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:18.981952  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:20.982372  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:18.439921  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:20.939869  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:22.940058  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:20.333700  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:22.833845  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:24.834560  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:23.817700  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:23.831060  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:23.831133  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:23.864299  447486 cri.go:89] found id: ""
	I1030 19:48:23.864334  447486 logs.go:282] 0 containers: []
	W1030 19:48:23.864346  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:23.864354  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:23.864420  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:23.900815  447486 cri.go:89] found id: ""
	I1030 19:48:23.900844  447486 logs.go:282] 0 containers: []
	W1030 19:48:23.900854  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:23.900869  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:23.900929  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:23.939888  447486 cri.go:89] found id: ""
	I1030 19:48:23.939917  447486 logs.go:282] 0 containers: []
	W1030 19:48:23.939928  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:23.939936  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:23.939999  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:23.975359  447486 cri.go:89] found id: ""
	I1030 19:48:23.975387  447486 logs.go:282] 0 containers: []
	W1030 19:48:23.975395  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:23.975401  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:23.975452  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:24.012779  447486 cri.go:89] found id: ""
	I1030 19:48:24.012819  447486 logs.go:282] 0 containers: []
	W1030 19:48:24.012832  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:24.012840  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:24.012908  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:24.048853  447486 cri.go:89] found id: ""
	I1030 19:48:24.048890  447486 logs.go:282] 0 containers: []
	W1030 19:48:24.048903  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:24.048912  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:24.048979  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:24.084744  447486 cri.go:89] found id: ""
	I1030 19:48:24.084784  447486 logs.go:282] 0 containers: []
	W1030 19:48:24.084797  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:24.084806  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:24.084860  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:24.121719  447486 cri.go:89] found id: ""
	I1030 19:48:24.121757  447486 logs.go:282] 0 containers: []
	W1030 19:48:24.121767  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:24.121777  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:24.121791  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:24.178691  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:24.178733  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:24.192885  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:24.192916  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:24.268771  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:24.268815  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:24.268832  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:24.349663  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:24.349699  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:23.481516  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:25.481700  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:27.481886  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:24.940106  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:26.940309  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:27.334165  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:29.834162  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
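	The interleaved pod_ready lines from the other runs (processes 446965, 446887, 446736) come from a wait loop that repeatedly checks whether the metrics-server pod's Ready condition has become True. A rough sketch of an equivalent check, polling with kubectl and a JSONPath filter; the namespace and pod name are taken from the log, but the kubectl/JSONPath approach and the 2-second interval are assumptions for illustration (minikube's own pod_ready.go uses client-go):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// podReady returns true when the pod's Ready condition reports status "True".
	func podReady(namespace, pod string) (bool, error) {
		out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", pod,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err != nil {
			return false, err
		}
		return strings.TrimSpace(string(out)) == "True", nil
	}

	func main() {
		// Pod name copied from one of the wait loops in the log above.
		const ns, pod = "kube-system", "metrics-server-6867b74b74-72bb5"
		for {
			ready, err := podReady(ns, pod)
			switch {
			case err != nil:
				fmt.Println("check failed:", err)
			case ready:
				fmt.Println("pod is Ready")
				return
			default:
				// Matches the repeated `has status "Ready":"False"` lines in the log.
				fmt.Printf("pod %q in %q namespace has status \"Ready\":\"False\"\n", pod, ns)
			}
			time.Sleep(2 * time.Second)
		}
	}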
	I1030 19:48:26.887325  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:26.900480  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:26.900558  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:26.936157  447486 cri.go:89] found id: ""
	I1030 19:48:26.936188  447486 logs.go:282] 0 containers: []
	W1030 19:48:26.936200  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:26.936207  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:26.936278  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:26.975580  447486 cri.go:89] found id: ""
	I1030 19:48:26.975615  447486 logs.go:282] 0 containers: []
	W1030 19:48:26.975626  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:26.975633  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:26.975705  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:27.010549  447486 cri.go:89] found id: ""
	I1030 19:48:27.010579  447486 logs.go:282] 0 containers: []
	W1030 19:48:27.010592  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:27.010600  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:27.010659  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:27.047505  447486 cri.go:89] found id: ""
	I1030 19:48:27.047541  447486 logs.go:282] 0 containers: []
	W1030 19:48:27.047553  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:27.047561  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:27.047628  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:27.083379  447486 cri.go:89] found id: ""
	I1030 19:48:27.083409  447486 logs.go:282] 0 containers: []
	W1030 19:48:27.083420  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:27.083429  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:27.083492  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:27.117912  447486 cri.go:89] found id: ""
	I1030 19:48:27.117954  447486 logs.go:282] 0 containers: []
	W1030 19:48:27.117967  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:27.117976  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:27.118049  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:27.151721  447486 cri.go:89] found id: ""
	I1030 19:48:27.151749  447486 logs.go:282] 0 containers: []
	W1030 19:48:27.151758  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:27.151765  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:27.151817  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:27.188940  447486 cri.go:89] found id: ""
	I1030 19:48:27.188981  447486 logs.go:282] 0 containers: []
	W1030 19:48:27.188989  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:27.188999  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:27.189011  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:27.243926  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:27.243960  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:27.258702  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:27.258731  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:27.326983  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:27.327023  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:27.327041  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:27.410761  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:27.410808  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:29.953219  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:29.967972  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:29.968078  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:30.003975  447486 cri.go:89] found id: ""
	I1030 19:48:30.004004  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.004014  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:30.004023  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:30.004097  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:30.041732  447486 cri.go:89] found id: ""
	I1030 19:48:30.041768  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.041780  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:30.041787  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:30.041863  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:30.078262  447486 cri.go:89] found id: ""
	I1030 19:48:30.078297  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.078308  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:30.078315  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:30.078379  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:30.116100  447486 cri.go:89] found id: ""
	I1030 19:48:30.116137  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.116149  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:30.116157  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:30.116229  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:30.150925  447486 cri.go:89] found id: ""
	I1030 19:48:30.150953  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.150964  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:30.150972  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:30.151041  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:30.192188  447486 cri.go:89] found id: ""
	I1030 19:48:30.192219  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.192230  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:30.192237  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:30.192314  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:30.231144  447486 cri.go:89] found id: ""
	I1030 19:48:30.231180  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.231192  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:30.231200  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:30.231277  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:30.271198  447486 cri.go:89] found id: ""
	I1030 19:48:30.271228  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.271242  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:30.271265  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:30.271277  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:30.322750  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:30.322792  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:30.337745  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:30.337774  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:30.417198  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:30.417224  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:30.417240  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:30.503327  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:30.503364  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:29.982893  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:32.482051  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:29.440509  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:31.939517  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:32.333571  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:34.833482  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:33.047719  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:33.062330  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:33.062395  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:33.101049  447486 cri.go:89] found id: ""
	I1030 19:48:33.101088  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.101101  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:33.101108  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:33.101175  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:33.135236  447486 cri.go:89] found id: ""
	I1030 19:48:33.135268  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.135279  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:33.135286  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:33.135357  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:33.169279  447486 cri.go:89] found id: ""
	I1030 19:48:33.169314  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.169325  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:33.169333  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:33.169401  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:33.203336  447486 cri.go:89] found id: ""
	I1030 19:48:33.203380  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.203392  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:33.203401  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:33.203470  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:33.238223  447486 cri.go:89] found id: ""
	I1030 19:48:33.238258  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.238270  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:33.238279  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:33.238345  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:33.272891  447486 cri.go:89] found id: ""
	I1030 19:48:33.272925  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.272937  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:33.272946  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:33.273014  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:33.312452  447486 cri.go:89] found id: ""
	I1030 19:48:33.312480  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.312489  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:33.312496  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:33.312547  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:33.349041  447486 cri.go:89] found id: ""
	I1030 19:48:33.349076  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.349091  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:33.349104  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:33.349130  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:33.430888  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:33.430940  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:33.469414  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:33.469444  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:33.518989  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:33.519022  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:33.532656  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:33.532690  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:33.605896  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:36.106207  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:36.120564  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:36.120646  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:36.156854  447486 cri.go:89] found id: ""
	I1030 19:48:36.156887  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.156900  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:36.156909  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:36.156988  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:36.195027  447486 cri.go:89] found id: ""
	I1030 19:48:36.195059  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.195072  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:36.195080  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:36.195150  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:36.235639  447486 cri.go:89] found id: ""
	I1030 19:48:36.235672  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.235683  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:36.235692  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:36.235758  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:36.281659  447486 cri.go:89] found id: ""
	I1030 19:48:36.281693  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.281702  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:36.281709  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:36.281762  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:36.315427  447486 cri.go:89] found id: ""
	I1030 19:48:36.315454  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.315463  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:36.315469  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:36.315531  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:36.353084  447486 cri.go:89] found id: ""
	I1030 19:48:36.353110  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.353120  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:36.353126  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:36.353197  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:36.388497  447486 cri.go:89] found id: ""
	I1030 19:48:36.388533  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.388545  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:36.388553  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:36.388616  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:36.423625  447486 cri.go:89] found id: ""
	I1030 19:48:36.423658  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.423667  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:36.423676  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:36.423691  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:36.476722  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:36.476757  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:36.490669  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:36.490700  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:36.558587  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:36.558621  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:36.558639  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:36.635606  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:36.635654  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:34.482414  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:36.981552  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:34.439796  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:36.938335  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:37.333231  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:39.333707  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:39.174007  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:39.187709  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:39.187786  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:39.226131  447486 cri.go:89] found id: ""
	I1030 19:48:39.226165  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.226177  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:39.226185  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:39.226265  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:39.265963  447486 cri.go:89] found id: ""
	I1030 19:48:39.266003  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.266016  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:39.266024  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:39.266092  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:39.302586  447486 cri.go:89] found id: ""
	I1030 19:48:39.302624  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.302637  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:39.302645  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:39.302710  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:39.347869  447486 cri.go:89] found id: ""
	I1030 19:48:39.347903  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.347916  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:39.347924  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:39.347994  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:39.384252  447486 cri.go:89] found id: ""
	I1030 19:48:39.384280  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.384288  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:39.384294  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:39.384347  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:39.418847  447486 cri.go:89] found id: ""
	I1030 19:48:39.418876  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.418885  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:39.418891  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:39.418950  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:39.458408  447486 cri.go:89] found id: ""
	I1030 19:48:39.458454  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.458467  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:39.458480  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:39.458567  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:39.493889  447486 cri.go:89] found id: ""
	I1030 19:48:39.493923  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.493934  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:39.493946  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:39.493959  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:39.548692  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:39.548746  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:39.562083  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:39.562110  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:39.633822  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:39.633845  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:39.633857  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:39.711765  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:39.711814  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:39.482010  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:41.981380  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:38.939254  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:40.940318  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:41.832456  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:43.832780  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:42.254337  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:42.268137  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:42.268202  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:42.303383  447486 cri.go:89] found id: ""
	I1030 19:48:42.303418  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.303428  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:42.303434  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:42.303501  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:42.349405  447486 cri.go:89] found id: ""
	I1030 19:48:42.349437  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.349447  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:42.349453  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:42.349504  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:42.384317  447486 cri.go:89] found id: ""
	I1030 19:48:42.384353  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.384363  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:42.384369  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:42.384424  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:42.418712  447486 cri.go:89] found id: ""
	I1030 19:48:42.418759  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.418768  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:42.418775  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:42.418833  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:42.454234  447486 cri.go:89] found id: ""
	I1030 19:48:42.454270  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.454280  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:42.454288  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:42.454362  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:42.488813  447486 cri.go:89] found id: ""
	I1030 19:48:42.488845  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.488855  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:42.488863  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:42.488929  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:42.525883  447486 cri.go:89] found id: ""
	I1030 19:48:42.525917  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.525929  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:42.525938  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:42.526006  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:42.561197  447486 cri.go:89] found id: ""
	I1030 19:48:42.561233  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.561246  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:42.561259  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:42.561275  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:42.599818  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:42.599854  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:42.654341  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:42.654382  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:42.668163  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:42.668188  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:42.739630  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:42.739659  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:42.739671  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:45.316154  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:45.330372  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:45.330454  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:45.369093  447486 cri.go:89] found id: ""
	I1030 19:48:45.369125  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.369135  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:45.369141  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:45.369192  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:45.407681  447486 cri.go:89] found id: ""
	I1030 19:48:45.407715  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.407726  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:45.407732  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:45.407787  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:45.444445  447486 cri.go:89] found id: ""
	I1030 19:48:45.444474  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.444482  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:45.444488  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:45.444539  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:45.481538  447486 cri.go:89] found id: ""
	I1030 19:48:45.481570  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.481583  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:45.481591  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:45.481654  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:45.515088  447486 cri.go:89] found id: ""
	I1030 19:48:45.515123  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.515132  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:45.515139  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:45.515195  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:45.550085  447486 cri.go:89] found id: ""
	I1030 19:48:45.550133  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.550145  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:45.550152  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:45.550214  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:45.583950  447486 cri.go:89] found id: ""
	I1030 19:48:45.583985  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.583999  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:45.584008  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:45.584082  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:45.617320  447486 cri.go:89] found id: ""
	I1030 19:48:45.617349  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.617358  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:45.617369  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:45.617389  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:45.668792  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:45.668833  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:45.683144  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:45.683178  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:45.758707  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:45.758732  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:45.758744  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:45.833807  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:45.833837  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:43.982806  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:46.480452  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:43.440702  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:45.938267  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:47.938396  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:45.833319  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:48.332420  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:48.374096  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:48.387812  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:48.387903  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:48.426958  447486 cri.go:89] found id: ""
	I1030 19:48:48.426987  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.426996  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:48.427002  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:48.427051  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:48.462216  447486 cri.go:89] found id: ""
	I1030 19:48:48.462249  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.462260  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:48.462268  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:48.462336  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:48.495666  447486 cri.go:89] found id: ""
	I1030 19:48:48.495699  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.495709  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:48.495716  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:48.495798  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:48.530653  447486 cri.go:89] found id: ""
	I1030 19:48:48.530686  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.530698  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:48.530709  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:48.530777  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:48.564788  447486 cri.go:89] found id: ""
	I1030 19:48:48.564826  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.564838  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:48.564846  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:48.564921  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:48.600735  447486 cri.go:89] found id: ""
	I1030 19:48:48.600772  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.600784  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:48.600793  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:48.600863  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:48.637063  447486 cri.go:89] found id: ""
	I1030 19:48:48.637095  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.637107  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:48.637115  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:48.637182  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:48.673279  447486 cri.go:89] found id: ""
	I1030 19:48:48.673314  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.673334  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:48.673347  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:48.673362  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:48.724239  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:48.724280  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:48.738390  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:48.738425  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:48.812130  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:48.812155  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:48.812171  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:48.896253  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:48.896298  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:51.441155  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:51.454675  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:51.454751  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:51.490464  447486 cri.go:89] found id: ""
	I1030 19:48:51.490511  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.490523  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:51.490532  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:51.490600  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:51.525364  447486 cri.go:89] found id: ""
	I1030 19:48:51.525399  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.525411  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:51.525419  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:51.525485  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:51.559028  447486 cri.go:89] found id: ""
	I1030 19:48:51.559062  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.559071  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:51.559078  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:51.559139  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:51.595188  447486 cri.go:89] found id: ""
	I1030 19:48:51.595217  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.595225  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:51.595231  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:51.595300  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:51.628987  447486 cri.go:89] found id: ""
	I1030 19:48:51.629023  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.629039  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:51.629047  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:51.629119  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:51.663257  447486 cri.go:89] found id: ""
	I1030 19:48:51.663286  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.663295  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:51.663303  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:51.663368  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:51.712562  447486 cri.go:89] found id: ""
	I1030 19:48:51.712600  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.712613  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:51.712622  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:51.712684  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:51.761730  447486 cri.go:89] found id: ""
	I1030 19:48:51.761760  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.761769  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:51.761779  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:51.761794  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:51.775595  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:51.775624  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:48:48.481851  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:50.980723  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:52.982177  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:49.939273  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:51.939972  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:50.333451  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:52.333773  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:54.835087  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	W1030 19:48:51.849120  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:51.849144  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:51.849157  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:51.931364  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:51.931403  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:51.971195  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:51.971229  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:54.525136  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:54.539137  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:54.539227  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:54.574281  447486 cri.go:89] found id: ""
	I1030 19:48:54.574316  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.574339  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:54.574348  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:54.574420  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:54.611109  447486 cri.go:89] found id: ""
	I1030 19:48:54.611149  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.611161  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:54.611170  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:54.611230  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:54.648396  447486 cri.go:89] found id: ""
	I1030 19:48:54.648428  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.648439  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:54.648447  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:54.648510  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:54.683834  447486 cri.go:89] found id: ""
	I1030 19:48:54.683871  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.683884  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:54.683892  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:54.683954  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:54.717391  447486 cri.go:89] found id: ""
	I1030 19:48:54.717421  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.717430  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:54.717436  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:54.717495  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:54.753783  447486 cri.go:89] found id: ""
	I1030 19:48:54.753812  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.753821  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:54.753827  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:54.753878  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:54.788231  447486 cri.go:89] found id: ""
	I1030 19:48:54.788270  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.788282  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:54.788291  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:54.788359  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:54.823949  447486 cri.go:89] found id: ""
	I1030 19:48:54.823989  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.824001  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:54.824014  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:54.824052  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:54.838936  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:54.838967  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:54.911785  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:54.911812  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:54.911825  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:54.993268  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:54.993302  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:55.032557  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:55.032588  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:55.481330  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:57.482183  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:53.940343  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:56.439870  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:57.333262  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:59.333560  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:57.588726  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:57.603010  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:57.603085  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:57.636499  447486 cri.go:89] found id: ""
	I1030 19:48:57.636531  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.636542  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:57.636551  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:57.636624  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:57.671698  447486 cri.go:89] found id: ""
	I1030 19:48:57.671728  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.671739  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:57.671748  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:57.671815  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:57.707387  447486 cri.go:89] found id: ""
	I1030 19:48:57.707414  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.707422  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:57.707431  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:57.707482  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:57.745404  447486 cri.go:89] found id: ""
	I1030 19:48:57.745432  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.745440  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:57.745447  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:57.745507  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:57.784874  447486 cri.go:89] found id: ""
	I1030 19:48:57.784903  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.784912  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:57.784919  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:57.784984  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:57.824663  447486 cri.go:89] found id: ""
	I1030 19:48:57.824697  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.824707  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:57.824713  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:57.824773  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:57.862542  447486 cri.go:89] found id: ""
	I1030 19:48:57.862581  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.862593  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:57.862601  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:57.862669  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:57.897901  447486 cri.go:89] found id: ""
	I1030 19:48:57.897935  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.897947  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:57.897959  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:57.897974  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:57.951898  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:57.951936  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:57.966282  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:57.966327  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:58.035515  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:58.035546  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:58.035562  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:58.114825  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:58.114876  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:00.705537  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:00.719589  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:00.719672  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:00.762299  447486 cri.go:89] found id: ""
	I1030 19:49:00.762330  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.762338  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:00.762356  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:00.762438  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:00.802228  447486 cri.go:89] found id: ""
	I1030 19:49:00.802259  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.802268  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:00.802275  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:00.802345  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:00.836531  447486 cri.go:89] found id: ""
	I1030 19:49:00.836557  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.836565  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:00.836572  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:00.836630  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:00.869332  447486 cri.go:89] found id: ""
	I1030 19:49:00.869360  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.869369  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:00.869375  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:00.869437  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:00.904643  447486 cri.go:89] found id: ""
	I1030 19:49:00.904675  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.904684  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:00.904691  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:00.904768  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:00.939020  447486 cri.go:89] found id: ""
	I1030 19:49:00.939050  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.939061  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:00.939068  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:00.939142  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:00.974586  447486 cri.go:89] found id: ""
	I1030 19:49:00.974625  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.974638  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:00.974646  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:00.974707  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:01.009337  447486 cri.go:89] found id: ""
	I1030 19:49:01.009375  447486 logs.go:282] 0 containers: []
	W1030 19:49:01.009386  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:01.009399  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:01.009416  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:01.067087  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:01.067125  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:01.081681  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:01.081713  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:01.153057  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:01.153082  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:01.153096  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:01.236113  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:01.236153  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:59.981252  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:01.981799  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:58.938430  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:00.940905  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:01.333854  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:03.334325  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:03.774056  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:03.788395  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:03.788482  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:03.823847  447486 cri.go:89] found id: ""
	I1030 19:49:03.823880  447486 logs.go:282] 0 containers: []
	W1030 19:49:03.823892  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:03.823900  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:03.823973  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:03.864776  447486 cri.go:89] found id: ""
	I1030 19:49:03.864807  447486 logs.go:282] 0 containers: []
	W1030 19:49:03.864819  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:03.864827  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:03.864890  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:03.912516  447486 cri.go:89] found id: ""
	I1030 19:49:03.912572  447486 logs.go:282] 0 containers: []
	W1030 19:49:03.912585  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:03.912593  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:03.912660  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:03.962459  447486 cri.go:89] found id: ""
	I1030 19:49:03.962509  447486 logs.go:282] 0 containers: []
	W1030 19:49:03.962521  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:03.962530  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:03.962602  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:04.019107  447486 cri.go:89] found id: ""
	I1030 19:49:04.019143  447486 logs.go:282] 0 containers: []
	W1030 19:49:04.019152  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:04.019159  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:04.019217  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:04.054016  447486 cri.go:89] found id: ""
	I1030 19:49:04.054047  447486 logs.go:282] 0 containers: []
	W1030 19:49:04.054056  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:04.054063  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:04.054140  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:04.089907  447486 cri.go:89] found id: ""
	I1030 19:49:04.089938  447486 logs.go:282] 0 containers: []
	W1030 19:49:04.089948  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:04.089955  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:04.090007  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:04.128081  447486 cri.go:89] found id: ""
	I1030 19:49:04.128110  447486 logs.go:282] 0 containers: []
	W1030 19:49:04.128118  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:04.128128  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:04.128142  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:04.182419  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:04.182462  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:04.196909  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:04.196941  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:04.267267  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:04.267298  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:04.267317  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:04.346826  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:04.346876  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:03.984259  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:06.481362  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:03.438786  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:05.938707  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:07.939642  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:05.334541  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:07.834233  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:06.887266  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:06.902462  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:06.902554  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:06.938850  447486 cri.go:89] found id: ""
	I1030 19:49:06.938880  447486 logs.go:282] 0 containers: []
	W1030 19:49:06.938891  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:06.938899  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:06.938961  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:06.983284  447486 cri.go:89] found id: ""
	I1030 19:49:06.983315  447486 logs.go:282] 0 containers: []
	W1030 19:49:06.983330  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:06.983339  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:06.983406  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:07.016332  447486 cri.go:89] found id: ""
	I1030 19:49:07.016359  447486 logs.go:282] 0 containers: []
	W1030 19:49:07.016369  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:07.016375  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:07.016428  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:07.051425  447486 cri.go:89] found id: ""
	I1030 19:49:07.051459  447486 logs.go:282] 0 containers: []
	W1030 19:49:07.051471  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:07.051480  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:07.051550  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:07.083396  447486 cri.go:89] found id: ""
	I1030 19:49:07.083429  447486 logs.go:282] 0 containers: []
	W1030 19:49:07.083437  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:07.083444  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:07.083507  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:07.116616  447486 cri.go:89] found id: ""
	I1030 19:49:07.116646  447486 logs.go:282] 0 containers: []
	W1030 19:49:07.116654  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:07.116661  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:07.116728  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:07.149219  447486 cri.go:89] found id: ""
	I1030 19:49:07.149251  447486 logs.go:282] 0 containers: []
	W1030 19:49:07.149259  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:07.149265  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:07.149318  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:07.188404  447486 cri.go:89] found id: ""
	I1030 19:49:07.188435  447486 logs.go:282] 0 containers: []
	W1030 19:49:07.188444  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:07.188454  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:07.188468  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:07.247600  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:07.247640  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:07.262196  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:07.262231  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:07.332998  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:07.333031  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:07.333048  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:07.415322  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:07.415367  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:09.958278  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:09.972983  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:09.973068  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:10.016768  447486 cri.go:89] found id: ""
	I1030 19:49:10.016801  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.016810  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:10.016818  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:10.016885  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:10.052958  447486 cri.go:89] found id: ""
	I1030 19:49:10.052992  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.053002  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:10.053009  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:10.053063  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:10.089062  447486 cri.go:89] found id: ""
	I1030 19:49:10.089094  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.089105  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:10.089120  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:10.089196  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:10.126084  447486 cri.go:89] found id: ""
	I1030 19:49:10.126114  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.126123  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:10.126130  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:10.126182  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:10.171670  447486 cri.go:89] found id: ""
	I1030 19:49:10.171702  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.171712  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:10.171720  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:10.171785  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:10.210243  447486 cri.go:89] found id: ""
	I1030 19:49:10.210285  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.210293  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:10.210300  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:10.210366  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:10.253012  447486 cri.go:89] found id: ""
	I1030 19:49:10.253056  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.253069  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:10.253078  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:10.253155  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:10.287948  447486 cri.go:89] found id: ""
	I1030 19:49:10.287999  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.288009  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:10.288021  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:10.288036  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:10.341362  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:10.341405  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:10.355769  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:10.355798  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:10.429469  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:10.429500  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:10.429518  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:10.509812  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:10.509851  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:08.488059  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:10.981606  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:12.982128  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:10.438903  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:12.939592  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:10.334087  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:12.336238  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:14.833365  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:13.053064  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:13.069063  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:13.069136  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:13.108457  447486 cri.go:89] found id: ""
	I1030 19:49:13.108492  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.108505  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:13.108513  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:13.108582  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:13.146481  447486 cri.go:89] found id: ""
	I1030 19:49:13.146523  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.146534  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:13.146542  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:13.146595  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:13.187088  447486 cri.go:89] found id: ""
	I1030 19:49:13.187118  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.187129  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:13.187137  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:13.187200  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:13.226913  447486 cri.go:89] found id: ""
	I1030 19:49:13.226948  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.226960  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:13.226968  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:13.227038  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:13.262632  447486 cri.go:89] found id: ""
	I1030 19:49:13.262661  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.262669  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:13.262676  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:13.262726  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:13.296877  447486 cri.go:89] found id: ""
	I1030 19:49:13.296906  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.296915  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:13.296922  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:13.296983  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:13.334907  447486 cri.go:89] found id: ""
	I1030 19:49:13.334939  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.334949  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:13.334956  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:13.335021  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:13.369386  447486 cri.go:89] found id: ""
	I1030 19:49:13.369430  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.369443  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:13.369456  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:13.369472  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:13.423095  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:13.423130  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:13.437039  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:13.437067  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:13.512619  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:13.512648  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:13.512663  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:13.596982  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:13.597023  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:16.135623  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:16.150407  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:16.150502  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:16.188771  447486 cri.go:89] found id: ""
	I1030 19:49:16.188811  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.188823  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:16.188832  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:16.188907  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:16.221554  447486 cri.go:89] found id: ""
	I1030 19:49:16.221589  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.221598  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:16.221604  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:16.221655  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:16.255567  447486 cri.go:89] found id: ""
	I1030 19:49:16.255595  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.255609  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:16.255616  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:16.255667  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:16.289820  447486 cri.go:89] found id: ""
	I1030 19:49:16.289855  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.289866  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:16.289874  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:16.289935  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:16.324415  447486 cri.go:89] found id: ""
	I1030 19:49:16.324449  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.324464  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:16.324471  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:16.324533  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:16.360789  447486 cri.go:89] found id: ""
	I1030 19:49:16.360825  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.360848  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:16.360856  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:16.360922  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:16.395066  447486 cri.go:89] found id: ""
	I1030 19:49:16.395093  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.395101  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:16.395107  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:16.395158  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:16.429220  447486 cri.go:89] found id: ""
	I1030 19:49:16.429261  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.429273  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:16.429286  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:16.429305  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:16.481209  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:16.481250  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:16.495353  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:16.495383  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:16.563979  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:16.564006  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:16.564022  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:16.645166  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:16.645205  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:15.481438  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:17.482846  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:15.440389  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:17.938724  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:16.833433  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:19.335773  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:19.185478  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:19.199270  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:19.199337  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:19.242426  447486 cri.go:89] found id: ""
	I1030 19:49:19.242455  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.242464  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:19.242474  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:19.242556  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:19.284061  447486 cri.go:89] found id: ""
	I1030 19:49:19.284092  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.284102  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:19.284108  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:19.284178  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:19.317373  447486 cri.go:89] found id: ""
	I1030 19:49:19.317407  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.317420  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:19.317428  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:19.317491  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:19.354222  447486 cri.go:89] found id: ""
	I1030 19:49:19.354250  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.354259  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:19.354267  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:19.354329  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:19.392948  447486 cri.go:89] found id: ""
	I1030 19:49:19.392980  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.392989  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:19.392996  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:19.393053  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:19.438023  447486 cri.go:89] found id: ""
	I1030 19:49:19.438055  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.438066  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:19.438074  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:19.438144  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:19.472179  447486 cri.go:89] found id: ""
	I1030 19:49:19.472208  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.472218  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:19.472226  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:19.472283  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:19.507164  447486 cri.go:89] found id: ""
	I1030 19:49:19.507195  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.507203  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:19.507213  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:19.507226  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:19.520898  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:19.520935  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:19.592204  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:19.592234  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:19.592263  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:19.668994  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:19.669045  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:19.707208  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:19.707240  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:19.981085  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:21.981344  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:19.939994  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:22.439696  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:21.833592  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:24.333379  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:22.263035  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:22.276999  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:22.277089  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:22.310969  447486 cri.go:89] found id: ""
	I1030 19:49:22.311006  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.311017  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:22.311026  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:22.311097  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:22.346282  447486 cri.go:89] found id: ""
	I1030 19:49:22.346311  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.346324  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:22.346332  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:22.346401  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:22.384324  447486 cri.go:89] found id: ""
	I1030 19:49:22.384354  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.384372  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:22.384381  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:22.384441  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:22.419465  447486 cri.go:89] found id: ""
	I1030 19:49:22.419498  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.419509  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:22.419518  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:22.419582  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:22.456161  447486 cri.go:89] found id: ""
	I1030 19:49:22.456196  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.456204  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:22.456211  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:22.456280  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:22.489075  447486 cri.go:89] found id: ""
	I1030 19:49:22.489102  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.489110  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:22.489119  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:22.489181  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:22.521752  447486 cri.go:89] found id: ""
	I1030 19:49:22.521780  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.521789  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:22.521796  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:22.521847  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:22.554946  447486 cri.go:89] found id: ""
	I1030 19:49:22.554985  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.554997  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:22.555010  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:22.555025  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:22.567877  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:22.567909  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:22.640062  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:22.640094  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:22.640110  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:22.714946  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:22.714985  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:22.755560  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:22.755595  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:25.306379  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:25.320883  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:25.320963  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:25.356737  447486 cri.go:89] found id: ""
	I1030 19:49:25.356771  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.356782  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:25.356791  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:25.356856  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:25.393371  447486 cri.go:89] found id: ""
	I1030 19:49:25.393409  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.393420  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:25.393429  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:25.393500  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:25.428379  447486 cri.go:89] found id: ""
	I1030 19:49:25.428411  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.428425  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:25.428433  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:25.428505  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:25.473516  447486 cri.go:89] found id: ""
	I1030 19:49:25.473551  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.473562  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:25.473572  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:25.473649  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:25.512508  447486 cri.go:89] found id: ""
	I1030 19:49:25.512535  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.512544  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:25.512550  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:25.512611  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:25.547646  447486 cri.go:89] found id: ""
	I1030 19:49:25.547691  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.547705  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:25.547713  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:25.547782  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:25.582314  447486 cri.go:89] found id: ""
	I1030 19:49:25.582347  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.582356  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:25.582364  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:25.582415  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:25.617305  447486 cri.go:89] found id: ""
	I1030 19:49:25.617343  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.617354  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:25.617367  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:25.617383  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:25.658245  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:25.658283  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:25.710559  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:25.710598  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:25.724961  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:25.724995  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:25.796252  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:25.796283  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:25.796300  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:23.984899  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:25.985999  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:24.939599  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:27.440032  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:26.334407  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:28.334588  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:28.374633  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:28.389468  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:28.389549  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:28.425747  447486 cri.go:89] found id: ""
	I1030 19:49:28.425780  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.425792  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:28.425800  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:28.425956  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:28.465221  447486 cri.go:89] found id: ""
	I1030 19:49:28.465258  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.465291  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:28.465303  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:28.465371  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:28.504184  447486 cri.go:89] found id: ""
	I1030 19:49:28.504217  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.504230  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:28.504240  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:28.504295  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:28.536198  447486 cri.go:89] found id: ""
	I1030 19:49:28.536234  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.536247  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:28.536255  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:28.536340  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:28.572194  447486 cri.go:89] found id: ""
	I1030 19:49:28.572228  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.572240  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:28.572248  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:28.572312  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:28.608794  447486 cri.go:89] found id: ""
	I1030 19:49:28.608826  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.608838  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:28.608846  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:28.608914  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:28.641664  447486 cri.go:89] found id: ""
	I1030 19:49:28.641698  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.641706  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:28.641714  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:28.641768  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:28.675756  447486 cri.go:89] found id: ""
	I1030 19:49:28.675790  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.675800  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:28.675812  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:28.675829  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:28.690203  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:28.690237  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:28.755647  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:28.755674  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:28.755690  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:28.837116  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:28.837149  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:28.877195  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:28.877232  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:31.428091  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:31.442537  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:31.442619  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:31.479911  447486 cri.go:89] found id: ""
	I1030 19:49:31.479942  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.479953  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:31.479961  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:31.480029  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:31.517015  447486 cri.go:89] found id: ""
	I1030 19:49:31.517042  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.517050  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:31.517056  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:31.517107  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:31.549858  447486 cri.go:89] found id: ""
	I1030 19:49:31.549891  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.549900  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:31.549907  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:31.549971  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:31.583490  447486 cri.go:89] found id: ""
	I1030 19:49:31.583524  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.583536  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:31.583551  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:31.583618  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:31.618270  447486 cri.go:89] found id: ""
	I1030 19:49:31.618308  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.618320  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:31.618328  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:31.618397  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:31.655416  447486 cri.go:89] found id: ""
	I1030 19:49:31.655448  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.655460  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:31.655468  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:31.655530  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:31.689708  447486 cri.go:89] found id: ""
	I1030 19:49:31.689740  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.689751  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:31.689759  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:31.689823  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:31.724179  447486 cri.go:89] found id: ""
	I1030 19:49:31.724208  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.724219  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:31.724233  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:31.724249  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:31.774900  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:31.774939  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:31.788606  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:31.788635  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:49:28.481673  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:30.980999  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:32.982429  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:29.938506  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:31.940276  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:30.834322  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:33.333091  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	W1030 19:49:31.861360  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:31.861385  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:31.861398  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:31.935856  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:31.935896  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:34.477313  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:34.491530  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:34.491597  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:34.525105  447486 cri.go:89] found id: ""
	I1030 19:49:34.525136  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.525145  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:34.525153  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:34.525215  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:34.560449  447486 cri.go:89] found id: ""
	I1030 19:49:34.560483  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.560495  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:34.560503  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:34.560558  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:34.595278  447486 cri.go:89] found id: ""
	I1030 19:49:34.595325  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.595335  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:34.595342  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:34.595395  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:34.628486  447486 cri.go:89] found id: ""
	I1030 19:49:34.628521  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.628533  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:34.628542  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:34.628614  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:34.663410  447486 cri.go:89] found id: ""
	I1030 19:49:34.663438  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.663448  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:34.663456  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:34.663520  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:34.697053  447486 cri.go:89] found id: ""
	I1030 19:49:34.697086  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.697099  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:34.697107  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:34.697178  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:34.730910  447486 cri.go:89] found id: ""
	I1030 19:49:34.730943  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.730955  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:34.730963  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:34.731034  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:34.765725  447486 cri.go:89] found id: ""
	I1030 19:49:34.765762  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.765774  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:34.765786  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:34.765807  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:34.802750  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:34.802786  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:34.853576  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:34.853614  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:34.868102  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:34.868139  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:34.939985  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:34.940015  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:34.940027  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:35.480658  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:37.481068  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:34.442576  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:36.940088  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:35.333400  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:37.334425  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:39.833330  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:37.516479  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:37.529386  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:37.529453  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:37.565889  447486 cri.go:89] found id: ""
	I1030 19:49:37.565923  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.565936  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:37.565945  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:37.566007  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:37.598771  447486 cri.go:89] found id: ""
	I1030 19:49:37.598801  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.598811  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:37.598817  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:37.598869  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:37.632678  447486 cri.go:89] found id: ""
	I1030 19:49:37.632705  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.632714  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:37.632735  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:37.632795  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:37.666642  447486 cri.go:89] found id: ""
	I1030 19:49:37.666673  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.666682  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:37.666688  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:37.666748  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:37.701203  447486 cri.go:89] found id: ""
	I1030 19:49:37.701233  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.701242  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:37.701249  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:37.701324  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:37.735614  447486 cri.go:89] found id: ""
	I1030 19:49:37.735649  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.735661  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:37.735669  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:37.735738  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:37.771381  447486 cri.go:89] found id: ""
	I1030 19:49:37.771418  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.771430  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:37.771439  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:37.771501  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:37.807870  447486 cri.go:89] found id: ""
	I1030 19:49:37.807908  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.807922  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:37.807935  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:37.807952  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:37.860334  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:37.860367  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:37.874340  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:37.874371  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:37.952874  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:37.952903  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:37.952916  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:38.045318  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:38.045356  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:40.591278  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:40.604970  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:40.605050  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:40.639839  447486 cri.go:89] found id: ""
	I1030 19:49:40.639869  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.639880  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:40.639889  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:40.639952  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:40.674046  447486 cri.go:89] found id: ""
	I1030 19:49:40.674077  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.674087  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:40.674093  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:40.674164  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:40.710759  447486 cri.go:89] found id: ""
	I1030 19:49:40.710794  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.710806  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:40.710815  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:40.710880  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:40.752439  447486 cri.go:89] found id: ""
	I1030 19:49:40.752471  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.752484  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:40.752493  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:40.752548  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:40.787985  447486 cri.go:89] found id: ""
	I1030 19:49:40.788021  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.788034  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:40.788042  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:40.788102  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:40.829282  447486 cri.go:89] found id: ""
	I1030 19:49:40.829320  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.829333  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:40.829341  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:40.829409  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:40.863911  447486 cri.go:89] found id: ""
	I1030 19:49:40.863944  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.863953  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:40.863959  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:40.864026  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:40.901239  447486 cri.go:89] found id: ""
	I1030 19:49:40.901275  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.901287  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:40.901300  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:40.901321  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:40.955283  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:40.955323  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:40.968733  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:40.968766  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:41.040213  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:41.040242  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:41.040256  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:41.125992  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:41.126035  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
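Every describe-nodes attempt in this run fails with "connection refused" on localhost:8443, which is consistent with crictl reporting no kube-apiserver container at all: kubectl has nothing to connect to. A minimal, illustrative way to confirm that from the node (the ss/curl probes and the /healthz path are standard tooling assumptions, not commands taken from this log):

	# is anything listening on the apiserver port?
	sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
	# expected to fail while the kube-apiserver container is missing
	curl -sk https://localhost:8443/healthz || echo "apiserver not reachable"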
	I1030 19:49:39.481593  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:41.483403  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:39.441009  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:41.939182  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:41.834082  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:44.332428  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:43.667949  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:43.681633  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:43.681705  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:43.725038  447486 cri.go:89] found id: ""
	I1030 19:49:43.725076  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.725085  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:43.725091  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:43.725149  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:43.761438  447486 cri.go:89] found id: ""
	I1030 19:49:43.761473  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.761486  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:43.761494  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:43.761566  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:43.795299  447486 cri.go:89] found id: ""
	I1030 19:49:43.795335  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.795347  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:43.795355  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:43.795431  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:43.830545  447486 cri.go:89] found id: ""
	I1030 19:49:43.830582  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.830594  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:43.830601  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:43.830670  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:43.867632  447486 cri.go:89] found id: ""
	I1030 19:49:43.867664  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.867676  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:43.867684  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:43.867753  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:43.901315  447486 cri.go:89] found id: ""
	I1030 19:49:43.901346  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.901355  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:43.901361  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:43.901412  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:43.934928  447486 cri.go:89] found id: ""
	I1030 19:49:43.934963  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.934975  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:43.934983  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:43.935048  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:43.975407  447486 cri.go:89] found id: ""
	I1030 19:49:43.975441  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.975451  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:43.975472  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:43.975497  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:44.019281  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:44.019310  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:44.072363  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:44.072402  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:44.085508  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:44.085538  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:44.159634  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:44.159666  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:44.159682  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:46.739662  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:46.753190  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:46.753252  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:46.790167  447486 cri.go:89] found id: ""
	I1030 19:49:46.790202  447486 logs.go:282] 0 containers: []
	W1030 19:49:46.790211  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:46.790217  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:46.790272  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:43.988689  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:46.481139  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:43.939246  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:46.438847  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:46.333066  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:48.335463  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:46.828187  447486 cri.go:89] found id: ""
	I1030 19:49:46.828221  447486 logs.go:282] 0 containers: []
	W1030 19:49:46.828230  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:46.828237  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:46.828305  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:46.865499  447486 cri.go:89] found id: ""
	I1030 19:49:46.865539  447486 logs.go:282] 0 containers: []
	W1030 19:49:46.865551  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:46.865559  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:46.865612  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:46.899591  447486 cri.go:89] found id: ""
	I1030 19:49:46.899616  447486 logs.go:282] 0 containers: []
	W1030 19:49:46.899625  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:46.899632  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:46.899681  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:46.934818  447486 cri.go:89] found id: ""
	I1030 19:49:46.934850  447486 logs.go:282] 0 containers: []
	W1030 19:49:46.934860  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:46.934868  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:46.934933  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:46.971298  447486 cri.go:89] found id: ""
	I1030 19:49:46.971328  447486 logs.go:282] 0 containers: []
	W1030 19:49:46.971340  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:46.971349  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:46.971418  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:47.010783  447486 cri.go:89] found id: ""
	I1030 19:49:47.010814  447486 logs.go:282] 0 containers: []
	W1030 19:49:47.010825  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:47.010832  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:47.010896  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:47.044343  447486 cri.go:89] found id: ""
	I1030 19:49:47.044380  447486 logs.go:282] 0 containers: []
	W1030 19:49:47.044392  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:47.044405  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:47.044421  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:47.094425  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:47.094459  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:47.110339  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:47.110368  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:47.183262  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:47.183290  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:47.183305  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:47.262611  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:47.262651  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:49.808195  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:49.821889  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:49.821963  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:49.857296  447486 cri.go:89] found id: ""
	I1030 19:49:49.857339  447486 logs.go:282] 0 containers: []
	W1030 19:49:49.857351  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:49.857359  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:49.857413  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:49.892614  447486 cri.go:89] found id: ""
	I1030 19:49:49.892648  447486 logs.go:282] 0 containers: []
	W1030 19:49:49.892660  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:49.892668  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:49.892732  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:49.929835  447486 cri.go:89] found id: ""
	I1030 19:49:49.929862  447486 logs.go:282] 0 containers: []
	W1030 19:49:49.929871  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:49.929878  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:49.929940  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:49.965341  447486 cri.go:89] found id: ""
	I1030 19:49:49.965371  447486 logs.go:282] 0 containers: []
	W1030 19:49:49.965379  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:49.965392  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:49.965449  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:50.000134  447486 cri.go:89] found id: ""
	I1030 19:49:50.000165  447486 logs.go:282] 0 containers: []
	W1030 19:49:50.000177  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:50.000188  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:50.000259  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:50.033848  447486 cri.go:89] found id: ""
	I1030 19:49:50.033876  447486 logs.go:282] 0 containers: []
	W1030 19:49:50.033885  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:50.033891  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:50.033943  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:50.073315  447486 cri.go:89] found id: ""
	I1030 19:49:50.073344  447486 logs.go:282] 0 containers: []
	W1030 19:49:50.073354  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:50.073360  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:50.073421  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:50.114232  447486 cri.go:89] found id: ""
	I1030 19:49:50.114266  447486 logs.go:282] 0 containers: []
	W1030 19:49:50.114277  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:50.114290  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:50.114311  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:50.185407  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:50.185434  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:50.185448  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:50.270447  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:50.270494  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:50.308825  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:50.308855  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:50.363376  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:50.363417  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:48.982027  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:51.482972  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:48.439801  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:50.939120  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:50.833062  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:52.833132  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:54.834352  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
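The interleaved pod_ready lines come from separate test runs (PIDs 446965, 446887, 446736), each polling its own metrics-server pod that never reports Ready. A hedged equivalent of that poll with plain kubectl; the k8s-app=metrics-server label is assumed from the standard addon manifest and does not appear in this log, while the kubeconfig path matches the one used above:

	# show the metrics-server pod and its current status
	kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get pods -l k8s-app=metrics-server
	# or block until the pod reports Ready, with a timeout like the harness uses
	kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system wait --for=condition=Ready pod -l k8s-app=metrics-server --timeout=60s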
	I1030 19:49:52.878475  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:52.892013  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:52.892088  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:52.928085  447486 cri.go:89] found id: ""
	I1030 19:49:52.928117  447486 logs.go:282] 0 containers: []
	W1030 19:49:52.928126  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:52.928132  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:52.928185  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:52.963377  447486 cri.go:89] found id: ""
	I1030 19:49:52.963413  447486 logs.go:282] 0 containers: []
	W1030 19:49:52.963426  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:52.963434  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:52.963493  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:53.000799  447486 cri.go:89] found id: ""
	I1030 19:49:53.000825  447486 logs.go:282] 0 containers: []
	W1030 19:49:53.000834  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:53.000840  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:53.000912  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:53.037429  447486 cri.go:89] found id: ""
	I1030 19:49:53.037463  447486 logs.go:282] 0 containers: []
	W1030 19:49:53.037472  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:53.037478  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:53.037534  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:53.072392  447486 cri.go:89] found id: ""
	I1030 19:49:53.072425  447486 logs.go:282] 0 containers: []
	W1030 19:49:53.072433  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:53.072446  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:53.072520  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:53.108925  447486 cri.go:89] found id: ""
	I1030 19:49:53.108957  447486 logs.go:282] 0 containers: []
	W1030 19:49:53.108970  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:53.108978  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:53.109050  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:53.145409  447486 cri.go:89] found id: ""
	I1030 19:49:53.145445  447486 logs.go:282] 0 containers: []
	W1030 19:49:53.145457  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:53.145466  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:53.145536  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:53.180756  447486 cri.go:89] found id: ""
	I1030 19:49:53.180784  447486 logs.go:282] 0 containers: []
	W1030 19:49:53.180793  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:53.180803  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:53.180817  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:53.234960  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:53.235010  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:53.249224  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:53.249255  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:53.313223  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:53.313245  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:53.313264  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:53.399715  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:53.399758  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:55.944332  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:55.961546  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:55.961616  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:56.020603  447486 cri.go:89] found id: ""
	I1030 19:49:56.020634  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.020647  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:56.020654  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:56.020725  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:56.065134  447486 cri.go:89] found id: ""
	I1030 19:49:56.065162  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.065170  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:56.065176  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:56.065239  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:56.101358  447486 cri.go:89] found id: ""
	I1030 19:49:56.101386  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.101396  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:56.101405  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:56.101473  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:56.135762  447486 cri.go:89] found id: ""
	I1030 19:49:56.135795  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.135805  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:56.135811  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:56.135863  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:56.171336  447486 cri.go:89] found id: ""
	I1030 19:49:56.171371  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.171383  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:56.171391  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:56.171461  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:56.205643  447486 cri.go:89] found id: ""
	I1030 19:49:56.205674  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.205685  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:56.205693  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:56.205759  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:56.240853  447486 cri.go:89] found id: ""
	I1030 19:49:56.240885  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.240894  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:56.240901  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:56.240973  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:56.276577  447486 cri.go:89] found id: ""
	I1030 19:49:56.276612  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.276623  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:56.276636  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:56.276651  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:56.328180  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:56.328220  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:56.341895  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:56.341923  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:56.414492  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:56.414523  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:56.414540  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:56.498439  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:56.498498  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:53.980916  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:55.983077  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:53.439070  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:54.940107  446887 pod_ready.go:82] duration metric: took 4m0.007533629s for pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace to be "Ready" ...
	E1030 19:49:54.940137  446887 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1030 19:49:54.940149  446887 pod_ready.go:39] duration metric: took 4m6.552777198s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:49:54.940170  446887 api_server.go:52] waiting for apiserver process to appear ...
	I1030 19:49:54.940206  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:54.940264  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:54.992682  446887 cri.go:89] found id: "549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f"
	I1030 19:49:54.992715  446887 cri.go:89] found id: ""
	I1030 19:49:54.992727  446887 logs.go:282] 1 containers: [549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f]
	I1030 19:49:54.992790  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:54.997251  446887 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:54.997313  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:55.034504  446887 cri.go:89] found id: "a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5"
	I1030 19:49:55.034542  446887 cri.go:89] found id: ""
	I1030 19:49:55.034552  446887 logs.go:282] 1 containers: [a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5]
	I1030 19:49:55.034616  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:55.039551  446887 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:55.039624  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:55.083294  446887 cri.go:89] found id: "87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2"
	I1030 19:49:55.083326  446887 cri.go:89] found id: ""
	I1030 19:49:55.083336  446887 logs.go:282] 1 containers: [87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2]
	I1030 19:49:55.083407  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:55.087866  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:55.087932  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:55.125250  446887 cri.go:89] found id: "0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf"
	I1030 19:49:55.125353  446887 cri.go:89] found id: ""
	I1030 19:49:55.125372  446887 logs.go:282] 1 containers: [0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf]
	I1030 19:49:55.125446  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:55.130688  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:55.130747  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:55.168792  446887 cri.go:89] found id: "2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6"
	I1030 19:49:55.168814  446887 cri.go:89] found id: ""
	I1030 19:49:55.168822  446887 logs.go:282] 1 containers: [2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6]
	I1030 19:49:55.168877  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:55.173360  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:55.173424  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:55.209566  446887 cri.go:89] found id: "ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34"
	I1030 19:49:55.209590  446887 cri.go:89] found id: ""
	I1030 19:49:55.209599  446887 logs.go:282] 1 containers: [ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34]
	I1030 19:49:55.209659  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:55.214190  446887 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:55.214263  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:55.257056  446887 cri.go:89] found id: ""
	I1030 19:49:55.257091  446887 logs.go:282] 0 containers: []
	W1030 19:49:55.257103  446887 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:55.257111  446887 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1030 19:49:55.257165  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1030 19:49:55.300194  446887 cri.go:89] found id: "60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034"
	I1030 19:49:55.300224  446887 cri.go:89] found id: "8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6"
	I1030 19:49:55.300229  446887 cri.go:89] found id: ""
	I1030 19:49:55.300238  446887 logs.go:282] 2 containers: [60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034 8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6]
	I1030 19:49:55.300290  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:55.304750  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:55.309249  446887 logs.go:123] Gathering logs for etcd [a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5] ...
	I1030 19:49:55.309276  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5"
	I1030 19:49:55.363959  446887 logs.go:123] Gathering logs for coredns [87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2] ...
	I1030 19:49:55.363994  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2"
	I1030 19:49:55.412667  446887 logs.go:123] Gathering logs for kube-scheduler [0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf] ...
	I1030 19:49:55.412703  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf"
	I1030 19:49:55.455381  446887 logs.go:123] Gathering logs for kube-proxy [2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6] ...
	I1030 19:49:55.455420  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6"
	I1030 19:49:55.494657  446887 logs.go:123] Gathering logs for kube-controller-manager [ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34] ...
	I1030 19:49:55.494689  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34"
	I1030 19:49:55.552740  446887 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:55.552773  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:55.627724  446887 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:55.627765  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:55.642263  446887 logs.go:123] Gathering logs for kube-apiserver [549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f] ...
	I1030 19:49:55.642300  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f"
	I1030 19:49:55.691079  446887 logs.go:123] Gathering logs for storage-provisioner [60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034] ...
	I1030 19:49:55.691111  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034"
	I1030 19:49:55.730111  446887 logs.go:123] Gathering logs for container status ...
	I1030 19:49:55.730151  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:55.785155  446887 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:55.785189  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:49:55.924592  446887 logs.go:123] Gathering logs for storage-provisioner [8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6] ...
	I1030 19:49:55.924633  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6"
	I1030 19:49:55.970229  446887 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:55.970267  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
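Each "Gathering logs for <component> [<id>] ..." pair above resolves a container id with crictl and then tails that container's logs. A minimal sketch of the same two-step pull for one component (kube-apiserver chosen as the example; the id is whatever crictl returns on the node):

	# Resolve the newest kube-apiserver container id, then tail its last 400 log lines.
	CID="$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n 1)"
	[ -n "${CID}" ] && sudo /usr/bin/crictl logs --tail 400 "${CID}"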
	I1030 19:49:57.333378  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:59.334394  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:59.039071  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:59.053648  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:59.053722  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:59.097620  447486 cri.go:89] found id: ""
	I1030 19:49:59.097650  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.097661  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:59.097669  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:59.097738  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:59.139136  447486 cri.go:89] found id: ""
	I1030 19:49:59.139176  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.139188  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:59.139199  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:59.139270  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:59.180322  447486 cri.go:89] found id: ""
	I1030 19:49:59.180361  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.180371  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:59.180384  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:59.180453  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:59.217374  447486 cri.go:89] found id: ""
	I1030 19:49:59.217422  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.217434  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:59.217443  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:59.217498  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:59.257857  447486 cri.go:89] found id: ""
	I1030 19:49:59.257884  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.257894  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:59.257901  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:59.257968  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:59.297679  447486 cri.go:89] found id: ""
	I1030 19:49:59.297713  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.297724  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:59.297733  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:59.297795  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:59.341469  447486 cri.go:89] found id: ""
	I1030 19:49:59.341499  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.341509  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:59.341517  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:59.341587  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:59.381677  447486 cri.go:89] found id: ""
	I1030 19:49:59.381704  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.381713  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:59.381723  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:59.381735  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:59.441396  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:59.441428  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:59.457105  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:59.457139  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:59.532023  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:59.532051  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:59.532064  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:59.621685  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:59.621720  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:58.481425  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:00.481912  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:02.482130  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:59.010542  446887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:59.027463  446887 api_server.go:72] duration metric: took 4m17.923507495s to wait for apiserver process to appear ...
	I1030 19:49:59.027488  446887 api_server.go:88] waiting for apiserver healthz status ...
	I1030 19:49:59.027524  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:59.027571  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:59.066364  446887 cri.go:89] found id: "549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f"
	I1030 19:49:59.066391  446887 cri.go:89] found id: ""
	I1030 19:49:59.066401  446887 logs.go:282] 1 containers: [549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f]
	I1030 19:49:59.066463  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.072454  446887 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:59.072535  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:59.118043  446887 cri.go:89] found id: "a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5"
	I1030 19:49:59.118072  446887 cri.go:89] found id: ""
	I1030 19:49:59.118081  446887 logs.go:282] 1 containers: [a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5]
	I1030 19:49:59.118142  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.122806  446887 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:59.122883  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:59.167475  446887 cri.go:89] found id: "87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2"
	I1030 19:49:59.167500  446887 cri.go:89] found id: ""
	I1030 19:49:59.167511  446887 logs.go:282] 1 containers: [87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2]
	I1030 19:49:59.167577  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.172181  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:59.172255  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:59.210384  446887 cri.go:89] found id: "0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf"
	I1030 19:49:59.210411  446887 cri.go:89] found id: ""
	I1030 19:49:59.210419  446887 logs.go:282] 1 containers: [0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf]
	I1030 19:49:59.210473  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.216032  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:59.216114  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:59.269770  446887 cri.go:89] found id: "2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6"
	I1030 19:49:59.269791  446887 cri.go:89] found id: ""
	I1030 19:49:59.269799  446887 logs.go:282] 1 containers: [2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6]
	I1030 19:49:59.269851  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.274161  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:59.274239  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:59.313907  446887 cri.go:89] found id: "ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34"
	I1030 19:49:59.313936  446887 cri.go:89] found id: ""
	I1030 19:49:59.313946  446887 logs.go:282] 1 containers: [ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34]
	I1030 19:49:59.314019  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.320687  446887 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:59.320766  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:59.367710  446887 cri.go:89] found id: ""
	I1030 19:49:59.367740  446887 logs.go:282] 0 containers: []
	W1030 19:49:59.367752  446887 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:59.367759  446887 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1030 19:49:59.367826  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1030 19:49:59.422716  446887 cri.go:89] found id: "60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034"
	I1030 19:49:59.422744  446887 cri.go:89] found id: "8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6"
	I1030 19:49:59.422750  446887 cri.go:89] found id: ""
	I1030 19:49:59.422763  446887 logs.go:282] 2 containers: [60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034 8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6]
	I1030 19:49:59.422827  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.428399  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.432404  446887 logs.go:123] Gathering logs for storage-provisioner [8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6] ...
	I1030 19:49:59.432429  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6"
	I1030 19:49:59.475798  446887 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:59.475839  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:59.548960  446887 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:59.548998  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:59.566839  446887 logs.go:123] Gathering logs for kube-proxy [2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6] ...
	I1030 19:49:59.566870  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6"
	I1030 19:49:59.606181  446887 logs.go:123] Gathering logs for kube-controller-manager [ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34] ...
	I1030 19:49:59.606210  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34"
	I1030 19:49:59.670134  446887 logs.go:123] Gathering logs for storage-provisioner [60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034] ...
	I1030 19:49:59.670170  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034"
	I1030 19:49:59.709224  446887 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:59.709253  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:00.132147  446887 logs.go:123] Gathering logs for container status ...
	I1030 19:50:00.132194  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:50:00.181124  446887 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:00.181171  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:50:00.306545  446887 logs.go:123] Gathering logs for kube-apiserver [549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f] ...
	I1030 19:50:00.306585  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f"
	I1030 19:50:00.352129  446887 logs.go:123] Gathering logs for etcd [a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5] ...
	I1030 19:50:00.352169  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5"
	I1030 19:50:00.398083  446887 logs.go:123] Gathering logs for coredns [87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2] ...
	I1030 19:50:00.398119  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2"
	I1030 19:50:00.439813  446887 logs.go:123] Gathering logs for kube-scheduler [0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf] ...
	I1030 19:50:00.439851  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf"
	I1030 19:50:02.978477  446887 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8444/healthz ...
	I1030 19:50:02.983776  446887 api_server.go:279] https://192.168.39.92:8444/healthz returned 200:
	ok
	I1030 19:50:02.984791  446887 api_server.go:141] control plane version: v1.31.2
	I1030 19:50:02.984814  446887 api_server.go:131] duration metric: took 3.957319689s to wait for apiserver health ...
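The healthz wait above is a plain HTTPS GET against the apiserver; a one-line re-run of the same probe, with the endpoint https://192.168.39.92:8444/healthz taken from the log and -k used only because the apiserver certificate is minikube-local:

	# Expect the literal response "ok", matching the 200 logged above.
	curl -sk https://192.168.39.92:8444/healthz && echo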
	I1030 19:50:02.984822  446887 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 19:50:02.984844  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:50:02.984902  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:50:03.024715  446887 cri.go:89] found id: "549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f"
	I1030 19:50:03.024745  446887 cri.go:89] found id: ""
	I1030 19:50:03.024754  446887 logs.go:282] 1 containers: [549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f]
	I1030 19:50:03.024820  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.029121  446887 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:50:03.029188  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:50:03.064462  446887 cri.go:89] found id: "a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5"
	I1030 19:50:03.064489  446887 cri.go:89] found id: ""
	I1030 19:50:03.064500  446887 logs.go:282] 1 containers: [a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5]
	I1030 19:50:03.064564  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.068587  446887 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:50:03.068665  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:50:03.106880  446887 cri.go:89] found id: "87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2"
	I1030 19:50:03.106902  446887 cri.go:89] found id: ""
	I1030 19:50:03.106910  446887 logs.go:282] 1 containers: [87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2]
	I1030 19:50:03.106978  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.111313  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:50:03.111388  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:50:03.155761  446887 cri.go:89] found id: "0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf"
	I1030 19:50:03.155791  446887 cri.go:89] found id: ""
	I1030 19:50:03.155801  446887 logs.go:282] 1 containers: [0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf]
	I1030 19:50:03.155864  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.160616  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:50:03.160686  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:50:03.199028  446887 cri.go:89] found id: "2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6"
	I1030 19:50:03.199063  446887 cri.go:89] found id: ""
	I1030 19:50:03.199074  446887 logs.go:282] 1 containers: [2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6]
	I1030 19:50:03.199149  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.203348  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:50:03.203414  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:50:03.257739  446887 cri.go:89] found id: "ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34"
	I1030 19:50:03.257769  446887 cri.go:89] found id: ""
	I1030 19:50:03.257780  446887 logs.go:282] 1 containers: [ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34]
	I1030 19:50:03.257845  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.263357  446887 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:50:03.263417  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:50:03.309752  446887 cri.go:89] found id: ""
	I1030 19:50:03.309779  446887 logs.go:282] 0 containers: []
	W1030 19:50:03.309787  446887 logs.go:284] No container was found matching "kindnet"
	I1030 19:50:03.309793  446887 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1030 19:50:03.309843  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1030 19:50:03.351570  446887 cri.go:89] found id: "60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034"
	I1030 19:50:03.351593  446887 cri.go:89] found id: "8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6"
	I1030 19:50:03.351597  446887 cri.go:89] found id: ""
	I1030 19:50:03.351605  446887 logs.go:282] 2 containers: [60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034 8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6]
	I1030 19:50:03.351656  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.364414  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.369070  446887 logs.go:123] Gathering logs for dmesg ...
	I1030 19:50:03.369097  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:50:03.385129  446887 logs.go:123] Gathering logs for kube-apiserver [549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f] ...
	I1030 19:50:03.385161  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f"
	I1030 19:50:01.833117  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:04.334645  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:02.170623  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:50:02.184885  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:50:02.184975  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:50:02.223811  447486 cri.go:89] found id: ""
	I1030 19:50:02.223841  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.223849  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:50:02.223856  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:50:02.223908  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:50:02.260454  447486 cri.go:89] found id: ""
	I1030 19:50:02.260481  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.260491  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:50:02.260497  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:50:02.260554  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:50:02.296542  447486 cri.go:89] found id: ""
	I1030 19:50:02.296569  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.296577  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:50:02.296583  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:50:02.296631  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:50:02.332168  447486 cri.go:89] found id: ""
	I1030 19:50:02.332199  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.332211  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:50:02.332219  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:50:02.332287  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:50:02.366539  447486 cri.go:89] found id: ""
	I1030 19:50:02.366575  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.366586  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:50:02.366595  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:50:02.366659  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:50:02.401859  447486 cri.go:89] found id: ""
	I1030 19:50:02.401894  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.401915  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:50:02.401923  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:50:02.401991  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:50:02.446061  447486 cri.go:89] found id: ""
	I1030 19:50:02.446097  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.446108  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:50:02.446116  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:50:02.446181  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:50:02.488233  447486 cri.go:89] found id: ""
	I1030 19:50:02.488257  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.488265  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:50:02.488274  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:50:02.488294  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:50:02.544517  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:50:02.544554  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:50:02.558143  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:02.558179  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:50:02.628679  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:50:02.628706  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:50:02.628723  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:02.710246  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:50:02.710293  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:50:05.254846  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:50:05.269536  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:50:05.269599  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:50:05.303724  447486 cri.go:89] found id: ""
	I1030 19:50:05.303753  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.303761  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:50:05.303767  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:50:05.303819  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:50:05.339268  447486 cri.go:89] found id: ""
	I1030 19:50:05.339301  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.339322  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:50:05.339330  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:50:05.339405  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:50:05.375892  447486 cri.go:89] found id: ""
	I1030 19:50:05.375923  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.375930  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:50:05.375936  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:50:05.375988  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:50:05.413197  447486 cri.go:89] found id: ""
	I1030 19:50:05.413232  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.413243  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:50:05.413252  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:50:05.413329  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:50:05.452095  447486 cri.go:89] found id: ""
	I1030 19:50:05.452122  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.452130  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:50:05.452137  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:50:05.452193  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:50:05.490694  447486 cri.go:89] found id: ""
	I1030 19:50:05.490731  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.490744  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:50:05.490753  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:50:05.490808  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:50:05.523961  447486 cri.go:89] found id: ""
	I1030 19:50:05.523992  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.524001  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:50:05.524008  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:50:05.524060  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:50:05.558631  447486 cri.go:89] found id: ""
	I1030 19:50:05.558664  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.558673  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:50:05.558684  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:50:05.558699  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:50:05.596929  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:50:05.596958  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:50:05.647294  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:50:05.647332  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:50:05.661349  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:05.661377  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:50:05.730268  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:50:05.730299  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:50:05.730323  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:03.434675  446887 logs.go:123] Gathering logs for coredns [87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2] ...
	I1030 19:50:03.434708  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2"
	I1030 19:50:03.474767  446887 logs.go:123] Gathering logs for storage-provisioner [8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6] ...
	I1030 19:50:03.474803  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6"
	I1030 19:50:03.510301  446887 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:50:03.510331  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:03.887871  446887 logs.go:123] Gathering logs for storage-provisioner [60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034] ...
	I1030 19:50:03.887912  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034"
	I1030 19:50:03.930529  446887 logs.go:123] Gathering logs for container status ...
	I1030 19:50:03.930563  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:50:03.971064  446887 logs.go:123] Gathering logs for kubelet ...
	I1030 19:50:03.971102  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:50:04.040593  446887 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:04.040632  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:50:04.157377  446887 logs.go:123] Gathering logs for etcd [a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5] ...
	I1030 19:50:04.157418  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5"
	I1030 19:50:04.205779  446887 logs.go:123] Gathering logs for kube-scheduler [0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf] ...
	I1030 19:50:04.205816  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf"
	I1030 19:50:04.251434  446887 logs.go:123] Gathering logs for kube-proxy [2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6] ...
	I1030 19:50:04.251470  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6"
	I1030 19:50:04.288713  446887 logs.go:123] Gathering logs for kube-controller-manager [ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34] ...
	I1030 19:50:04.288747  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34"
	I1030 19:50:06.849298  446887 system_pods.go:59] 8 kube-system pods found
	I1030 19:50:06.849329  446887 system_pods.go:61] "coredns-7c65d6cfc9-9w8m8" [d285a845-e17e-4b87-837b-167dbd29f090] Running
	I1030 19:50:06.849334  446887 system_pods.go:61] "etcd-default-k8s-diff-port-768989" [400eedbb-ea1d-47a7-b342-ad29c8851552] Running
	I1030 19:50:06.849340  446887 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-768989" [40389878-99e8-4ae1-899b-a61fc3b7100b] Running
	I1030 19:50:06.849352  446887 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-768989" [4d4ca5b4-d36d-4f69-9712-b7544c4c9a43] Running
	I1030 19:50:06.849358  446887 system_pods.go:61] "kube-proxy-tsr5q" [60ad5830-1638-4209-a2b3-ef78b1df3d34] Running
	I1030 19:50:06.849367  446887 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-768989" [af4c921c-9ea2-4bf3-84c2-8748c3c210bb] Running
	I1030 19:50:06.849373  446887 system_pods.go:61] "metrics-server-6867b74b74-t85rd" [8e162c99-2a94-4340-abe9-f1b312980444] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:50:06.849377  446887 system_pods.go:61] "storage-provisioner" [76805df9-1fbf-468d-a909-3b7c4f889e11] Running
	I1030 19:50:06.849384  446887 system_pods.go:74] duration metric: took 3.864557334s to wait for pod list to return data ...
	I1030 19:50:06.849394  446887 default_sa.go:34] waiting for default service account to be created ...
	I1030 19:50:06.852015  446887 default_sa.go:45] found service account: "default"
	I1030 19:50:06.852037  446887 default_sa.go:55] duration metric: took 2.63686ms for default service account to be created ...
	I1030 19:50:06.852046  446887 system_pods.go:116] waiting for k8s-apps to be running ...
	I1030 19:50:06.856920  446887 system_pods.go:86] 8 kube-system pods found
	I1030 19:50:06.856945  446887 system_pods.go:89] "coredns-7c65d6cfc9-9w8m8" [d285a845-e17e-4b87-837b-167dbd29f090] Running
	I1030 19:50:06.856953  446887 system_pods.go:89] "etcd-default-k8s-diff-port-768989" [400eedbb-ea1d-47a7-b342-ad29c8851552] Running
	I1030 19:50:06.856959  446887 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-768989" [40389878-99e8-4ae1-899b-a61fc3b7100b] Running
	I1030 19:50:06.856966  446887 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-768989" [4d4ca5b4-d36d-4f69-9712-b7544c4c9a43] Running
	I1030 19:50:06.856972  446887 system_pods.go:89] "kube-proxy-tsr5q" [60ad5830-1638-4209-a2b3-ef78b1df3d34] Running
	I1030 19:50:06.856979  446887 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-768989" [af4c921c-9ea2-4bf3-84c2-8748c3c210bb] Running
	I1030 19:50:06.856996  446887 system_pods.go:89] "metrics-server-6867b74b74-t85rd" [8e162c99-2a94-4340-abe9-f1b312980444] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:50:06.857005  446887 system_pods.go:89] "storage-provisioner" [76805df9-1fbf-468d-a909-3b7c4f889e11] Running
	I1030 19:50:06.857015  446887 system_pods.go:126] duration metric: took 4.962745ms to wait for k8s-apps to be running ...
	I1030 19:50:06.857025  446887 system_svc.go:44] waiting for kubelet service to be running ....
	I1030 19:50:06.857086  446887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 19:50:06.874176  446887 system_svc.go:56] duration metric: took 17.144628ms WaitForService to wait for kubelet
	I1030 19:50:06.874206  446887 kubeadm.go:582] duration metric: took 4m25.770253397s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 19:50:06.874230  446887 node_conditions.go:102] verifying NodePressure condition ...
	I1030 19:50:06.876962  446887 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 19:50:06.876987  446887 node_conditions.go:123] node cpu capacity is 2
	I1030 19:50:06.877004  446887 node_conditions.go:105] duration metric: took 2.768174ms to run NodePressure ...
	I1030 19:50:06.877025  446887 start.go:241] waiting for startup goroutines ...
	I1030 19:50:06.877034  446887 start.go:246] waiting for cluster config update ...
	I1030 19:50:06.877070  446887 start.go:255] writing updated cluster config ...
	I1030 19:50:06.877355  446887 ssh_runner.go:195] Run: rm -f paused
	I1030 19:50:06.927147  446887 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1030 19:50:06.929103  446887 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-768989" cluster and "default" namespace by default
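Once the "Done!" line above is printed, kubectl's current context points at this cluster; a quick, hedged sanity check using the context name taken from that line:

	kubectl --context default-k8s-diff-port-768989 get nodes
	kubectl --context default-k8s-diff-port-768989 -n kube-system get pods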
	I1030 19:50:04.981923  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:06.982630  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:06.834029  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:08.834616  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:08.312167  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:50:08.327121  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:50:08.327206  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:50:08.364871  447486 cri.go:89] found id: ""
	I1030 19:50:08.364905  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.364916  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:50:08.364924  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:50:08.364982  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:50:08.399179  447486 cri.go:89] found id: ""
	I1030 19:50:08.399215  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.399225  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:50:08.399231  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:50:08.399286  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:50:08.434308  447486 cri.go:89] found id: ""
	I1030 19:50:08.434340  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.434350  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:50:08.434356  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:50:08.434409  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:50:08.477152  447486 cri.go:89] found id: ""
	I1030 19:50:08.477184  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.477193  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:50:08.477204  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:50:08.477274  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:50:08.513678  447486 cri.go:89] found id: ""
	I1030 19:50:08.513706  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.513716  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:50:08.513725  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:50:08.513789  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:50:08.551427  447486 cri.go:89] found id: ""
	I1030 19:50:08.551459  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.551478  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:50:08.551485  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:50:08.551550  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:50:08.584224  447486 cri.go:89] found id: ""
	I1030 19:50:08.584260  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.584272  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:50:08.584282  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:50:08.584351  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:50:08.617603  447486 cri.go:89] found id: ""
	I1030 19:50:08.617638  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.617649  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:50:08.617660  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:08.617674  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:50:08.694201  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:50:08.694229  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:50:08.694247  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:08.775457  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:50:08.775500  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:50:08.816452  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:50:08.816496  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:50:08.868077  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:50:08.868114  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:50:11.383130  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:50:11.397672  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:50:11.397758  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:50:11.431923  447486 cri.go:89] found id: ""
	I1030 19:50:11.431959  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.431971  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:50:11.431980  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:50:11.432050  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:50:11.466959  447486 cri.go:89] found id: ""
	I1030 19:50:11.466996  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.467009  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:50:11.467018  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:50:11.467093  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:50:11.506399  447486 cri.go:89] found id: ""
	I1030 19:50:11.506425  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.506437  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:50:11.506444  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:50:11.506529  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:50:11.538606  447486 cri.go:89] found id: ""
	I1030 19:50:11.538635  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.538643  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:50:11.538649  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:50:11.538700  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:50:11.573265  447486 cri.go:89] found id: ""
	I1030 19:50:11.573296  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.573304  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:50:11.573310  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:50:11.573364  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:50:11.608522  447486 cri.go:89] found id: ""
	I1030 19:50:11.608549  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.608558  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:50:11.608569  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:50:11.608629  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:50:11.639758  447486 cri.go:89] found id: ""
	I1030 19:50:11.639784  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.639792  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:50:11.639797  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:50:11.639846  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:50:11.673381  447486 cri.go:89] found id: ""
	I1030 19:50:11.673414  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.673426  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:50:11.673439  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:50:11.673454  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:50:11.727368  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:50:11.727414  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:50:11.741267  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:11.741301  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:50:09.481159  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:11.483339  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:11.334468  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:13.832615  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	W1030 19:50:11.808126  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:50:11.808158  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:50:11.808174  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:11.888676  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:50:11.888713  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
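With no containers to inspect, the fallback sweep shells out for the kubelet and CRI-O journals, a filtered dmesg, a node description and raw container status; the describe-nodes step fails because nothing answers on localhost:8443. A rough sketch of running that same command list, where the commands are taken verbatim from the log and the Go wrapper around them is an assumption for illustration only:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Each entry mirrors one "Gathering logs for ..." step above; all are
	// best-effort, so a failure (such as the refused connection on :8443)
	// is reported and the sweep moves on.
	cmds := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"describe nodes":   "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
		"CRI-O":            "sudo journalctl -u crio -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range cmds {
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			fmt.Printf("failed %s: %v\n%s\n", name, err, out)
		} else {
			fmt.Printf("gathered %s (%d bytes)\n", name, len(out))
		}
	}
}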
	I1030 19:50:14.431637  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:50:14.445315  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:50:14.445392  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:50:14.482059  447486 cri.go:89] found id: ""
	I1030 19:50:14.482097  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.482110  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:50:14.482118  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:50:14.482186  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:50:14.520802  447486 cri.go:89] found id: ""
	I1030 19:50:14.520834  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.520843  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:50:14.520849  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:50:14.520900  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:50:14.559965  447486 cri.go:89] found id: ""
	I1030 19:50:14.559996  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.560006  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:50:14.560012  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:50:14.560062  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:50:14.601831  447486 cri.go:89] found id: ""
	I1030 19:50:14.601865  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.601875  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:50:14.601881  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:50:14.601932  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:50:14.635307  447486 cri.go:89] found id: ""
	I1030 19:50:14.635339  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.635348  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:50:14.635355  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:50:14.635418  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:50:14.668618  447486 cri.go:89] found id: ""
	I1030 19:50:14.668648  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.668657  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:50:14.668664  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:50:14.668726  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:50:14.702597  447486 cri.go:89] found id: ""
	I1030 19:50:14.702633  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.702644  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:50:14.702653  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:50:14.702715  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:50:14.736860  447486 cri.go:89] found id: ""
	I1030 19:50:14.736899  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.736911  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:50:14.736925  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:50:14.736942  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:14.822015  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:50:14.822060  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:50:14.860153  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:50:14.860195  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:50:14.912230  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:50:14.912269  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:50:14.927032  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:14.927067  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:50:14.994401  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:50:13.975124  446965 pod_ready.go:82] duration metric: took 4m0.000158179s for pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace to be "Ready" ...
	E1030 19:50:13.975173  446965 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace to be "Ready" (will not retry!)
	I1030 19:50:13.975201  446965 pod_ready.go:39] duration metric: took 4m14.686087419s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:50:13.975238  446965 kubeadm.go:597] duration metric: took 4m22.157012059s to restartPrimaryControlPlane
	W1030 19:50:13.975313  446965 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1030 19:50:13.975366  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1030 19:50:15.833986  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:17.835468  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:17.494865  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:50:17.509934  447486 kubeadm.go:597] duration metric: took 4m3.074434895s to restartPrimaryControlPlane
	W1030 19:50:17.510016  447486 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1030 19:50:17.510051  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1030 19:50:18.496415  447486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 19:50:18.512328  447486 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 19:50:18.522293  447486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:50:18.532752  447486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:50:18.532772  447486 kubeadm.go:157] found existing configuration files:
	
	I1030 19:50:18.532823  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 19:50:18.542501  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:50:18.542560  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:50:18.552660  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 19:50:18.562585  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:50:18.562649  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:50:18.572321  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 19:50:18.581633  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:50:18.581689  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:50:18.592770  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 19:50:18.602414  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:50:18.602477  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
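The alternating grep and rm runs above are the stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443, and since kubeadm reset removed them all, every grep exits with status 2 and the rm -f is a no-op. A small sketch of that check under the same paths and endpoint; the Go wrapper itself is illustrative:

package main

import (
	"fmt"
	"os"
	"strings"
)

const endpoint = "https://control-plane.minikube.internal:8443"

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		// Missing file or wrong endpoint: remove it so kubeadm regenerates it.
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			_ = os.Remove(f) // ignore "not exist", mirroring rm -f
			continue
		}
		fmt.Printf("%s already points at %s, keeping it\n", f, endpoint)
	}
}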
	I1030 19:50:18.612334  447486 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1030 19:50:18.844753  447486 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1030 19:50:20.333715  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:22.832817  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:24.833349  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:27.332723  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:29.335009  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:31.832584  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:33.834506  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:36.333902  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:38.833159  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:40.157555  446965 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.182163055s)
	I1030 19:50:40.157637  446965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 19:50:40.174413  446965 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 19:50:40.184817  446965 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:50:40.195446  446965 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:50:40.195475  446965 kubeadm.go:157] found existing configuration files:
	
	I1030 19:50:40.195527  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 19:50:40.205509  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:50:40.205575  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:50:40.217343  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 19:50:40.227666  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:50:40.227729  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:50:40.237594  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 19:50:40.247151  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:50:40.247209  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:50:40.256854  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 19:50:40.266306  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:50:40.266379  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1030 19:50:40.276409  446965 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1030 19:50:40.322080  446965 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1030 19:50:40.322174  446965 kubeadm.go:310] [preflight] Running pre-flight checks
	I1030 19:50:40.433056  446965 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1030 19:50:40.433251  446965 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1030 19:50:40.433390  446965 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1030 19:50:40.445085  446965 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1030 19:50:40.447192  446965 out.go:235]   - Generating certificates and keys ...
	I1030 19:50:40.447301  446965 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1030 19:50:40.447395  446965 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1030 19:50:40.447512  446965 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1030 19:50:40.447600  446965 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1030 19:50:40.447735  446965 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1030 19:50:40.447825  446965 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1030 19:50:40.447912  446965 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1030 19:50:40.447999  446965 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1030 19:50:40.448108  446965 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1030 19:50:40.448208  446965 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1030 19:50:40.448266  446965 kubeadm.go:310] [certs] Using the existing "sa" key
	I1030 19:50:40.448345  446965 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1030 19:50:40.590735  446965 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1030 19:50:40.714139  446965 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1030 19:50:40.808334  446965 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1030 19:50:40.940687  446965 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1030 19:50:41.085266  446965 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1030 19:50:41.085840  446965 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1030 19:50:41.088415  446965 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1030 19:50:41.090229  446965 out.go:235]   - Booting up control plane ...
	I1030 19:50:41.090349  446965 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1030 19:50:41.090466  446965 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1030 19:50:41.090573  446965 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1030 19:50:41.112262  446965 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1030 19:50:41.118809  446965 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1030 19:50:41.118919  446965 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1030 19:50:41.243915  446965 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1030 19:50:41.244093  446965 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1030 19:50:41.745362  446965 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.630697ms
	I1030 19:50:41.745513  446965 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1030 19:50:40.834005  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:42.834286  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:46.748431  446965 kubeadm.go:310] [api-check] The API server is healthy after 5.001587935s
	I1030 19:50:46.762271  446965 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1030 19:50:46.781785  446965 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1030 19:50:46.806338  446965 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1030 19:50:46.806613  446965 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-042402 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1030 19:50:46.819762  446965 kubeadm.go:310] [bootstrap-token] Using token: k711fn.1we2gia9o31jm3ip
	I1030 19:50:46.821026  446965 out.go:235]   - Configuring RBAC rules ...
	I1030 19:50:46.821137  446965 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1030 19:50:46.827537  446965 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1030 19:50:46.836653  446965 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1030 19:50:46.844891  446965 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1030 19:50:46.848423  446965 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1030 19:50:46.851674  446965 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1030 19:50:47.157946  446965 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1030 19:50:47.615774  446965 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1030 19:50:48.154429  446965 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1030 19:50:48.159547  446965 kubeadm.go:310] 
	I1030 19:50:48.159636  446965 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1030 19:50:48.159648  446965 kubeadm.go:310] 
	I1030 19:50:48.159762  446965 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1030 19:50:48.159776  446965 kubeadm.go:310] 
	I1030 19:50:48.159806  446965 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1030 19:50:48.159880  446965 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1030 19:50:48.159934  446965 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1030 19:50:48.159944  446965 kubeadm.go:310] 
	I1030 19:50:48.160029  446965 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1030 19:50:48.160040  446965 kubeadm.go:310] 
	I1030 19:50:48.160123  446965 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1030 19:50:48.160154  446965 kubeadm.go:310] 
	I1030 19:50:48.160242  446965 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1030 19:50:48.160351  446965 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1030 19:50:48.160440  446965 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1030 19:50:48.160450  446965 kubeadm.go:310] 
	I1030 19:50:48.160570  446965 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1030 19:50:48.160652  446965 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1030 19:50:48.160660  446965 kubeadm.go:310] 
	I1030 19:50:48.160729  446965 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token k711fn.1we2gia9o31jm3ip \
	I1030 19:50:48.160818  446965 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b2143b9018fc79be6f35b8981f64b262448bd09f7227405c8dafa29481e81491 \
	I1030 19:50:48.160838  446965 kubeadm.go:310] 	--control-plane 
	I1030 19:50:48.160846  446965 kubeadm.go:310] 
	I1030 19:50:48.160943  446965 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1030 19:50:48.160955  446965 kubeadm.go:310] 
	I1030 19:50:48.161065  446965 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token k711fn.1we2gia9o31jm3ip \
	I1030 19:50:48.161205  446965 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b2143b9018fc79be6f35b8981f64b262448bd09f7227405c8dafa29481e81491 
	I1030 19:50:48.162302  446965 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1030 19:50:48.162390  446965 cni.go:84] Creating CNI manager for ""
	I1030 19:50:48.162408  446965 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:50:48.164041  446965 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1030 19:50:45.333255  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:47.334686  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:49.832993  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:48.165318  446965 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1030 19:50:48.176702  446965 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
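The two steps above install the bridge CNI configuration: the conf directory is created and a 496-byte conflist is copied to /etc/cni/net.d/1-k8s.conflist. The log does not show the file contents, so the conflist in this sketch is only a conventional bridge plus host-local example; the name, version, subnet and every other field value are assumptions, not the bytes minikube actually writes:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Illustrative bridge CNI config; the real /etc/cni/net.d/1-k8s.conflist
	// written by minikube may differ in name, version and IPAM ranges.
	conflist := `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`
	dir := "/etc/cni/net.d"
	if err := os.MkdirAll(dir, 0o755); err != nil {
		fmt.Println("mkdir failed:", err)
		return
	}
	path := filepath.Join(dir, "1-k8s.conflist")
	if err := os.WriteFile(path, []byte(conflist), 0o644); err != nil {
		fmt.Println("write failed:", err)
		return
	}
	fmt.Println("wrote", path)
}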
	I1030 19:50:48.199681  446965 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1030 19:50:48.199776  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:48.199840  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-042402 minikube.k8s.io/updated_at=2024_10_30T19_50_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0 minikube.k8s.io/name=embed-certs-042402 minikube.k8s.io/primary=true
	I1030 19:50:48.226617  446965 ops.go:34] apiserver oom_adj: -16
	I1030 19:50:48.404620  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:48.905366  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:49.405663  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:49.904925  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:50.405082  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:50.905099  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:51.404860  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:51.905534  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:52.405432  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:52.905289  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:53.010770  446965 kubeadm.go:1113] duration metric: took 4.811061462s to wait for elevateKubeSystemPrivileges
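The elevateKubeSystemPrivileges step above boils down to two kubectl operations: create the minikube-rbac cluster-admin binding for the kube-system default service account, then retry "get sa default" until the control plane has actually created it. A condensed sketch, assuming a plain kubectl on PATH rather than the sudo-wrapped versioned binary the log uses, with the retry interval and timeout chosen arbitrarily:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func kubectl(args ...string) error {
	base := []string{"--kubeconfig=/var/lib/minikube/kubeconfig"}
	return exec.Command("kubectl", append(base, args...)...).Run()
}

func main() {
	// Grant cluster-admin to kube-system:default, as the log's
	// "create clusterrolebinding minikube-rbac" step does.
	if err := kubectl("create", "clusterrolebinding", "minikube-rbac",
		"--clusterrole=cluster-admin", "--serviceaccount=kube-system:default"); err != nil {
		fmt.Println("clusterrolebinding:", err)
	}
	// Poll for the default service account roughly every 500ms, mirroring
	// the repeated "get sa default" runs above, with a short overall cap.
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		if err := kubectl("get", "sa", "default"); err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}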
	I1030 19:50:53.010818  446965 kubeadm.go:394] duration metric: took 5m1.251362756s to StartCluster
	I1030 19:50:53.010849  446965 settings.go:142] acquiring lock: {Name:mk3778f95b8c1775512fd8244c173f81fde2f8a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:50:53.010948  446965 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 19:50:53.012997  446965 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/kubeconfig: {Name:mkad9ac9beb9742fc1a420e6d4412db8b65962e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:50:53.013284  446965 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.235 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 19:50:53.013411  446965 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1030 19:50:53.013518  446965 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-042402"
	I1030 19:50:53.013539  446965 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-042402"
	I1030 19:50:53.013539  446965 config.go:182] Loaded profile config "embed-certs-042402": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	W1030 19:50:53.013550  446965 addons.go:243] addon storage-provisioner should already be in state true
	I1030 19:50:53.013600  446965 host.go:66] Checking if "embed-certs-042402" exists ...
	I1030 19:50:53.013546  446965 addons.go:69] Setting default-storageclass=true in profile "embed-certs-042402"
	I1030 19:50:53.013605  446965 addons.go:69] Setting metrics-server=true in profile "embed-certs-042402"
	I1030 19:50:53.013635  446965 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-042402"
	I1030 19:50:53.013642  446965 addons.go:234] Setting addon metrics-server=true in "embed-certs-042402"
	W1030 19:50:53.013650  446965 addons.go:243] addon metrics-server should already be in state true
	I1030 19:50:53.013675  446965 host.go:66] Checking if "embed-certs-042402" exists ...
	I1030 19:50:53.013947  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:50:53.014005  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:50:53.014010  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:50:53.014022  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:50:53.014058  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:50:53.014112  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:50:53.015033  446965 out.go:177] * Verifying Kubernetes components...
	I1030 19:50:53.016527  446965 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:50:53.030033  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33163
	I1030 19:50:53.030290  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40505
	I1030 19:50:53.030618  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:50:53.030733  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:50:53.031192  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:50:53.031209  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:50:53.031342  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:50:53.031356  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:50:53.031577  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:50:53.031773  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetState
	I1030 19:50:53.031801  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:50:53.032289  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43131
	I1030 19:50:53.032910  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:50:53.032953  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:50:53.033170  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:50:53.033684  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:50:53.033699  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:50:53.035082  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:50:53.035104  446965 addons.go:234] Setting addon default-storageclass=true in "embed-certs-042402"
	W1030 19:50:53.035124  446965 addons.go:243] addon default-storageclass should already be in state true
	I1030 19:50:53.035158  446965 host.go:66] Checking if "embed-certs-042402" exists ...
	I1030 19:50:53.035461  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:50:53.035492  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:50:53.036666  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:50:53.036697  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:50:53.054685  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41111
	I1030 19:50:53.055271  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:50:53.055621  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33133
	I1030 19:50:53.055762  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:50:53.055779  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:50:53.056073  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:50:53.056192  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:50:53.056410  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetState
	I1030 19:50:53.056665  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:50:53.056688  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:50:53.057099  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:50:53.057693  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:50:53.057741  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:50:53.058427  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:50:53.058756  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38161
	I1030 19:50:53.059684  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:50:53.060230  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:50:53.060253  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:50:53.060597  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:50:53.060806  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetState
	I1030 19:50:53.060880  446965 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:50:53.062367  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:50:53.062469  446965 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 19:50:53.062506  446965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1030 19:50:53.062526  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:50:53.063955  446965 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1030 19:50:53.065131  446965 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1030 19:50:53.065153  446965 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1030 19:50:53.065173  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:50:53.065987  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:50:53.066607  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:50:53.066640  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:50:53.066723  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:50:53.066956  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:50:53.067102  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:50:53.067254  446965 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa Username:docker}
	I1030 19:50:53.068475  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:50:53.068916  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:50:53.068939  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:50:53.069098  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:50:53.069288  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:50:53.069457  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:50:53.069625  446965 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa Username:docker}
	I1030 19:50:53.075920  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33291
	I1030 19:50:53.076341  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:50:53.076758  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:50:53.076779  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:50:53.077042  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:50:53.077238  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetState
	I1030 19:50:53.078809  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:50:53.079065  446965 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1030 19:50:53.079088  446965 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1030 19:50:53.079105  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:50:53.081873  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:50:53.082309  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:50:53.082339  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:50:53.082515  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:50:53.082705  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:50:53.082863  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:50:53.083061  446965 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa Username:docker}
	I1030 19:50:53.274313  446965 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:50:53.305281  446965 node_ready.go:35] waiting up to 6m0s for node "embed-certs-042402" to be "Ready" ...
	I1030 19:50:53.313184  446965 node_ready.go:49] node "embed-certs-042402" has status "Ready":"True"
	I1030 19:50:53.313217  446965 node_ready.go:38] duration metric: took 7.892097ms for node "embed-certs-042402" to be "Ready" ...
	I1030 19:50:53.313230  446965 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:50:53.321668  446965 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hvg4g" in "kube-system" namespace to be "Ready" ...
	I1030 19:50:53.406960  446965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 19:50:53.427287  446965 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1030 19:50:53.427324  446965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1030 19:50:53.475089  446965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1030 19:50:53.485983  446965 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1030 19:50:53.486013  446965 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1030 19:50:53.570871  446965 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1030 19:50:53.570904  446965 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1030 19:50:53.670898  446965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1030 19:50:54.545328  446965 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.138329529s)
	I1030 19:50:54.545384  446965 main.go:141] libmachine: Making call to close driver server
	I1030 19:50:54.545383  446965 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.070259573s)
	I1030 19:50:54.545399  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Close
	I1030 19:50:54.545426  446965 main.go:141] libmachine: Making call to close driver server
	I1030 19:50:54.545445  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Close
	I1030 19:50:54.545720  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Closing plugin on server side
	I1030 19:50:54.545732  446965 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:50:54.545748  446965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:50:54.545757  446965 main.go:141] libmachine: Making call to close driver server
	I1030 19:50:54.545761  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Closing plugin on server side
	I1030 19:50:54.545765  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Close
	I1030 19:50:54.545787  446965 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:50:54.545794  446965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:50:54.545802  446965 main.go:141] libmachine: Making call to close driver server
	I1030 19:50:54.545808  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Close
	I1030 19:50:54.546139  446965 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:50:54.546162  446965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:50:54.546465  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Closing plugin on server side
	I1030 19:50:54.546468  446965 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:50:54.546507  446965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:50:54.576380  446965 main.go:141] libmachine: Making call to close driver server
	I1030 19:50:54.576408  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Close
	I1030 19:50:54.576738  446965 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:50:54.576787  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Closing plugin on server side
	I1030 19:50:54.576804  446965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:50:54.703670  446965 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.032714873s)
	I1030 19:50:54.703724  446965 main.go:141] libmachine: Making call to close driver server
	I1030 19:50:54.703736  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Close
	I1030 19:50:54.704025  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Closing plugin on server side
	I1030 19:50:54.704059  446965 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:50:54.704076  446965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:50:54.704085  446965 main.go:141] libmachine: Making call to close driver server
	I1030 19:50:54.704104  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Close
	I1030 19:50:54.704350  446965 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:50:54.704362  446965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:50:54.704374  446965 addons.go:475] Verifying addon metrics-server=true in "embed-certs-042402"
	I1030 19:50:54.706330  446965 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1030 19:50:51.833654  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:54.333879  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:54.707723  446965 addons.go:510] duration metric: took 1.694322523s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
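Enabling the addons above is essentially staging YAML on the node and running one kubectl apply per addon group with the in-VM kubeconfig: storage-provisioner, the default StorageClass, and the four metrics-server manifests. A minimal sketch of the combined metrics-server apply; the manifest paths and kubectl binary path are taken from the log, the surrounding code is illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// The four metrics-server manifests staged by minikube, as listed in the
	// log; storage-provisioner.yaml and storageclass.yaml were applied the
	// same way in their own earlier invocations.
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("/var/lib/minikube/binaries/v1.31.2/kubectl", args...)
	// The log runs this under sudo with KUBECONFIG pointing at the VM's kubeconfig.
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}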
	I1030 19:50:55.328470  446965 pod_ready.go:103] pod "coredns-7c65d6cfc9-hvg4g" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:57.828224  446965 pod_ready.go:103] pod "coredns-7c65d6cfc9-hvg4g" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:56.832967  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:58.833284  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:59.828636  446965 pod_ready.go:103] pod "coredns-7c65d6cfc9-hvg4g" in "kube-system" namespace has status "Ready":"False"
	I1030 19:51:01.828151  446965 pod_ready.go:93] pod "coredns-7c65d6cfc9-hvg4g" in "kube-system" namespace has status "Ready":"True"
	I1030 19:51:01.828178  446965 pod_ready.go:82] duration metric: took 8.506481998s for pod "coredns-7c65d6cfc9-hvg4g" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:01.828187  446965 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-pzbpd" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:01.833094  446965 pod_ready.go:93] pod "coredns-7c65d6cfc9-pzbpd" in "kube-system" namespace has status "Ready":"True"
	I1030 19:51:01.833121  446965 pod_ready.go:82] duration metric: took 4.926401ms for pod "coredns-7c65d6cfc9-pzbpd" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:01.833133  446965 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:01.837391  446965 pod_ready.go:93] pod "etcd-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:51:01.837410  446965 pod_ready.go:82] duration metric: took 4.27047ms for pod "etcd-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:01.837419  446965 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:02.344200  446965 pod_ready.go:93] pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:51:02.344224  446965 pod_ready.go:82] duration metric: took 506.798667ms for pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:02.344233  446965 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:02.349020  446965 pod_ready.go:93] pod "kube-controller-manager-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:51:02.349042  446965 pod_ready.go:82] duration metric: took 4.801739ms for pod "kube-controller-manager-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:02.349055  446965 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-m9zwz" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:02.626109  446965 pod_ready.go:93] pod "kube-proxy-m9zwz" in "kube-system" namespace has status "Ready":"True"
	I1030 19:51:02.626137  446965 pod_ready.go:82] duration metric: took 277.074567ms for pod "kube-proxy-m9zwz" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:02.626146  446965 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:03.027456  446965 pod_ready.go:93] pod "kube-scheduler-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:51:03.027482  446965 pod_ready.go:82] duration metric: took 401.329277ms for pod "kube-scheduler-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:03.027493  446965 pod_ready.go:39] duration metric: took 9.714247169s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:51:03.027513  446965 api_server.go:52] waiting for apiserver process to appear ...
	I1030 19:51:03.027579  446965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:51:03.043403  446965 api_server.go:72] duration metric: took 10.030078869s to wait for apiserver process to appear ...
	I1030 19:51:03.043431  446965 api_server.go:88] waiting for apiserver healthz status ...
	I1030 19:51:03.043456  446965 api_server.go:253] Checking apiserver healthz at https://192.168.61.235:8443/healthz ...
	I1030 19:51:03.048722  446965 api_server.go:279] https://192.168.61.235:8443/healthz returned 200:
	ok
	I1030 19:51:03.049572  446965 api_server.go:141] control plane version: v1.31.2
	I1030 19:51:03.049595  446965 api_server.go:131] duration metric: took 6.156928ms to wait for apiserver health ...
	I1030 19:51:03.049603  446965 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 19:51:03.233170  446965 system_pods.go:59] 9 kube-system pods found
	I1030 19:51:03.233205  446965 system_pods.go:61] "coredns-7c65d6cfc9-hvg4g" [f9e7e143-3e12-4c1a-9fb0-6f58a37f8a55] Running
	I1030 19:51:03.233212  446965 system_pods.go:61] "coredns-7c65d6cfc9-pzbpd" [8f486ff4-c665-42ec-9b98-15ea94e0ded8] Running
	I1030 19:51:03.233217  446965 system_pods.go:61] "etcd-embed-certs-042402" [5149c262-698b-46be-9b15-c7b5e93fd3cd] Running
	I1030 19:51:03.233222  446965 system_pods.go:61] "kube-apiserver-embed-certs-042402" [9a6c3f9a-4b89-4e8d-abb2-445399eea3eb] Running
	I1030 19:51:03.233227  446965 system_pods.go:61] "kube-controller-manager-embed-certs-042402" [cce89309-3c5f-4d17-8426-fdb8a02acdb0] Running
	I1030 19:51:03.233231  446965 system_pods.go:61] "kube-proxy-m9zwz" [e7b6fb8b-2287-47c0-b9c8-a3b1c3020894] Running
	I1030 19:51:03.233236  446965 system_pods.go:61] "kube-scheduler-embed-certs-042402" [e408e85c-ac6c-4afb-8391-935b7c579b4f] Running
	I1030 19:51:03.233247  446965 system_pods.go:61] "metrics-server-6867b74b74-6hrq4" [a5bb1778-0a28-4649-a2ac-a5f0e1b810de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:51:03.233255  446965 system_pods.go:61] "storage-provisioner" [729733b2-e703-4e9b-9d05-a2f0fb632149] Running
	I1030 19:51:03.233272  446965 system_pods.go:74] duration metric: took 183.660307ms to wait for pod list to return data ...
	I1030 19:51:03.233287  446965 default_sa.go:34] waiting for default service account to be created ...
	I1030 19:51:03.427520  446965 default_sa.go:45] found service account: "default"
	I1030 19:51:03.427550  446965 default_sa.go:55] duration metric: took 194.254547ms for default service account to be created ...
	I1030 19:51:03.427562  446965 system_pods.go:116] waiting for k8s-apps to be running ...
	I1030 19:51:03.629316  446965 system_pods.go:86] 9 kube-system pods found
	I1030 19:51:03.629351  446965 system_pods.go:89] "coredns-7c65d6cfc9-hvg4g" [f9e7e143-3e12-4c1a-9fb0-6f58a37f8a55] Running
	I1030 19:51:03.629364  446965 system_pods.go:89] "coredns-7c65d6cfc9-pzbpd" [8f486ff4-c665-42ec-9b98-15ea94e0ded8] Running
	I1030 19:51:03.629370  446965 system_pods.go:89] "etcd-embed-certs-042402" [5149c262-698b-46be-9b15-c7b5e93fd3cd] Running
	I1030 19:51:03.629377  446965 system_pods.go:89] "kube-apiserver-embed-certs-042402" [9a6c3f9a-4b89-4e8d-abb2-445399eea3eb] Running
	I1030 19:51:03.629381  446965 system_pods.go:89] "kube-controller-manager-embed-certs-042402" [cce89309-3c5f-4d17-8426-fdb8a02acdb0] Running
	I1030 19:51:03.629386  446965 system_pods.go:89] "kube-proxy-m9zwz" [e7b6fb8b-2287-47c0-b9c8-a3b1c3020894] Running
	I1030 19:51:03.629391  446965 system_pods.go:89] "kube-scheduler-embed-certs-042402" [e408e85c-ac6c-4afb-8391-935b7c579b4f] Running
	I1030 19:51:03.629399  446965 system_pods.go:89] "metrics-server-6867b74b74-6hrq4" [a5bb1778-0a28-4649-a2ac-a5f0e1b810de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:51:03.629405  446965 system_pods.go:89] "storage-provisioner" [729733b2-e703-4e9b-9d05-a2f0fb632149] Running
	I1030 19:51:03.629418  446965 system_pods.go:126] duration metric: took 201.847233ms to wait for k8s-apps to be running ...
	I1030 19:51:03.629432  446965 system_svc.go:44] waiting for kubelet service to be running ....
	I1030 19:51:03.629486  446965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 19:51:03.649120  446965 system_svc.go:56] duration metric: took 19.675022ms WaitForService to wait for kubelet
	I1030 19:51:03.649166  446965 kubeadm.go:582] duration metric: took 10.635844977s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 19:51:03.649192  446965 node_conditions.go:102] verifying NodePressure condition ...
	I1030 19:51:03.826763  446965 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 19:51:03.826790  446965 node_conditions.go:123] node cpu capacity is 2
	I1030 19:51:03.826803  446965 node_conditions.go:105] duration metric: took 177.604616ms to run NodePressure ...
	I1030 19:51:03.826819  446965 start.go:241] waiting for startup goroutines ...
	I1030 19:51:03.826827  446965 start.go:246] waiting for cluster config update ...
	I1030 19:51:03.826841  446965 start.go:255] writing updated cluster config ...
	I1030 19:51:03.827126  446965 ssh_runner.go:195] Run: rm -f paused
	I1030 19:51:03.877974  446965 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1030 19:51:03.880121  446965 out.go:177] * Done! kubectl is now configured to use "embed-certs-042402" cluster and "default" namespace by default
	I1030 19:51:00.833673  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:51:03.333042  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:51:05.333431  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:51:07.833229  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:51:09.833772  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:51:10.833131  446736 pod_ready.go:82] duration metric: took 4m0.006526983s for pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace to be "Ready" ...
	E1030 19:51:10.833166  446736 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1030 19:51:10.833178  446736 pod_ready.go:39] duration metric: took 4m7.416690025s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:51:10.833200  446736 api_server.go:52] waiting for apiserver process to appear ...
	I1030 19:51:10.833239  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:51:10.833300  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:51:10.884016  446736 cri.go:89] found id: "990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4"
	I1030 19:51:10.884046  446736 cri.go:89] found id: ""
	I1030 19:51:10.884055  446736 logs.go:282] 1 containers: [990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4]
	I1030 19:51:10.884108  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:10.888789  446736 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:51:10.888857  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:51:10.931994  446736 cri.go:89] found id: "ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67"
	I1030 19:51:10.932037  446736 cri.go:89] found id: ""
	I1030 19:51:10.932047  446736 logs.go:282] 1 containers: [ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67]
	I1030 19:51:10.932097  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:10.937113  446736 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:51:10.937181  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:51:10.977951  446736 cri.go:89] found id: "1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd"
	I1030 19:51:10.977982  446736 cri.go:89] found id: ""
	I1030 19:51:10.977993  446736 logs.go:282] 1 containers: [1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd]
	I1030 19:51:10.978050  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:10.982791  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:51:10.982863  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:51:11.021741  446736 cri.go:89] found id: "2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889"
	I1030 19:51:11.021770  446736 cri.go:89] found id: ""
	I1030 19:51:11.021780  446736 logs.go:282] 1 containers: [2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889]
	I1030 19:51:11.021837  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:11.026590  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:51:11.026653  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:51:11.068839  446736 cri.go:89] found id: "0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9"
	I1030 19:51:11.068873  446736 cri.go:89] found id: ""
	I1030 19:51:11.068885  446736 logs.go:282] 1 containers: [0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9]
	I1030 19:51:11.068946  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:11.073103  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:51:11.073171  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:51:11.108404  446736 cri.go:89] found id: "cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c"
	I1030 19:51:11.108432  446736 cri.go:89] found id: ""
	I1030 19:51:11.108443  446736 logs.go:282] 1 containers: [cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c]
	I1030 19:51:11.108506  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:11.112903  446736 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:51:11.112974  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:51:11.153767  446736 cri.go:89] found id: ""
	I1030 19:51:11.153800  446736 logs.go:282] 0 containers: []
	W1030 19:51:11.153812  446736 logs.go:284] No container was found matching "kindnet"
	I1030 19:51:11.153821  446736 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1030 19:51:11.153892  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1030 19:51:11.194649  446736 cri.go:89] found id: "822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4"
	I1030 19:51:11.194681  446736 cri.go:89] found id: "de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef"
	I1030 19:51:11.194687  446736 cri.go:89] found id: ""
	I1030 19:51:11.194697  446736 logs.go:282] 2 containers: [822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4 de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef]
	I1030 19:51:11.194770  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:11.199037  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:11.202957  446736 logs.go:123] Gathering logs for etcd [ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67] ...
	I1030 19:51:11.202984  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67"
	I1030 19:51:11.246187  446736 logs.go:123] Gathering logs for kube-proxy [0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9] ...
	I1030 19:51:11.246220  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9"
	I1030 19:51:11.286608  446736 logs.go:123] Gathering logs for kube-controller-manager [cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c] ...
	I1030 19:51:11.286643  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c"
	I1030 19:51:11.339119  446736 logs.go:123] Gathering logs for storage-provisioner [822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4] ...
	I1030 19:51:11.339157  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4"
	I1030 19:51:11.376624  446736 logs.go:123] Gathering logs for storage-provisioner [de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef] ...
	I1030 19:51:11.376653  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef"
	I1030 19:51:11.411401  446736 logs.go:123] Gathering logs for kubelet ...
	I1030 19:51:11.411431  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:51:11.481668  446736 logs.go:123] Gathering logs for dmesg ...
	I1030 19:51:11.481710  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:51:11.497767  446736 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:51:11.497799  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:51:11.612001  446736 logs.go:123] Gathering logs for kube-apiserver [990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4] ...
	I1030 19:51:11.612034  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4"
	I1030 19:51:11.656553  446736 logs.go:123] Gathering logs for coredns [1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd] ...
	I1030 19:51:11.656589  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd"
	I1030 19:51:11.695387  446736 logs.go:123] Gathering logs for kube-scheduler [2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889] ...
	I1030 19:51:11.695428  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889"
	I1030 19:51:11.732386  446736 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:51:11.732419  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:51:12.217007  446736 logs.go:123] Gathering logs for container status ...
	I1030 19:51:12.217056  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:51:14.769155  446736 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:51:14.787096  446736 api_server.go:72] duration metric: took 4m17.097569041s to wait for apiserver process to appear ...
	I1030 19:51:14.787128  446736 api_server.go:88] waiting for apiserver healthz status ...
	I1030 19:51:14.787176  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:51:14.787235  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:51:14.823506  446736 cri.go:89] found id: "990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4"
	I1030 19:51:14.823533  446736 cri.go:89] found id: ""
	I1030 19:51:14.823541  446736 logs.go:282] 1 containers: [990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4]
	I1030 19:51:14.823595  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:14.828125  446736 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:51:14.828214  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:51:14.867890  446736 cri.go:89] found id: "ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67"
	I1030 19:51:14.867914  446736 cri.go:89] found id: ""
	I1030 19:51:14.867922  446736 logs.go:282] 1 containers: [ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67]
	I1030 19:51:14.867970  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:14.873213  446736 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:51:14.873283  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:51:14.913068  446736 cri.go:89] found id: "1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd"
	I1030 19:51:14.913103  446736 cri.go:89] found id: ""
	I1030 19:51:14.913114  446736 logs.go:282] 1 containers: [1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd]
	I1030 19:51:14.913179  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:14.918380  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:51:14.918459  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:51:14.956150  446736 cri.go:89] found id: "2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889"
	I1030 19:51:14.956177  446736 cri.go:89] found id: ""
	I1030 19:51:14.956187  446736 logs.go:282] 1 containers: [2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889]
	I1030 19:51:14.956294  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:14.960781  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:51:14.960836  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:51:15.001804  446736 cri.go:89] found id: "0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9"
	I1030 19:51:15.001833  446736 cri.go:89] found id: ""
	I1030 19:51:15.001844  446736 logs.go:282] 1 containers: [0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9]
	I1030 19:51:15.001893  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:15.006341  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:51:15.006401  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:51:15.045202  446736 cri.go:89] found id: "cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c"
	I1030 19:51:15.045236  446736 cri.go:89] found id: ""
	I1030 19:51:15.045247  446736 logs.go:282] 1 containers: [cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c]
	I1030 19:51:15.045326  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:15.051967  446736 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:51:15.052031  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:51:15.091569  446736 cri.go:89] found id: ""
	I1030 19:51:15.091596  446736 logs.go:282] 0 containers: []
	W1030 19:51:15.091604  446736 logs.go:284] No container was found matching "kindnet"
	I1030 19:51:15.091611  446736 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1030 19:51:15.091668  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1030 19:51:15.135521  446736 cri.go:89] found id: "822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4"
	I1030 19:51:15.135551  446736 cri.go:89] found id: "de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef"
	I1030 19:51:15.135557  446736 cri.go:89] found id: ""
	I1030 19:51:15.135567  446736 logs.go:282] 2 containers: [822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4 de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef]
	I1030 19:51:15.135633  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:15.140215  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:15.145490  446736 logs.go:123] Gathering logs for etcd [ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67] ...
	I1030 19:51:15.145514  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67"
	I1030 19:51:15.205939  446736 logs.go:123] Gathering logs for coredns [1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd] ...
	I1030 19:51:15.205972  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd"
	I1030 19:51:15.240157  446736 logs.go:123] Gathering logs for kube-proxy [0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9] ...
	I1030 19:51:15.240194  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9"
	I1030 19:51:15.277168  446736 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:51:15.277200  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:51:15.708451  446736 logs.go:123] Gathering logs for container status ...
	I1030 19:51:15.708499  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:51:15.750544  446736 logs.go:123] Gathering logs for kubelet ...
	I1030 19:51:15.750577  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:51:15.820071  446736 logs.go:123] Gathering logs for kube-apiserver [990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4] ...
	I1030 19:51:15.820113  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4"
	I1030 19:51:15.870259  446736 logs.go:123] Gathering logs for kube-scheduler [2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889] ...
	I1030 19:51:15.870293  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889"
	I1030 19:51:15.919968  446736 logs.go:123] Gathering logs for kube-controller-manager [cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c] ...
	I1030 19:51:15.919998  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c"
	I1030 19:51:15.976948  446736 logs.go:123] Gathering logs for storage-provisioner [822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4] ...
	I1030 19:51:15.976992  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4"
	I1030 19:51:16.014451  446736 logs.go:123] Gathering logs for storage-provisioner [de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef] ...
	I1030 19:51:16.014498  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef"
	I1030 19:51:16.047766  446736 logs.go:123] Gathering logs for dmesg ...
	I1030 19:51:16.047806  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:51:16.070539  446736 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:51:16.070567  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:51:18.677834  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:51:18.682862  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 200:
	ok
	I1030 19:51:18.684023  446736 api_server.go:141] control plane version: v1.31.2
	I1030 19:51:18.684046  446736 api_server.go:131] duration metric: took 3.896911154s to wait for apiserver health ...
	I1030 19:51:18.684055  446736 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 19:51:18.684083  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:51:18.684130  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:51:18.724815  446736 cri.go:89] found id: "990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4"
	I1030 19:51:18.724848  446736 cri.go:89] found id: ""
	I1030 19:51:18.724860  446736 logs.go:282] 1 containers: [990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4]
	I1030 19:51:18.724928  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:18.729332  446736 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:51:18.729391  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:51:18.767614  446736 cri.go:89] found id: "ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67"
	I1030 19:51:18.767642  446736 cri.go:89] found id: ""
	I1030 19:51:18.767651  446736 logs.go:282] 1 containers: [ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67]
	I1030 19:51:18.767705  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:18.772420  446736 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:51:18.772525  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:51:18.811459  446736 cri.go:89] found id: "1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd"
	I1030 19:51:18.811489  446736 cri.go:89] found id: ""
	I1030 19:51:18.811501  446736 logs.go:282] 1 containers: [1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd]
	I1030 19:51:18.811563  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:18.816844  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:51:18.816906  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:51:18.853273  446736 cri.go:89] found id: "2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889"
	I1030 19:51:18.853299  446736 cri.go:89] found id: ""
	I1030 19:51:18.853308  446736 logs.go:282] 1 containers: [2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889]
	I1030 19:51:18.853362  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:18.857867  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:51:18.857946  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:51:18.907021  446736 cri.go:89] found id: "0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9"
	I1030 19:51:18.907052  446736 cri.go:89] found id: ""
	I1030 19:51:18.907063  446736 logs.go:282] 1 containers: [0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9]
	I1030 19:51:18.907126  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:18.913432  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:51:18.913506  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:51:18.978047  446736 cri.go:89] found id: "cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c"
	I1030 19:51:18.978072  446736 cri.go:89] found id: ""
	I1030 19:51:18.978083  446736 logs.go:282] 1 containers: [cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c]
	I1030 19:51:18.978150  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:18.983158  446736 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:51:18.983241  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:51:19.018992  446736 cri.go:89] found id: ""
	I1030 19:51:19.019018  446736 logs.go:282] 0 containers: []
	W1030 19:51:19.019026  446736 logs.go:284] No container was found matching "kindnet"
	I1030 19:51:19.019035  446736 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1030 19:51:19.019094  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1030 19:51:19.053821  446736 cri.go:89] found id: "822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4"
	I1030 19:51:19.053850  446736 cri.go:89] found id: "de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef"
	I1030 19:51:19.053855  446736 cri.go:89] found id: ""
	I1030 19:51:19.053862  446736 logs.go:282] 2 containers: [822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4 de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef]
	I1030 19:51:19.053922  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:19.063575  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:19.069254  446736 logs.go:123] Gathering logs for kubelet ...
	I1030 19:51:19.069283  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:51:19.139641  446736 logs.go:123] Gathering logs for kube-apiserver [990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4] ...
	I1030 19:51:19.139700  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4"
	I1030 19:51:19.198020  446736 logs.go:123] Gathering logs for kube-scheduler [2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889] ...
	I1030 19:51:19.198059  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889"
	I1030 19:51:19.239685  446736 logs.go:123] Gathering logs for kube-proxy [0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9] ...
	I1030 19:51:19.239727  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9"
	I1030 19:51:19.281510  446736 logs.go:123] Gathering logs for storage-provisioner [de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef] ...
	I1030 19:51:19.281545  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef"
	I1030 19:51:19.317842  446736 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:51:19.317872  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:51:19.659645  446736 logs.go:123] Gathering logs for dmesg ...
	I1030 19:51:19.659697  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:51:19.678087  446736 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:51:19.678121  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:51:19.778504  446736 logs.go:123] Gathering logs for etcd [ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67] ...
	I1030 19:51:19.778540  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67"
	I1030 19:51:19.826520  446736 logs.go:123] Gathering logs for coredns [1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd] ...
	I1030 19:51:19.826552  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd"
	I1030 19:51:19.863959  446736 logs.go:123] Gathering logs for kube-controller-manager [cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c] ...
	I1030 19:51:19.864011  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c"
	I1030 19:51:19.915777  446736 logs.go:123] Gathering logs for storage-provisioner [822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4] ...
	I1030 19:51:19.915814  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4"
	I1030 19:51:19.953036  446736 logs.go:123] Gathering logs for container status ...
	I1030 19:51:19.953069  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:51:22.502129  446736 system_pods.go:59] 8 kube-system pods found
	I1030 19:51:22.502162  446736 system_pods.go:61] "coredns-7c65d6cfc9-6cdl4" [95a0a01a-b0ea-4e0c-ac44-b7f7a486e4e0] Running
	I1030 19:51:22.502167  446736 system_pods.go:61] "etcd-no-preload-960512" [481cc4bc-fe2e-48fd-a486-574daf879096] Running
	I1030 19:51:22.502172  446736 system_pods.go:61] "kube-apiserver-no-preload-960512" [3662715f-dd2b-4522-aed3-3857e1104b8c] Running
	I1030 19:51:22.502175  446736 system_pods.go:61] "kube-controller-manager-no-preload-960512" [cb6045a8-1703-4693-90bf-81c7c990cb38] Running
	I1030 19:51:22.502179  446736 system_pods.go:61] "kube-proxy-fxqqc" [58db3fab-21e3-41b7-99f9-46ba3081db97] Running
	I1030 19:51:22.502182  446736 system_pods.go:61] "kube-scheduler-no-preload-960512" [6e293c25-ba96-4213-b107-236a1b828918] Running
	I1030 19:51:22.502188  446736 system_pods.go:61] "metrics-server-6867b74b74-72bb5" [7734d879-b974-42fd-9610-7e81ee6cbc13] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:51:22.502193  446736 system_pods.go:61] "storage-provisioner" [d4637a77-26ab-4013-a705-08317c00dd3b] Running
	I1030 19:51:22.502201  446736 system_pods.go:74] duration metric: took 3.818141259s to wait for pod list to return data ...
	I1030 19:51:22.502209  446736 default_sa.go:34] waiting for default service account to be created ...
	I1030 19:51:22.504541  446736 default_sa.go:45] found service account: "default"
	I1030 19:51:22.504562  446736 default_sa.go:55] duration metric: took 2.346763ms for default service account to be created ...
	I1030 19:51:22.504570  446736 system_pods.go:116] waiting for k8s-apps to be running ...
	I1030 19:51:22.509016  446736 system_pods.go:86] 8 kube-system pods found
	I1030 19:51:22.509039  446736 system_pods.go:89] "coredns-7c65d6cfc9-6cdl4" [95a0a01a-b0ea-4e0c-ac44-b7f7a486e4e0] Running
	I1030 19:51:22.509044  446736 system_pods.go:89] "etcd-no-preload-960512" [481cc4bc-fe2e-48fd-a486-574daf879096] Running
	I1030 19:51:22.509048  446736 system_pods.go:89] "kube-apiserver-no-preload-960512" [3662715f-dd2b-4522-aed3-3857e1104b8c] Running
	I1030 19:51:22.509052  446736 system_pods.go:89] "kube-controller-manager-no-preload-960512" [cb6045a8-1703-4693-90bf-81c7c990cb38] Running
	I1030 19:51:22.509055  446736 system_pods.go:89] "kube-proxy-fxqqc" [58db3fab-21e3-41b7-99f9-46ba3081db97] Running
	I1030 19:51:22.509058  446736 system_pods.go:89] "kube-scheduler-no-preload-960512" [6e293c25-ba96-4213-b107-236a1b828918] Running
	I1030 19:51:22.509101  446736 system_pods.go:89] "metrics-server-6867b74b74-72bb5" [7734d879-b974-42fd-9610-7e81ee6cbc13] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:51:22.509112  446736 system_pods.go:89] "storage-provisioner" [d4637a77-26ab-4013-a705-08317c00dd3b] Running
	I1030 19:51:22.509119  446736 system_pods.go:126] duration metric: took 4.544102ms to wait for k8s-apps to be running ...
	I1030 19:51:22.509125  446736 system_svc.go:44] waiting for kubelet service to be running ....
	I1030 19:51:22.509172  446736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 19:51:22.524883  446736 system_svc.go:56] duration metric: took 15.747977ms WaitForService to wait for kubelet
	I1030 19:51:22.524906  446736 kubeadm.go:582] duration metric: took 4m24.835384605s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 19:51:22.524929  446736 node_conditions.go:102] verifying NodePressure condition ...
	I1030 19:51:22.528315  446736 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 19:51:22.528334  446736 node_conditions.go:123] node cpu capacity is 2
	I1030 19:51:22.528345  446736 node_conditions.go:105] duration metric: took 3.411421ms to run NodePressure ...
	I1030 19:51:22.528357  446736 start.go:241] waiting for startup goroutines ...
	I1030 19:51:22.528364  446736 start.go:246] waiting for cluster config update ...
	I1030 19:51:22.528374  446736 start.go:255] writing updated cluster config ...
	I1030 19:51:22.528621  446736 ssh_runner.go:195] Run: rm -f paused
	I1030 19:51:22.577143  446736 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1030 19:51:22.580061  446736 out.go:177] * Done! kubectl is now configured to use "no-preload-960512" cluster and "default" namespace by default
	I1030 19:52:15.582907  447486 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1030 19:52:15.583009  447486 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1030 19:52:15.584345  447486 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1030 19:52:15.584419  447486 kubeadm.go:310] [preflight] Running pre-flight checks
	I1030 19:52:15.584522  447486 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1030 19:52:15.584659  447486 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1030 19:52:15.584763  447486 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1030 19:52:15.584827  447486 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1030 19:52:15.586931  447486 out.go:235]   - Generating certificates and keys ...
	I1030 19:52:15.587016  447486 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1030 19:52:15.587074  447486 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1030 19:52:15.587145  447486 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1030 19:52:15.587198  447486 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1030 19:52:15.587271  447486 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1030 19:52:15.587339  447486 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1030 19:52:15.587402  447486 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1030 19:52:15.587455  447486 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1030 19:52:15.587517  447486 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1030 19:52:15.587577  447486 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1030 19:52:15.587608  447486 kubeadm.go:310] [certs] Using the existing "sa" key
	I1030 19:52:15.587682  447486 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1030 19:52:15.587759  447486 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1030 19:52:15.587846  447486 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1030 19:52:15.587924  447486 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1030 19:52:15.587988  447486 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1030 19:52:15.588076  447486 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1030 19:52:15.588148  447486 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1030 19:52:15.588180  447486 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1030 19:52:15.588267  447486 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1030 19:52:15.589722  447486 out.go:235]   - Booting up control plane ...
	I1030 19:52:15.589834  447486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1030 19:52:15.589932  447486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1030 19:52:15.590014  447486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1030 19:52:15.590128  447486 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1030 19:52:15.590285  447486 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1030 19:52:15.590336  447486 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1030 19:52:15.590388  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:52:15.590560  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:52:15.590642  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:52:15.590842  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:52:15.590946  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:52:15.591155  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:52:15.591253  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:52:15.591513  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:52:15.591609  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:52:15.591841  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:52:15.591855  447486 kubeadm.go:310] 
	I1030 19:52:15.591900  447486 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1030 19:52:15.591956  447486 kubeadm.go:310] 		timed out waiting for the condition
	I1030 19:52:15.591966  447486 kubeadm.go:310] 
	I1030 19:52:15.592008  447486 kubeadm.go:310] 	This error is likely caused by:
	I1030 19:52:15.592051  447486 kubeadm.go:310] 		- The kubelet is not running
	I1030 19:52:15.592192  447486 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1030 19:52:15.592204  447486 kubeadm.go:310] 
	I1030 19:52:15.592318  447486 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1030 19:52:15.592360  447486 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1030 19:52:15.592391  447486 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1030 19:52:15.592397  447486 kubeadm.go:310] 
	I1030 19:52:15.592511  447486 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1030 19:52:15.592592  447486 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1030 19:52:15.592600  447486 kubeadm.go:310] 
	I1030 19:52:15.592733  447486 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1030 19:52:15.592850  447486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1030 19:52:15.592959  447486 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1030 19:52:15.593059  447486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1030 19:52:15.593138  447486 kubeadm.go:310] 
	W1030 19:52:15.593236  447486 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1030 19:52:15.593289  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1030 19:52:16.049810  447486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 19:52:16.065820  447486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:52:16.076166  447486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:52:16.076192  447486 kubeadm.go:157] found existing configuration files:
	
	I1030 19:52:16.076241  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 19:52:16.085309  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:52:16.085380  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:52:16.094868  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 19:52:16.104343  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:52:16.104395  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:52:16.113939  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 19:52:16.122836  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:52:16.122885  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:52:16.132083  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 19:52:16.141441  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:52:16.141487  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1030 19:52:16.150710  447486 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1030 19:52:16.222070  447486 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1030 19:52:16.222183  447486 kubeadm.go:310] [preflight] Running pre-flight checks
	I1030 19:52:16.366061  447486 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1030 19:52:16.366194  447486 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1030 19:52:16.366352  447486 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1030 19:52:16.541086  447486 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1030 19:52:16.543200  447486 out.go:235]   - Generating certificates and keys ...
	I1030 19:52:16.543303  447486 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1030 19:52:16.543398  447486 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1030 19:52:16.543523  447486 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1030 19:52:16.543625  447486 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1030 19:52:16.543749  447486 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1030 19:52:16.543848  447486 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1030 19:52:16.543942  447486 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1030 19:52:16.544020  447486 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1030 19:52:16.544096  447486 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1030 19:52:16.544193  447486 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1030 19:52:16.544252  447486 kubeadm.go:310] [certs] Using the existing "sa" key
	I1030 19:52:16.544343  447486 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1030 19:52:16.637454  447486 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1030 19:52:16.829430  447486 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1030 19:52:16.985259  447486 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1030 19:52:17.072312  447486 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1030 19:52:17.092511  447486 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1030 19:52:17.093595  447486 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1030 19:52:17.093654  447486 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1030 19:52:17.228039  447486 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1030 19:52:17.229647  447486 out.go:235]   - Booting up control plane ...
	I1030 19:52:17.229766  447486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1030 19:52:17.237333  447486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1030 19:52:17.239644  447486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1030 19:52:17.239774  447486 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1030 19:52:17.241037  447486 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1030 19:52:57.243167  447486 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1030 19:52:57.243769  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:52:57.244072  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:53:02.244240  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:53:02.244563  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:53:12.244991  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:53:12.245293  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:53:32.246428  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:53:32.246697  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:54:12.247834  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:54:12.248150  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:54:12.248173  447486 kubeadm.go:310] 
	I1030 19:54:12.248226  447486 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1030 19:54:12.248308  447486 kubeadm.go:310] 		timed out waiting for the condition
	I1030 19:54:12.248336  447486 kubeadm.go:310] 
	I1030 19:54:12.248386  447486 kubeadm.go:310] 	This error is likely caused by:
	I1030 19:54:12.248449  447486 kubeadm.go:310] 		- The kubelet is not running
	I1030 19:54:12.248598  447486 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1030 19:54:12.248609  447486 kubeadm.go:310] 
	I1030 19:54:12.248747  447486 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1030 19:54:12.248811  447486 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1030 19:54:12.248867  447486 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1030 19:54:12.248876  447486 kubeadm.go:310] 
	I1030 19:54:12.249013  447486 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1030 19:54:12.249111  447486 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1030 19:54:12.249129  447486 kubeadm.go:310] 
	I1030 19:54:12.249280  447486 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1030 19:54:12.249447  447486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1030 19:54:12.249564  447486 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1030 19:54:12.249662  447486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1030 19:54:12.249708  447486 kubeadm.go:310] 
	I1030 19:54:12.249878  447486 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1030 19:54:12.250015  447486 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1030 19:54:12.250208  447486 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
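The repeated [kubelet-check] failures above come down to a simple HTTP probe: kubeadm polls the kubelet's healthz endpoint on localhost:10248 and keeps getting "connection refused" because the kubelet never starts. Below is a minimal Go sketch of such a probe, standard library only; the endpoint is taken from the log, while the polling interval, timeout, and function name are illustrative rather than kubeadm's exact implementation.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForKubelet polls the kubelet healthz endpoint until it answers 200 OK
// or the timeout expires. Illustrative values, not kubeadm's.
func waitForKubelet(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // kubelet is up and serving /healthz
			}
		} else {
			// Mirrors the log: dial tcp 127.0.0.1:10248: connect: connection refused.
			fmt.Printf("kubelet not healthy yet: %v\n", err)
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForKubelet("http://localhost:10248/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}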
	I1030 19:54:12.250221  447486 kubeadm.go:394] duration metric: took 7m57.874179721s to StartCluster
	I1030 19:54:12.250311  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:54:12.250399  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:54:12.292692  447486 cri.go:89] found id: ""
	I1030 19:54:12.292749  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.292760  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:54:12.292770  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:54:12.292840  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:54:12.329792  447486 cri.go:89] found id: ""
	I1030 19:54:12.329825  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.329835  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:54:12.329843  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:54:12.329905  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:54:12.364661  447486 cri.go:89] found id: ""
	I1030 19:54:12.364693  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.364702  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:54:12.364709  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:54:12.364764  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:54:12.400842  447486 cri.go:89] found id: ""
	I1030 19:54:12.400870  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.400878  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:54:12.400885  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:54:12.400943  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:54:12.440135  447486 cri.go:89] found id: ""
	I1030 19:54:12.440164  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.440172  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:54:12.440178  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:54:12.440228  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:54:12.476365  447486 cri.go:89] found id: ""
	I1030 19:54:12.476403  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.476416  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:54:12.476425  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:54:12.476503  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:54:12.519669  447486 cri.go:89] found id: ""
	I1030 19:54:12.519702  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.519715  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:54:12.519724  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:54:12.519791  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:54:12.554180  447486 cri.go:89] found id: ""
	I1030 19:54:12.554218  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.554230  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:54:12.554244  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:54:12.554261  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:54:12.669617  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:54:12.669660  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:54:12.708361  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:54:12.708392  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:54:12.763103  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:54:12.763145  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:54:12.778676  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:54:12.778712  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:54:12.865694  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1030 19:54:12.865732  447486 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1030 19:54:12.865797  447486 out.go:270] * 
	W1030 19:54:12.865908  447486 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1030 19:54:12.865929  447486 out.go:270] * 
	W1030 19:54:12.867124  447486 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 19:54:12.871111  447486 out.go:201] 
	W1030 19:54:12.872534  447486 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1030 19:54:12.872591  447486 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1030 19:54:12.872616  447486 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1030 19:54:12.874145  447486 out.go:201] 
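The suggestion above amounts to two manual steps: inspect the kubelet with systemctl/journalctl (and the containers with crictl), then retry minikube start with --extra-config=kubelet.cgroup-driver=systemd. Below is a hedged Go sketch of a wrapper that shells out to those commands, in the same spirit as the harness's own command runner; the commands and flag are copied from the log, but the wrapper itself is illustrative only and deliberately omits the -p profile flag rather than guess the profile name.

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and prints its combined output; errors are printed
// rather than fatal, since this is a diagnostic sweep.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s(err=%v)\n", name, args, out, err)
}

func main() {
	// Inspect why the kubelet never came up, as the failure message suggests.
	run("systemctl", "status", "kubelet")
	run("journalctl", "-xeu", "kubelet")
	run("crictl", "--runtime-endpoint", "/var/run/crio/crio.sock", "ps", "-a")

	// Retry with the kubelet cgroup driver forced to systemd, per the
	// suggestion in the log. The profile flag (-p <name>) is intentionally
	// omitted because the profile name is not assumed here.
	run("minikube", "start", "--extra-config=kubelet.cgroup-driver=systemd")
}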
	
	
	==> CRI-O <==
	Oct 30 20:00:05 embed-certs-042402 crio[720]: time="2024-10-30 20:00:05.924495397Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318405924473261,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5a0575aa-d08d-49a5-bb85-5889a5bf0f1c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:00:05 embed-certs-042402 crio[720]: time="2024-10-30 20:00:05.925006246Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=db1dde11-c972-4927-a27a-38310eb2af07 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:00:05 embed-certs-042402 crio[720]: time="2024-10-30 20:00:05.925064180Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=db1dde11-c972-4927-a27a-38310eb2af07 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:00:05 embed-certs-042402 crio[720]: time="2024-10-30 20:00:05.925392712Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0e6cc7d4df0e8a817cf435a8e959a096091802af481b013e11aafaf9d1b46af8,PodSandboxId:37da61aca6f683480fb99bf53a21816a25a25564141953ce658162067fb347d7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730317855012939746,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 729733b2-e703-4e9b-9d05-a2f0fb632149,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5f74c108f82b2ed307aa632f5ac93b38ee44e820f09e472c783b88fc3e85dea,PodSandboxId:851a935cd845bfabffc15bc713fbc07cae49edb049ced591ba603e1bc0794a9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730317854081451711,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pzbpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f486ff4-c665-42ec-9b98-15ea94e0ded8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eace9317a4bc772273e495ce19c996a3e05eacd40bef6339365ea262ac446bd4,PodSandboxId:a56f02ca0c0cf146284c1236448311d49a36bab9b9711462cff3fc58a189a67d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730317853859330640,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hvg4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
9e7e143-3e12-4c1a-9fb0-6f58a37f8a55,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09f26f80fafe43090b0174ce499bbe5a1e15cd0da11f6685eaf6a6faf4063eab,PodSandboxId:a6d8ba30e9d3d30abde4f2366d7a9ef2ecc453287e38a5f70f584d1d67175abd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1730317853251578737,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m9zwz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7b6fb8b-2287-47c0-b9c8-a3b1c3020894,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d23071dddfccc3765d928cfe43b21550ee595642dfeba5139c84ff98b9cdd93d,PodSandboxId:ea0cbb84555bcdf5619bd4d47fd11922df9e05cd9969a2a4a145c54008b14d30,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730317842355050944
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-042402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf8c324f2a6f1da2c876b3a18a138983,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f4743cfe95c89eb417fe14d8ed4ea606b29e6e483908af0d689d1481f067287,PodSandboxId:0b07dad9e29e8faf367cd15ed328a0ae52dcf0dd9cbce15d670c32b4ea06303e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730317842358
288211,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-042402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 183c2b3d20119c6a2c9de773d190a17b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d09f07a6c8f7bfb428c44c79f6ad7dbcc4cf844fbb81a263172664f89475996,PodSandboxId:d680408625033a4b973f11e339e07db29c11c92f637cb169129fa5e3bcf5210a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730317842332395999,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-042402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a65eb1b79c9738a51bbee9c5c6a205a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b6cf7bbc2230a6b582b198b93c163531a59f107f74a5acdde62e4e4a633bcc0,PodSandboxId:ab0957dbbda0bc3b50a55f95c577df64a7f200f938858cf81ae28f7d9732a03d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730317842266315839,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-042402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ae65a89bb298ef7ccfc20cd482e7cf4,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dfb8854a7f882cfd95b81fea92e7c1c4fdc0deeb1fdb0fc5a2c137e9c7bfe79,PodSandboxId:f96d997ce5136a695a2a6c9717f8fb68371c0d5a0473d71e22de5f155be8bfa5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730317554573429547,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-042402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ae65a89bb298ef7ccfc20cd482e7cf4,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=db1dde11-c972-4927-a27a-38310eb2af07 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:00:05 embed-certs-042402 crio[720]: time="2024-10-30 20:00:05.962024569Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5cffd34c-6419-40ff-adab-ed7c336f8185 name=/runtime.v1.RuntimeService/Version
	Oct 30 20:00:05 embed-certs-042402 crio[720]: time="2024-10-30 20:00:05.962337765Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5cffd34c-6419-40ff-adab-ed7c336f8185 name=/runtime.v1.RuntimeService/Version
	Oct 30 20:00:05 embed-certs-042402 crio[720]: time="2024-10-30 20:00:05.963278321Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a394990d-90fd-40fd-be19-7518903eedfc name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:00:05 embed-certs-042402 crio[720]: time="2024-10-30 20:00:05.963809908Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318405963789444,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a394990d-90fd-40fd-be19-7518903eedfc name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:00:05 embed-certs-042402 crio[720]: time="2024-10-30 20:00:05.964282599Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f66f28c6-178c-4256-b1a0-c18b47a88dbf name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:00:05 embed-certs-042402 crio[720]: time="2024-10-30 20:00:05.964349943Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f66f28c6-178c-4256-b1a0-c18b47a88dbf name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:00:05 embed-certs-042402 crio[720]: time="2024-10-30 20:00:05.964537854Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0e6cc7d4df0e8a817cf435a8e959a096091802af481b013e11aafaf9d1b46af8,PodSandboxId:37da61aca6f683480fb99bf53a21816a25a25564141953ce658162067fb347d7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730317855012939746,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 729733b2-e703-4e9b-9d05-a2f0fb632149,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5f74c108f82b2ed307aa632f5ac93b38ee44e820f09e472c783b88fc3e85dea,PodSandboxId:851a935cd845bfabffc15bc713fbc07cae49edb049ced591ba603e1bc0794a9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730317854081451711,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pzbpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f486ff4-c665-42ec-9b98-15ea94e0ded8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eace9317a4bc772273e495ce19c996a3e05eacd40bef6339365ea262ac446bd4,PodSandboxId:a56f02ca0c0cf146284c1236448311d49a36bab9b9711462cff3fc58a189a67d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730317853859330640,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hvg4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
9e7e143-3e12-4c1a-9fb0-6f58a37f8a55,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09f26f80fafe43090b0174ce499bbe5a1e15cd0da11f6685eaf6a6faf4063eab,PodSandboxId:a6d8ba30e9d3d30abde4f2366d7a9ef2ecc453287e38a5f70f584d1d67175abd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1730317853251578737,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m9zwz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7b6fb8b-2287-47c0-b9c8-a3b1c3020894,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d23071dddfccc3765d928cfe43b21550ee595642dfeba5139c84ff98b9cdd93d,PodSandboxId:ea0cbb84555bcdf5619bd4d47fd11922df9e05cd9969a2a4a145c54008b14d30,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730317842355050944
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-042402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf8c324f2a6f1da2c876b3a18a138983,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f4743cfe95c89eb417fe14d8ed4ea606b29e6e483908af0d689d1481f067287,PodSandboxId:0b07dad9e29e8faf367cd15ed328a0ae52dcf0dd9cbce15d670c32b4ea06303e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730317842358
288211,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-042402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 183c2b3d20119c6a2c9de773d190a17b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d09f07a6c8f7bfb428c44c79f6ad7dbcc4cf844fbb81a263172664f89475996,PodSandboxId:d680408625033a4b973f11e339e07db29c11c92f637cb169129fa5e3bcf5210a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730317842332395999,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-042402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a65eb1b79c9738a51bbee9c5c6a205a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b6cf7bbc2230a6b582b198b93c163531a59f107f74a5acdde62e4e4a633bcc0,PodSandboxId:ab0957dbbda0bc3b50a55f95c577df64a7f200f938858cf81ae28f7d9732a03d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730317842266315839,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-042402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ae65a89bb298ef7ccfc20cd482e7cf4,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dfb8854a7f882cfd95b81fea92e7c1c4fdc0deeb1fdb0fc5a2c137e9c7bfe79,PodSandboxId:f96d997ce5136a695a2a6c9717f8fb68371c0d5a0473d71e22de5f155be8bfa5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730317554573429547,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-042402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ae65a89bb298ef7ccfc20cd482e7cf4,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f66f28c6-178c-4256-b1a0-c18b47a88dbf name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:00:06 embed-certs-042402 crio[720]: time="2024-10-30 20:00:06.010995269Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5b81631a-a00a-40b0-921b-bc52b680e744 name=/runtime.v1.RuntimeService/Version
	Oct 30 20:00:06 embed-certs-042402 crio[720]: time="2024-10-30 20:00:06.011285596Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5b81631a-a00a-40b0-921b-bc52b680e744 name=/runtime.v1.RuntimeService/Version
	Oct 30 20:00:06 embed-certs-042402 crio[720]: time="2024-10-30 20:00:06.014599515Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ccbcb597-219d-4e2d-b6e5-be81f69d3942 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:00:06 embed-certs-042402 crio[720]: time="2024-10-30 20:00:06.015234303Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318406015211922,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ccbcb597-219d-4e2d-b6e5-be81f69d3942 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:00:06 embed-certs-042402 crio[720]: time="2024-10-30 20:00:06.015739504Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4a05b892-3302-474d-aca6-9cbb14f7193e name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:00:06 embed-certs-042402 crio[720]: time="2024-10-30 20:00:06.015811417Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4a05b892-3302-474d-aca6-9cbb14f7193e name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:00:06 embed-certs-042402 crio[720]: time="2024-10-30 20:00:06.016024932Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0e6cc7d4df0e8a817cf435a8e959a096091802af481b013e11aafaf9d1b46af8,PodSandboxId:37da61aca6f683480fb99bf53a21816a25a25564141953ce658162067fb347d7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730317855012939746,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 729733b2-e703-4e9b-9d05-a2f0fb632149,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5f74c108f82b2ed307aa632f5ac93b38ee44e820f09e472c783b88fc3e85dea,PodSandboxId:851a935cd845bfabffc15bc713fbc07cae49edb049ced591ba603e1bc0794a9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730317854081451711,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pzbpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f486ff4-c665-42ec-9b98-15ea94e0ded8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eace9317a4bc772273e495ce19c996a3e05eacd40bef6339365ea262ac446bd4,PodSandboxId:a56f02ca0c0cf146284c1236448311d49a36bab9b9711462cff3fc58a189a67d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730317853859330640,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hvg4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
9e7e143-3e12-4c1a-9fb0-6f58a37f8a55,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09f26f80fafe43090b0174ce499bbe5a1e15cd0da11f6685eaf6a6faf4063eab,PodSandboxId:a6d8ba30e9d3d30abde4f2366d7a9ef2ecc453287e38a5f70f584d1d67175abd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1730317853251578737,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m9zwz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7b6fb8b-2287-47c0-b9c8-a3b1c3020894,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d23071dddfccc3765d928cfe43b21550ee595642dfeba5139c84ff98b9cdd93d,PodSandboxId:ea0cbb84555bcdf5619bd4d47fd11922df9e05cd9969a2a4a145c54008b14d30,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730317842355050944
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-042402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf8c324f2a6f1da2c876b3a18a138983,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f4743cfe95c89eb417fe14d8ed4ea606b29e6e483908af0d689d1481f067287,PodSandboxId:0b07dad9e29e8faf367cd15ed328a0ae52dcf0dd9cbce15d670c32b4ea06303e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730317842358
288211,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-042402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 183c2b3d20119c6a2c9de773d190a17b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d09f07a6c8f7bfb428c44c79f6ad7dbcc4cf844fbb81a263172664f89475996,PodSandboxId:d680408625033a4b973f11e339e07db29c11c92f637cb169129fa5e3bcf5210a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730317842332395999,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-042402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a65eb1b79c9738a51bbee9c5c6a205a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b6cf7bbc2230a6b582b198b93c163531a59f107f74a5acdde62e4e4a633bcc0,PodSandboxId:ab0957dbbda0bc3b50a55f95c577df64a7f200f938858cf81ae28f7d9732a03d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730317842266315839,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-042402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ae65a89bb298ef7ccfc20cd482e7cf4,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dfb8854a7f882cfd95b81fea92e7c1c4fdc0deeb1fdb0fc5a2c137e9c7bfe79,PodSandboxId:f96d997ce5136a695a2a6c9717f8fb68371c0d5a0473d71e22de5f155be8bfa5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730317554573429547,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-042402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ae65a89bb298ef7ccfc20cd482e7cf4,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4a05b892-3302-474d-aca6-9cbb14f7193e name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:00:06 embed-certs-042402 crio[720]: time="2024-10-30 20:00:06.046987229Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4a080d18-3502-4570-a677-1d944b1b96ca name=/runtime.v1.RuntimeService/Version
	Oct 30 20:00:06 embed-certs-042402 crio[720]: time="2024-10-30 20:00:06.047061097Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4a080d18-3502-4570-a677-1d944b1b96ca name=/runtime.v1.RuntimeService/Version
	Oct 30 20:00:06 embed-certs-042402 crio[720]: time="2024-10-30 20:00:06.048346628Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=512ce894-7a58-403d-b0f3-c112da10630f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:00:06 embed-certs-042402 crio[720]: time="2024-10-30 20:00:06.048755912Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318406048732208,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=512ce894-7a58-403d-b0f3-c112da10630f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:00:06 embed-certs-042402 crio[720]: time="2024-10-30 20:00:06.049374606Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c67acf12-0d38-4eb8-b571-ad5c886a5967 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:00:06 embed-certs-042402 crio[720]: time="2024-10-30 20:00:06.049462962Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c67acf12-0d38-4eb8-b571-ad5c886a5967 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:00:06 embed-certs-042402 crio[720]: time="2024-10-30 20:00:06.049668975Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0e6cc7d4df0e8a817cf435a8e959a096091802af481b013e11aafaf9d1b46af8,PodSandboxId:37da61aca6f683480fb99bf53a21816a25a25564141953ce658162067fb347d7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730317855012939746,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 729733b2-e703-4e9b-9d05-a2f0fb632149,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5f74c108f82b2ed307aa632f5ac93b38ee44e820f09e472c783b88fc3e85dea,PodSandboxId:851a935cd845bfabffc15bc713fbc07cae49edb049ced591ba603e1bc0794a9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730317854081451711,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pzbpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f486ff4-c665-42ec-9b98-15ea94e0ded8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eace9317a4bc772273e495ce19c996a3e05eacd40bef6339365ea262ac446bd4,PodSandboxId:a56f02ca0c0cf146284c1236448311d49a36bab9b9711462cff3fc58a189a67d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730317853859330640,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hvg4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
9e7e143-3e12-4c1a-9fb0-6f58a37f8a55,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09f26f80fafe43090b0174ce499bbe5a1e15cd0da11f6685eaf6a6faf4063eab,PodSandboxId:a6d8ba30e9d3d30abde4f2366d7a9ef2ecc453287e38a5f70f584d1d67175abd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1730317853251578737,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m9zwz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7b6fb8b-2287-47c0-b9c8-a3b1c3020894,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d23071dddfccc3765d928cfe43b21550ee595642dfeba5139c84ff98b9cdd93d,PodSandboxId:ea0cbb84555bcdf5619bd4d47fd11922df9e05cd9969a2a4a145c54008b14d30,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730317842355050944
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-042402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf8c324f2a6f1da2c876b3a18a138983,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f4743cfe95c89eb417fe14d8ed4ea606b29e6e483908af0d689d1481f067287,PodSandboxId:0b07dad9e29e8faf367cd15ed328a0ae52dcf0dd9cbce15d670c32b4ea06303e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730317842358
288211,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-042402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 183c2b3d20119c6a2c9de773d190a17b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d09f07a6c8f7bfb428c44c79f6ad7dbcc4cf844fbb81a263172664f89475996,PodSandboxId:d680408625033a4b973f11e339e07db29c11c92f637cb169129fa5e3bcf5210a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730317842332395999,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-042402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a65eb1b79c9738a51bbee9c5c6a205a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b6cf7bbc2230a6b582b198b93c163531a59f107f74a5acdde62e4e4a633bcc0,PodSandboxId:ab0957dbbda0bc3b50a55f95c577df64a7f200f938858cf81ae28f7d9732a03d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730317842266315839,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-042402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ae65a89bb298ef7ccfc20cd482e7cf4,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dfb8854a7f882cfd95b81fea92e7c1c4fdc0deeb1fdb0fc5a2c137e9c7bfe79,PodSandboxId:f96d997ce5136a695a2a6c9717f8fb68371c0d5a0473d71e22de5f155be8bfa5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730317554573429547,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-042402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ae65a89bb298ef7ccfc20cd482e7cf4,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c67acf12-0d38-4eb8-b571-ad5c886a5967 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0e6cc7d4df0e8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   37da61aca6f68       storage-provisioner
	c5f74c108f82b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   851a935cd845b       coredns-7c65d6cfc9-pzbpd
	eace9317a4bc7       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   a56f02ca0c0cf       coredns-7c65d6cfc9-hvg4g
	09f26f80fafe4       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   9 minutes ago       Running             kube-proxy                0                   a6d8ba30e9d3d       kube-proxy-m9zwz
	1f4743cfe95c8       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   9 minutes ago       Running             kube-scheduler            2                   0b07dad9e29e8       kube-scheduler-embed-certs-042402
	d23071dddfccc       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   9 minutes ago       Running             kube-controller-manager   2                   ea0cbb84555bc       kube-controller-manager-embed-certs-042402
	9d09f07a6c8f7       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   d680408625033       etcd-embed-certs-042402
	5b6cf7bbc2230       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   9 minutes ago       Running             kube-apiserver            2                   ab0957dbbda0b       kube-apiserver-embed-certs-042402
	1dfb8854a7f88       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   14 minutes ago      Exited              kube-apiserver            1                   f96d997ce5136       kube-apiserver-embed-certs-042402
	
	
	==> coredns [c5f74c108f82b2ed307aa632f5ac93b38ee44e820f09e472c783b88fc3e85dea] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [eace9317a4bc772273e495ce19c996a3e05eacd40bef6339365ea262ac446bd4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-042402
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-042402
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0
	                    minikube.k8s.io/name=embed-certs-042402
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_30T19_50_48_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Oct 2024 19:50:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-042402
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Oct 2024 19:59:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Oct 2024 19:56:03 +0000   Wed, 30 Oct 2024 19:50:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Oct 2024 19:56:03 +0000   Wed, 30 Oct 2024 19:50:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Oct 2024 19:56:03 +0000   Wed, 30 Oct 2024 19:50:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Oct 2024 19:56:03 +0000   Wed, 30 Oct 2024 19:50:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.235
	  Hostname:    embed-certs-042402
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6b38f1898611467081180a343ba5f2f3
	  System UUID:                6b38f189-8611-4670-8118-0a343ba5f2f3
	  Boot ID:                    cb97e997-3bf1-43f8-aad2-b3cee029cc5d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-hvg4g                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m14s
	  kube-system                 coredns-7c65d6cfc9-pzbpd                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m14s
	  kube-system                 etcd-embed-certs-042402                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m19s
	  kube-system                 kube-apiserver-embed-certs-042402             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-controller-manager-embed-certs-042402    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-proxy-m9zwz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 kube-scheduler-embed-certs-042402             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 metrics-server-6867b74b74-6hrq4               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m12s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m12s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m25s (x8 over 9m25s)  kubelet          Node embed-certs-042402 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m25s (x8 over 9m25s)  kubelet          Node embed-certs-042402 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m25s (x7 over 9m25s)  kubelet          Node embed-certs-042402 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m19s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m19s                  kubelet          Node embed-certs-042402 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m19s                  kubelet          Node embed-certs-042402 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m19s                  kubelet          Node embed-certs-042402 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m15s                  node-controller  Node embed-certs-042402 event: Registered Node embed-certs-042402 in Controller
	
	
	==> dmesg <==
	[  +0.058977] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039851] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.982106] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.555406] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.618641] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.247473] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.060467] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066660] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.200258] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.197130] systemd-fstab-generator[681]: Ignoring "noauto" option for root device
	[  +0.317090] systemd-fstab-generator[711]: Ignoring "noauto" option for root device
	[  +4.259400] systemd-fstab-generator[802]: Ignoring "noauto" option for root device
	[  +0.060185] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.420129] systemd-fstab-generator[923]: Ignoring "noauto" option for root device
	[  +4.592050] kauditd_printk_skb: 97 callbacks suppressed
	[Oct30 19:46] kauditd_printk_skb: 85 callbacks suppressed
	[Oct30 19:50] kauditd_printk_skb: 3 callbacks suppressed
	[  +1.255624] systemd-fstab-generator[2605]: Ignoring "noauto" option for root device
	[  +4.574996] kauditd_printk_skb: 58 callbacks suppressed
	[  +1.470330] systemd-fstab-generator[2925]: Ignoring "noauto" option for root device
	[  +5.920576] systemd-fstab-generator[3081]: Ignoring "noauto" option for root device
	[  +0.025536] kauditd_printk_skb: 14 callbacks suppressed
	[Oct30 19:51] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [9d09f07a6c8f7bfb428c44c79f6ad7dbcc4cf844fbb81a263172664f89475996] <==
	{"level":"info","ts":"2024-10-30T19:50:42.830752Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-30T19:50:42.830793Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.235:2380"}
	{"level":"info","ts":"2024-10-30T19:50:42.838129Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.235:2380"}
	{"level":"info","ts":"2024-10-30T19:50:42.832681Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"5c9ce5d2cd86398f","initial-advertise-peer-urls":["https://192.168.61.235:2380"],"listen-peer-urls":["https://192.168.61.235:2380"],"advertise-client-urls":["https://192.168.61.235:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.235:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-30T19:50:42.832754Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-30T19:50:43.023265Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5c9ce5d2cd86398f is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-30T19:50:43.023374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5c9ce5d2cd86398f became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-30T19:50:43.023403Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5c9ce5d2cd86398f received MsgPreVoteResp from 5c9ce5d2cd86398f at term 1"}
	{"level":"info","ts":"2024-10-30T19:50:43.023417Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5c9ce5d2cd86398f became candidate at term 2"}
	{"level":"info","ts":"2024-10-30T19:50:43.023495Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5c9ce5d2cd86398f received MsgVoteResp from 5c9ce5d2cd86398f at term 2"}
	{"level":"info","ts":"2024-10-30T19:50:43.023506Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5c9ce5d2cd86398f became leader at term 2"}
	{"level":"info","ts":"2024-10-30T19:50:43.023513Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 5c9ce5d2cd86398f elected leader 5c9ce5d2cd86398f at term 2"}
	{"level":"info","ts":"2024-10-30T19:50:43.028614Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"5c9ce5d2cd86398f","local-member-attributes":"{Name:embed-certs-042402 ClientURLs:[https://192.168.61.235:2379]}","request-path":"/0/members/5c9ce5d2cd86398f/attributes","cluster-id":"d507c5522fd9f0c3","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-30T19:50:43.028736Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-30T19:50:43.029178Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-30T19:50:43.033176Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-30T19:50:43.033870Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-30T19:50:43.039879Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-30T19:50:43.040884Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-30T19:50:43.072561Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.235:2379"}
	{"level":"info","ts":"2024-10-30T19:50:43.034167Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-30T19:50:43.072799Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-30T19:50:43.067161Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d507c5522fd9f0c3","local-member-id":"5c9ce5d2cd86398f","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-30T19:50:43.073024Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-30T19:50:43.073130Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 20:00:06 up 14 min,  0 users,  load average: 0.15, 0.14, 0.10
	Linux embed-certs-042402 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1dfb8854a7f882cfd95b81fea92e7c1c4fdc0deeb1fdb0fc5a2c137e9c7bfe79] <==
	W1030 19:50:34.917966       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:34.927756       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:34.936582       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:34.937973       1 logging.go:55] [core] [Channel #18 SubChannel #19]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:34.967965       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:35.045802       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:35.070475       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:35.103530       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:35.113277       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:35.152356       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:35.187501       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:35.208716       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:35.322581       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:35.399923       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:35.465480       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:35.631534       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:38.310591       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:38.649421       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:38.835846       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:39.013431       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:39.095769       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:39.213536       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:39.493627       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:39.702526       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:39.709989       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [5b6cf7bbc2230a6b582b198b93c163531a59f107f74a5acdde62e4e4a633bcc0] <==
	W1030 19:55:45.975338       1 handler_proxy.go:99] no RequestInfo found in the context
	E1030 19:55:45.975420       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1030 19:55:45.976467       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1030 19:55:45.976516       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1030 19:56:45.977573       1 handler_proxy.go:99] no RequestInfo found in the context
	E1030 19:56:45.977895       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1030 19:56:45.977975       1 handler_proxy.go:99] no RequestInfo found in the context
	E1030 19:56:45.978014       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1030 19:56:45.979203       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1030 19:56:45.979280       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1030 19:58:45.980263       1 handler_proxy.go:99] no RequestInfo found in the context
	E1030 19:58:45.980441       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1030 19:58:45.980263       1 handler_proxy.go:99] no RequestInfo found in the context
	E1030 19:58:45.980484       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1030 19:58:45.981599       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1030 19:58:45.981631       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [d23071dddfccc3765d928cfe43b21550ee595642dfeba5139c84ff98b9cdd93d] <==
	E1030 19:54:51.935727       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 19:54:52.386885       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 19:55:21.942904       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 19:55:22.396318       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 19:55:51.954619       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 19:55:52.404604       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1030 19:56:03.923622       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-042402"
	E1030 19:56:21.962608       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 19:56:22.412212       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 19:56:51.969926       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 19:56:52.421203       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1030 19:56:55.520946       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="237.975µs"
	I1030 19:57:08.515864       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="138.037µs"
	E1030 19:57:21.977134       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 19:57:22.431304       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 19:57:51.989419       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 19:57:52.440545       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 19:58:21.995675       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 19:58:22.447326       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 19:58:52.002422       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 19:58:52.455033       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 19:59:22.008425       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 19:59:22.462712       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 19:59:52.018908       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 19:59:52.470641       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [09f26f80fafe43090b0174ce499bbe5a1e15cd0da11f6685eaf6a6faf4063eab] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1030 19:50:53.723570       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1030 19:50:53.744038       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.235"]
	E1030 19:50:53.744143       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1030 19:50:53.873959       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1030 19:50:53.873993       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1030 19:50:53.874029       1 server_linux.go:169] "Using iptables Proxier"
	I1030 19:50:53.877302       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1030 19:50:53.879299       1 server.go:483] "Version info" version="v1.31.2"
	I1030 19:50:53.879486       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1030 19:50:53.883582       1 config.go:199] "Starting service config controller"
	I1030 19:50:53.884494       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1030 19:50:53.884573       1 config.go:105] "Starting endpoint slice config controller"
	I1030 19:50:53.884579       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1030 19:50:53.893791       1 config.go:328] "Starting node config controller"
	I1030 19:50:53.893811       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1030 19:50:53.987208       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1030 19:50:53.987251       1 shared_informer.go:320] Caches are synced for service config
	I1030 19:50:53.994607       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1f4743cfe95c89eb417fe14d8ed4ea606b29e6e483908af0d689d1481f067287] <==
	E1030 19:50:44.967929       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1030 19:50:44.965818       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1030 19:50:44.967980       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1030 19:50:44.966020       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1030 19:50:44.968030       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1030 19:50:44.966163       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1030 19:50:44.968143       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1030 19:50:44.966932       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1030 19:50:44.968197       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E1030 19:50:44.968247       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1030 19:50:45.956940       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1030 19:50:45.957013       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1030 19:50:46.033423       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1030 19:50:46.033518       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1030 19:50:46.039888       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1030 19:50:46.039978       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1030 19:50:46.116800       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1030 19:50:46.116926       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1030 19:50:46.122530       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1030 19:50:46.122645       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1030 19:50:46.129415       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1030 19:50:46.129456       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1030 19:50:46.170404       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1030 19:50:46.170711       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1030 19:50:47.858553       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 30 19:58:49 embed-certs-042402 kubelet[2932]: E1030 19:58:49.500228    2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-6hrq4" podUID="a5bb1778-0a28-4649-a2ac-a5f0e1b810de"
	Oct 30 19:58:57 embed-certs-042402 kubelet[2932]: E1030 19:58:57.667901    2932 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318337666805120,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 19:58:57 embed-certs-042402 kubelet[2932]: E1030 19:58:57.667952    2932 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318337666805120,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 19:59:02 embed-certs-042402 kubelet[2932]: E1030 19:59:02.500586    2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-6hrq4" podUID="a5bb1778-0a28-4649-a2ac-a5f0e1b810de"
	Oct 30 19:59:07 embed-certs-042402 kubelet[2932]: E1030 19:59:07.670612    2932 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318347670044566,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 19:59:07 embed-certs-042402 kubelet[2932]: E1030 19:59:07.670662    2932 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318347670044566,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 19:59:14 embed-certs-042402 kubelet[2932]: E1030 19:59:14.501895    2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-6hrq4" podUID="a5bb1778-0a28-4649-a2ac-a5f0e1b810de"
	Oct 30 19:59:17 embed-certs-042402 kubelet[2932]: E1030 19:59:17.674282    2932 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318357673372012,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 19:59:17 embed-certs-042402 kubelet[2932]: E1030 19:59:17.675043    2932 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318357673372012,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 19:59:27 embed-certs-042402 kubelet[2932]: E1030 19:59:27.676480    2932 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318367676211369,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 19:59:27 embed-certs-042402 kubelet[2932]: E1030 19:59:27.676530    2932 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318367676211369,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 19:59:29 embed-certs-042402 kubelet[2932]: E1030 19:59:29.503392    2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-6hrq4" podUID="a5bb1778-0a28-4649-a2ac-a5f0e1b810de"
	Oct 30 19:59:37 embed-certs-042402 kubelet[2932]: E1030 19:59:37.678122    2932 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318377677788762,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 19:59:37 embed-certs-042402 kubelet[2932]: E1030 19:59:37.678421    2932 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318377677788762,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 19:59:42 embed-certs-042402 kubelet[2932]: E1030 19:59:42.500633    2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-6hrq4" podUID="a5bb1778-0a28-4649-a2ac-a5f0e1b810de"
	Oct 30 19:59:47 embed-certs-042402 kubelet[2932]: E1030 19:59:47.527455    2932 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 30 19:59:47 embed-certs-042402 kubelet[2932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 30 19:59:47 embed-certs-042402 kubelet[2932]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 30 19:59:47 embed-certs-042402 kubelet[2932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 30 19:59:47 embed-certs-042402 kubelet[2932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 30 19:59:47 embed-certs-042402 kubelet[2932]: E1030 19:59:47.680155    2932 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318387679320162,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 19:59:47 embed-certs-042402 kubelet[2932]: E1030 19:59:47.680197    2932 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318387679320162,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 19:59:55 embed-certs-042402 kubelet[2932]: E1030 19:59:55.502525    2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-6hrq4" podUID="a5bb1778-0a28-4649-a2ac-a5f0e1b810de"
	Oct 30 19:59:57 embed-certs-042402 kubelet[2932]: E1030 19:59:57.685804    2932 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318397685590363,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 19:59:57 embed-certs-042402 kubelet[2932]: E1030 19:59:57.685845    2932 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318397685590363,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [0e6cc7d4df0e8a817cf435a8e959a096091802af481b013e11aafaf9d1b46af8] <==
	I1030 19:50:55.134044       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1030 19:50:55.142778       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1030 19:50:55.143208       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1030 19:50:55.153026       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1030 19:50:55.153984       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-042402_fc026841-e592-4d89-8391-54aa6923c56d!
	I1030 19:50:55.157063       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c2a93d23-4155-4c88-9cb9-f90384df7a5c", APIVersion:"v1", ResourceVersion:"436", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-042402_fc026841-e592-4d89-8391-54aa6923c56d became leader
	I1030 19:50:55.254775       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-042402_fc026841-e592-4d89-8391-54aa6923c56d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-042402 -n embed-certs-042402
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-042402 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-6hrq4
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-042402 describe pod metrics-server-6867b74b74-6hrq4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-042402 describe pod metrics-server-6867b74b74-6hrq4: exit status 1 (61.464848ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-6hrq4" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-042402 describe pod metrics-server-6867b74b74-6hrq4: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.30s)
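For reference, the post-mortem above reduces to two kubectl queries; a minimal sketch (assuming the embed-certs-042402 context is still present in the kubeconfig) of re-running them by hand:

	# Same query helpers_test.go uses: list pods not in phase Running, across all namespaces.
	kubectl --context embed-certs-042402 get po -A --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'

	# The report's describe call omits a namespace, so it looks in "default" and returns NotFound;
	# the kubelet log above places the pod in kube-system, so pass -n kube-system explicitly.
	kubectl --context embed-certs-042402 -n kube-system describe pod metrics-server-6867b74b74-6hrq4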

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1030 19:51:57.604649  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/auto-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:52:33.737463  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kindnet-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:53:17.243528  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/functional-683899/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:53:20.671625  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/auto-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:53:52.514508  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/calico-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:53:56.800636  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kindnet-534248/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-960512 -n no-preload-960512
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-10-30 20:00:23.121689483 +0000 UTC m=+5979.868873028
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
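The check that timed out is, roughly, a readiness wait on the dashboard pods after the stop/start cycle; an approximate hand-run equivalent (assuming a no-preload-960512 context in the kubeconfig, and noting the test's own polling helper is not literally kubectl wait):

	# Wait up to 9 minutes for dashboard pods to become Ready, mirroring the test's 9m0s budget.
	kubectl --context no-preload-960512 -n kubernetes-dashboard wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m

	# If the wait times out, list the pods to see whether they were ever scheduled at all.
	kubectl --context no-preload-960512 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide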
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-960512 -n no-preload-960512
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-960512 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-960512 logs -n 25: (2.088656157s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-534248 sudo cat                              | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-534248 sudo                                  | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-534248 sudo                                  | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-534248 sudo                                  | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-534248 sudo find                             | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-534248 sudo crio                             | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-534248                                       | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	| delete  | -p                                                     | disable-driver-mounts-113740 | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | disable-driver-mounts-113740                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-768989 | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:37 UTC |
	|         | default-k8s-diff-port-768989                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-960512             | no-preload-960512            | jenkins | v1.34.0 | 30 Oct 24 19:37 UTC | 30 Oct 24 19:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-960512                                   | no-preload-960512            | jenkins | v1.34.0 | 30 Oct 24 19:37 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-768989  | default-k8s-diff-port-768989 | jenkins | v1.34.0 | 30 Oct 24 19:38 UTC | 30 Oct 24 19:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-768989 | jenkins | v1.34.0 | 30 Oct 24 19:38 UTC |                     |
	|         | default-k8s-diff-port-768989                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-042402            | embed-certs-042402           | jenkins | v1.34.0 | 30 Oct 24 19:38 UTC | 30 Oct 24 19:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-042402                                  | embed-certs-042402           | jenkins | v1.34.0 | 30 Oct 24 19:38 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-516975        | old-k8s-version-516975       | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-960512                  | no-preload-960512            | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-768989       | default-k8s-diff-port-768989 | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-960512                                   | no-preload-960512            | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC | 30 Oct 24 19:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-042402                 | embed-certs-042402           | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-768989 | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC | 30 Oct 24 19:50 UTC |
	|         | default-k8s-diff-port-768989                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-042402                                  | embed-certs-042402           | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC | 30 Oct 24 19:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-516975                              | old-k8s-version-516975       | jenkins | v1.34.0 | 30 Oct 24 19:42 UTC | 30 Oct 24 19:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-516975             | old-k8s-version-516975       | jenkins | v1.34.0 | 30 Oct 24 19:42 UTC | 30 Oct 24 19:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-516975                              | old-k8s-version-516975       | jenkins | v1.34.0 | 30 Oct 24 19:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/30 19:42:11
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1030 19:42:11.799298  447486 out.go:345] Setting OutFile to fd 1 ...
	I1030 19:42:11.799434  447486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 19:42:11.799444  447486 out.go:358] Setting ErrFile to fd 2...
	I1030 19:42:11.799448  447486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 19:42:11.799628  447486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19883-381834/.minikube/bin
	I1030 19:42:11.800193  447486 out.go:352] Setting JSON to false
	I1030 19:42:11.801205  447486 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":12275,"bootTime":1730305057,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1030 19:42:11.801318  447486 start.go:139] virtualization: kvm guest
	I1030 19:42:11.803677  447486 out.go:177] * [old-k8s-version-516975] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1030 19:42:11.805274  447486 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 19:42:11.805300  447486 notify.go:220] Checking for updates...
	I1030 19:42:11.808043  447486 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 19:42:11.809440  447486 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 19:42:11.810604  447486 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 19:42:11.811774  447486 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1030 19:42:11.812958  447486 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 19:42:11.814552  447486 config.go:182] Loaded profile config "old-k8s-version-516975": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1030 19:42:11.814994  447486 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:42:11.815077  447486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:42:11.830315  447486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42249
	I1030 19:42:11.830795  447486 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:42:11.831345  447486 main.go:141] libmachine: Using API Version  1
	I1030 19:42:11.831365  447486 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:42:11.831692  447486 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:42:11.831869  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:42:11.833718  447486 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1030 19:42:11.835019  447486 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 19:42:11.835371  447486 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:42:11.835416  447486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:42:11.850097  447486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33199
	I1030 19:42:11.850532  447486 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:42:11.850964  447486 main.go:141] libmachine: Using API Version  1
	I1030 19:42:11.850978  447486 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:42:11.851321  447486 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:42:11.851541  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:42:11.886920  447486 out.go:177] * Using the kvm2 driver based on existing profile
	I1030 19:42:11.888376  447486 start.go:297] selected driver: kvm2
	I1030 19:42:11.888392  447486 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-516975 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-516975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:42:11.888538  447486 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 19:42:11.889472  447486 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 19:42:11.889560  447486 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19883-381834/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1030 19:42:11.904007  447486 install.go:137] /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1030 19:42:11.904405  447486 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 19:42:11.904443  447486 cni.go:84] Creating CNI manager for ""
	I1030 19:42:11.904494  447486 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:42:11.904549  447486 start.go:340] cluster config:
	{Name:old-k8s-version-516975 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-516975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:42:11.904661  447486 iso.go:125] acquiring lock: {Name:mk4a5115b605a7a5e9069193daa664d721189792 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 19:42:11.907302  447486 out.go:177] * Starting "old-k8s-version-516975" primary control-plane node in "old-k8s-version-516975" cluster
	I1030 19:42:10.622770  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:11.908430  447486 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1030 19:42:11.908474  447486 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1030 19:42:11.908485  447486 cache.go:56] Caching tarball of preloaded images
	I1030 19:42:11.908564  447486 preload.go:172] Found /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1030 19:42:11.908575  447486 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1030 19:42:11.908666  447486 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/config.json ...
	I1030 19:42:11.908832  447486 start.go:360] acquireMachinesLock for old-k8s-version-516975: {Name:mk35e25f53fa8cfadb39ca0ecdccfc2b3fbe845b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 19:42:16.702732  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:19.774825  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:25.854777  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:28.926846  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:35.006934  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:38.078752  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:44.158848  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:47.230843  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:53.310763  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:56.382772  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:02.462818  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:05.534754  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:11.614801  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:14.686762  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:20.766767  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:23.838853  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:29.918782  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:32.990752  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:39.070771  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:42.142716  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:48.222814  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:51.294775  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:57.374780  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:00.446825  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:06.526810  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:09.598813  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:15.678770  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:18.750751  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:24.830814  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:27.902810  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:33.982759  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:37.054791  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:43.134706  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:46.206802  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:52.286830  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:55.358809  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:45:01.438753  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:45:04.510854  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:45:07.515699  446887 start.go:364] duration metric: took 4m29.000646378s to acquireMachinesLock for "default-k8s-diff-port-768989"
	I1030 19:45:07.515764  446887 start.go:96] Skipping create...Using existing machine configuration
	I1030 19:45:07.515773  446887 fix.go:54] fixHost starting: 
	I1030 19:45:07.516191  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:07.516238  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:07.532374  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35177
	I1030 19:45:07.532907  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:07.533433  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:07.533459  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:07.533790  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:07.534016  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:07.534220  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetState
	I1030 19:45:07.535802  446887 fix.go:112] recreateIfNeeded on default-k8s-diff-port-768989: state=Stopped err=<nil>
	I1030 19:45:07.535842  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	W1030 19:45:07.536016  446887 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 19:45:07.537809  446887 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-768989" ...
	I1030 19:45:07.539184  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Start
	I1030 19:45:07.539361  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Ensuring networks are active...
	I1030 19:45:07.540025  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Ensuring network default is active
	I1030 19:45:07.540408  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Ensuring network mk-default-k8s-diff-port-768989 is active
	I1030 19:45:07.540867  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Getting domain xml...
	I1030 19:45:07.541489  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Creating domain...
	I1030 19:45:07.512810  446736 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 19:45:07.512848  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetMachineName
	I1030 19:45:07.513191  446736 buildroot.go:166] provisioning hostname "no-preload-960512"
	I1030 19:45:07.513223  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetMachineName
	I1030 19:45:07.513458  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:45:07.515538  446736 machine.go:96] duration metric: took 4m37.420773403s to provisionDockerMachine
	I1030 19:45:07.515594  446736 fix.go:56] duration metric: took 4m37.443968478s for fixHost
	I1030 19:45:07.515600  446736 start.go:83] releasing machines lock for "no-preload-960512", held for 4m37.443992524s
	W1030 19:45:07.515625  446736 start.go:714] error starting host: provision: host is not running
	W1030 19:45:07.515753  446736 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1030 19:45:07.515763  446736 start.go:729] Will try again in 5 seconds ...
	I1030 19:45:08.756310  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting to get IP...
	I1030 19:45:08.757242  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:08.757624  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:08.757747  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:08.757629  448092 retry.go:31] will retry after 202.103853ms: waiting for machine to come up
	I1030 19:45:08.961147  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:08.961660  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:08.961685  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:08.961606  448092 retry.go:31] will retry after 243.456761ms: waiting for machine to come up
	I1030 19:45:09.207134  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:09.207539  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:09.207582  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:09.207493  448092 retry.go:31] will retry after 375.017051ms: waiting for machine to come up
	I1030 19:45:09.584058  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:09.584428  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:09.584462  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:09.584373  448092 retry.go:31] will retry after 552.476692ms: waiting for machine to come up
	I1030 19:45:10.137989  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:10.138421  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:10.138449  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:10.138358  448092 retry.go:31] will retry after 560.865483ms: waiting for machine to come up
	I1030 19:45:10.700603  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:10.700968  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:10.700996  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:10.700920  448092 retry.go:31] will retry after 680.400693ms: waiting for machine to come up
	I1030 19:45:11.382861  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:11.383336  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:11.383362  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:11.383274  448092 retry.go:31] will retry after 787.136113ms: waiting for machine to come up
	I1030 19:45:12.171550  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:12.171910  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:12.171938  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:12.171853  448092 retry.go:31] will retry after 1.176474969s: waiting for machine to come up
	I1030 19:45:13.349617  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:13.350080  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:13.350114  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:13.350042  448092 retry.go:31] will retry after 1.211573437s: waiting for machine to come up
	I1030 19:45:12.517265  446736 start.go:360] acquireMachinesLock for no-preload-960512: {Name:mk35e25f53fa8cfadb39ca0ecdccfc2b3fbe845b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 19:45:14.563397  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:14.563782  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:14.563805  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:14.563749  448092 retry.go:31] will retry after 1.625938777s: waiting for machine to come up
	I1030 19:45:16.191798  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:16.192226  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:16.192255  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:16.192188  448092 retry.go:31] will retry after 2.442949682s: waiting for machine to come up
	I1030 19:45:18.636342  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:18.636768  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:18.636812  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:18.636748  448092 retry.go:31] will retry after 2.48415211s: waiting for machine to come up
	I1030 19:45:21.124407  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:21.124892  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:21.124919  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:21.124843  448092 retry.go:31] will retry after 3.392637796s: waiting for machine to come up
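The repeated "retry.go:31] will retry after …" lines above are a polling loop that waits for libvirt to hand the domain an IP address, sleeping a growing, jittered interval between checks. A minimal Go sketch of that pattern, assuming a generic check function (this is illustrative, not the driver's actual retry.go), could look like:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor polls check until it reports success, the check errors, or the
// timeout elapses, growing the delay between attempts roughly like the
// increasing waits shown in the log above.
func waitFor(check func() (bool, error), timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for {
		ok, err := check()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for machine to come up")
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2))) // add jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2 // back off
	}
}

func main() {
	start := time.Now()
	// Stand-in condition for "does the domain have an IP address yet?".
	hasIP := func() (bool, error) { return time.Since(start) > 2*time.Second, nil }
	if err := waitFor(hasIP, 30*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("machine is up")
}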
	I1030 19:45:25.815539  446965 start.go:364] duration metric: took 4m42.694254153s to acquireMachinesLock for "embed-certs-042402"
	I1030 19:45:25.815623  446965 start.go:96] Skipping create...Using existing machine configuration
	I1030 19:45:25.815635  446965 fix.go:54] fixHost starting: 
	I1030 19:45:25.816068  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:25.816232  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:25.833218  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37241
	I1030 19:45:25.833610  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:25.834159  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:45:25.834191  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:25.834567  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:25.834777  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:45:25.834920  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetState
	I1030 19:45:25.836507  446965 fix.go:112] recreateIfNeeded on embed-certs-042402: state=Stopped err=<nil>
	I1030 19:45:25.836532  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	W1030 19:45:25.836711  446965 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 19:45:25.839078  446965 out.go:177] * Restarting existing kvm2 VM for "embed-certs-042402" ...
	I1030 19:45:24.519725  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.520072  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Found IP for machine: 192.168.39.92
	I1030 19:45:24.520091  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Reserving static IP address...
	I1030 19:45:24.520113  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has current primary IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.520507  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-768989", mac: "52:54:00:98:b1:55", ip: "192.168.39.92"} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:24.520521  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Reserved static IP address: 192.168.39.92
	I1030 19:45:24.520535  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | skip adding static IP to network mk-default-k8s-diff-port-768989 - found existing host DHCP lease matching {name: "default-k8s-diff-port-768989", mac: "52:54:00:98:b1:55", ip: "192.168.39.92"}
	I1030 19:45:24.520545  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for SSH to be available...
	I1030 19:45:24.520560  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | Getting to WaitForSSH function...
	I1030 19:45:24.522776  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.523095  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:24.523127  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.523209  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | Using SSH client type: external
	I1030 19:45:24.523229  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa (-rw-------)
	I1030 19:45:24.523262  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.92 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 19:45:24.523283  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | About to run SSH command:
	I1030 19:45:24.523298  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | exit 0
	I1030 19:45:24.646297  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | SSH cmd err, output: <nil>: 
	I1030 19:45:24.646826  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetConfigRaw
	I1030 19:45:24.647589  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetIP
	I1030 19:45:24.650093  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.650532  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:24.650564  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.650790  446887 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/config.json ...
	I1030 19:45:24.650984  446887 machine.go:93] provisionDockerMachine start ...
	I1030 19:45:24.651005  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:24.651232  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:24.653396  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.653751  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:24.653781  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.653889  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:24.654084  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:24.654263  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:24.654449  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:24.654677  446887 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:24.654922  446887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1030 19:45:24.654935  446887 main.go:141] libmachine: About to run SSH command:
	hostname
	I1030 19:45:24.762586  446887 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1030 19:45:24.762621  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetMachineName
	I1030 19:45:24.762898  446887 buildroot.go:166] provisioning hostname "default-k8s-diff-port-768989"
	I1030 19:45:24.762936  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetMachineName
	I1030 19:45:24.763250  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:24.765937  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.766265  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:24.766289  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.766398  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:24.766599  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:24.766762  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:24.766920  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:24.767087  446887 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:24.767257  446887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1030 19:45:24.767269  446887 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-768989 && echo "default-k8s-diff-port-768989" | sudo tee /etc/hostname
	I1030 19:45:24.888742  446887 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-768989
	
	I1030 19:45:24.888771  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:24.891326  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.891638  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:24.891691  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.891804  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:24.892018  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:24.892154  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:24.892281  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:24.892498  446887 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:24.892692  446887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1030 19:45:24.892716  446887 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-768989' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-768989/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-768989' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 19:45:25.012173  446887 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 19:45:25.012214  446887 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 19:45:25.012240  446887 buildroot.go:174] setting up certificates
	I1030 19:45:25.012250  446887 provision.go:84] configureAuth start
	I1030 19:45:25.012280  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetMachineName
	I1030 19:45:25.012598  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetIP
	I1030 19:45:25.015106  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.015430  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.015458  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.015629  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:25.017810  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.018099  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.018136  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.018230  446887 provision.go:143] copyHostCerts
	I1030 19:45:25.018322  446887 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem, removing ...
	I1030 19:45:25.018334  446887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 19:45:25.018401  446887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 19:45:25.018553  446887 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem, removing ...
	I1030 19:45:25.018566  446887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 19:45:25.018634  446887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 19:45:25.018716  446887 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem, removing ...
	I1030 19:45:25.018724  446887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 19:45:25.018748  446887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 19:45:25.018798  446887 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-768989 san=[127.0.0.1 192.168.39.92 default-k8s-diff-port-768989 localhost minikube]
	I1030 19:45:25.188186  446887 provision.go:177] copyRemoteCerts
	I1030 19:45:25.188246  446887 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 19:45:25.188285  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:25.190995  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.191320  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.191344  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.191525  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:25.191718  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.191875  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:25.191991  446887 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa Username:docker}
	I1030 19:45:25.277273  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1030 19:45:25.300302  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1030 19:45:25.322919  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 19:45:25.347214  446887 provision.go:87] duration metric: took 334.947897ms to configureAuth
	I1030 19:45:25.347246  446887 buildroot.go:189] setting minikube options for container-runtime
	I1030 19:45:25.347432  446887 config.go:182] Loaded profile config "default-k8s-diff-port-768989": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:45:25.347510  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:25.349988  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.350294  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.350324  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.350500  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:25.350704  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.350836  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.351015  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:25.351210  446887 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:25.351421  446887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1030 19:45:25.351436  446887 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 19:45:25.576481  446887 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 19:45:25.576509  446887 machine.go:96] duration metric: took 925.509257ms to provisionDockerMachine
	I1030 19:45:25.576525  446887 start.go:293] postStartSetup for "default-k8s-diff-port-768989" (driver="kvm2")
	I1030 19:45:25.576562  446887 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 19:45:25.576589  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:25.576923  446887 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 19:45:25.576951  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:25.579498  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.579825  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.579841  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.579980  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:25.580151  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.580320  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:25.580453  446887 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa Username:docker}
	I1030 19:45:25.665032  446887 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 19:45:25.669402  446887 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 19:45:25.669430  446887 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 19:45:25.669500  446887 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 19:45:25.669573  446887 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 19:45:25.669665  446887 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 19:45:25.679070  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:45:25.703131  446887 start.go:296] duration metric: took 126.586543ms for postStartSetup
	I1030 19:45:25.703194  446887 fix.go:56] duration metric: took 18.187420989s for fixHost
	I1030 19:45:25.703217  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:25.705911  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.706365  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.706396  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.706609  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:25.706800  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.706944  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.707052  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:25.707188  446887 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:25.707428  446887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1030 19:45:25.707443  446887 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 19:45:25.815370  446887 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730317525.786848764
	
	I1030 19:45:25.815406  446887 fix.go:216] guest clock: 1730317525.786848764
	I1030 19:45:25.815414  446887 fix.go:229] Guest: 2024-10-30 19:45:25.786848764 +0000 UTC Remote: 2024-10-30 19:45:25.703198163 +0000 UTC m=+287.327380555 (delta=83.650601ms)
	I1030 19:45:25.815439  446887 fix.go:200] guest clock delta is within tolerance: 83.650601ms
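The guest-clock check above runs date +%s.%N on the VM, parses the seconds.nanoseconds string, and compares it against the host clock to decide whether the drift is acceptable. A small illustrative Go version of that comparison is below; the one-second tolerance and the helper name parseGuestClock are assumptions for the sketch, not values or code taken from the log.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns output like "1730317525.786848764" from
// `date +%s.%N` into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	secStr, fracStr, _ := strings.Cut(strings.TrimSpace(out), ".")
	sec, err := strconv.ParseInt(secStr, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if fracStr != "" {
		frac := (fracStr + "000000000")[:9] // normalize fractional part to nanoseconds
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1730317525.786848764")
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(time.Now())
	if delta < 0 {
		delta = -delta
	}
	tolerance := time.Second // assumed tolerance for this sketch
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
}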
	I1030 19:45:25.815445  446887 start.go:83] releasing machines lock for "default-k8s-diff-port-768989", held for 18.299702226s
	I1030 19:45:25.815467  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:25.815737  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetIP
	I1030 19:45:25.818508  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.818851  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.818889  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.818987  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:25.819477  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:25.819671  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:25.819808  446887 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 19:45:25.819862  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:25.819900  446887 ssh_runner.go:195] Run: cat /version.json
	I1030 19:45:25.819930  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:25.822372  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.822725  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.822754  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.822774  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.822887  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:25.823109  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.823145  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.823168  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.823330  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:25.823429  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:25.823506  446887 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa Username:docker}
	I1030 19:45:25.823605  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.823758  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:25.823880  446887 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa Username:docker}
	I1030 19:45:25.903488  446887 ssh_runner.go:195] Run: systemctl --version
	I1030 19:45:25.931046  446887 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 19:45:26.077178  446887 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 19:45:26.084282  446887 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 19:45:26.084358  446887 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 19:45:26.100869  446887 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 19:45:26.100893  446887 start.go:495] detecting cgroup driver to use...
	I1030 19:45:26.100984  446887 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 19:45:26.117006  446887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 19:45:26.130102  446887 docker.go:217] disabling cri-docker service (if available) ...
	I1030 19:45:26.130184  446887 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 19:45:26.148540  446887 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 19:45:26.163003  446887 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 19:45:26.286433  446887 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 19:45:26.444862  446887 docker.go:233] disabling docker service ...
	I1030 19:45:26.444931  446887 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 19:45:26.460606  446887 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 19:45:26.477159  446887 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 19:45:26.600212  446887 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 19:45:26.725587  446887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 19:45:26.741934  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 19:45:26.761815  446887 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1030 19:45:26.761872  446887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:26.772368  446887 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 19:45:26.772422  446887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:26.784279  446887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:26.795403  446887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:26.806323  446887 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 19:45:26.821929  446887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:26.836574  446887 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:26.857305  446887 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:26.868135  446887 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 19:45:26.878058  446887 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 19:45:26.878138  446887 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 19:45:26.891979  446887 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 19:45:26.902181  446887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:45:27.021858  446887 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1030 19:45:27.118890  446887 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 19:45:27.118985  446887 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 19:45:27.125407  446887 start.go:563] Will wait 60s for crictl version
	I1030 19:45:27.125472  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:45:27.129507  446887 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 19:45:27.176630  446887 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1030 19:45:27.176739  446887 ssh_runner.go:195] Run: crio --version
	I1030 19:45:27.205818  446887 ssh_runner.go:195] Run: crio --version
	I1030 19:45:27.236431  446887 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
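The crictl version output logged a few lines above (Version, RuntimeName, RuntimeVersion, RuntimeApiVersion) is plain key/value text. A throwaway Go sketch for pulling it into a struct, using illustrative field and function names rather than minikube's own types, might be:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

type runtimeVersion struct {
	Version           string
	RuntimeName       string
	RuntimeVersion    string
	RuntimeApiVersion string
}

// parseCrictlVersion scans "Key:  value" lines and fills the struct.
func parseCrictlVersion(out string) runtimeVersion {
	var rv runtimeVersion
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		key, val, ok := strings.Cut(sc.Text(), ":")
		if !ok {
			continue
		}
		val = strings.TrimSpace(val)
		switch strings.TrimSpace(key) {
		case "Version":
			rv.Version = val
		case "RuntimeName":
			rv.RuntimeName = val
		case "RuntimeVersion":
			rv.RuntimeVersion = val
		case "RuntimeApiVersion":
			rv.RuntimeApiVersion = val
		}
	}
	return rv
}

func main() {
	out := "Version:  0.1.0\nRuntimeName:  cri-o\nRuntimeVersion:  1.29.1\nRuntimeApiVersion:  v1\n"
	fmt.Printf("%+v\n", parseCrictlVersion(out))
}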
	I1030 19:45:25.840689  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Start
	I1030 19:45:25.840860  446965 main.go:141] libmachine: (embed-certs-042402) Ensuring networks are active...
	I1030 19:45:25.841604  446965 main.go:141] libmachine: (embed-certs-042402) Ensuring network default is active
	I1030 19:45:25.841928  446965 main.go:141] libmachine: (embed-certs-042402) Ensuring network mk-embed-certs-042402 is active
	I1030 19:45:25.842443  446965 main.go:141] libmachine: (embed-certs-042402) Getting domain xml...
	I1030 19:45:25.843267  446965 main.go:141] libmachine: (embed-certs-042402) Creating domain...
	I1030 19:45:27.094878  446965 main.go:141] libmachine: (embed-certs-042402) Waiting to get IP...
	I1030 19:45:27.095705  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:27.096101  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:27.096166  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:27.096079  448226 retry.go:31] will retry after 190.217394ms: waiting for machine to come up
	I1030 19:45:27.287473  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:27.287940  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:27.287966  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:27.287899  448226 retry.go:31] will retry after 365.943545ms: waiting for machine to come up
	I1030 19:45:27.655952  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:27.656374  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:27.656425  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:27.656343  448226 retry.go:31] will retry after 345.369581ms: waiting for machine to come up
	I1030 19:45:28.003856  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:28.004367  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:28.004398  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:28.004319  448226 retry.go:31] will retry after 609.6218ms: waiting for machine to come up
	I1030 19:45:27.237629  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetIP
	I1030 19:45:27.240387  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:27.240733  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:27.240779  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:27.240995  446887 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1030 19:45:27.245263  446887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:45:27.261305  446887 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-768989 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:default-k8s-diff-port-768989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1030 19:45:27.261440  446887 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 19:45:27.261489  446887 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:45:27.301593  446887 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1030 19:45:27.301650  446887 ssh_runner.go:195] Run: which lz4
	I1030 19:45:27.305829  446887 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1030 19:45:27.310384  446887 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1030 19:45:27.310413  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1030 19:45:28.615219  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:28.615769  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:28.615795  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:28.615716  448226 retry.go:31] will retry after 672.090411ms: waiting for machine to come up
	I1030 19:45:29.289646  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:29.290179  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:29.290216  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:29.290105  448226 retry.go:31] will retry after 865.239242ms: waiting for machine to come up
	I1030 19:45:30.157223  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:30.157650  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:30.157679  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:30.157616  448226 retry.go:31] will retry after 833.557181ms: waiting for machine to come up
	I1030 19:45:30.993139  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:30.993663  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:30.993720  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:30.993625  448226 retry.go:31] will retry after 989.333841ms: waiting for machine to come up
	I1030 19:45:31.983978  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:31.984498  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:31.984546  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:31.984443  448226 retry.go:31] will retry after 1.534311856s: waiting for machine to come up
	I1030 19:45:28.730765  446887 crio.go:462] duration metric: took 1.424975563s to copy over tarball
	I1030 19:45:28.730868  446887 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1030 19:45:30.907494  446887 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.1765829s)
	I1030 19:45:30.907536  446887 crio.go:469] duration metric: took 2.176738354s to extract the tarball
	I1030 19:45:30.907546  446887 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1030 19:45:30.944242  446887 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:45:30.986812  446887 crio.go:514] all images are preloaded for cri-o runtime.
	I1030 19:45:30.986839  446887 cache_images.go:84] Images are preloaded, skipping loading
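The preload sequence above checks the runtime's image list, finds no preloaded images, copies the lz4 preload tarball to the guest, and unpacks it into /var with tar before re-checking. The tar flags below are copied from the log; the os/exec wrapper around them is only a sketch of running that same command locally, not minikube's ssh_runner.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same tar invocation as in the log: unpack the lz4-compressed preload
	// tarball into /var, preserving security xattrs on the image files.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4",
		"-C", "/var",
		"-xf", "/preloaded.tar.lz4")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	fmt.Println("preloaded images extracted into /var")
}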
	I1030 19:45:30.986872  446887 kubeadm.go:934] updating node { 192.168.39.92 8444 v1.31.2 crio true true} ...
	I1030 19:45:30.987042  446887 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-768989 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.92
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-768989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1030 19:45:30.987145  446887 ssh_runner.go:195] Run: crio config
	I1030 19:45:31.037466  446887 cni.go:84] Creating CNI manager for ""
	I1030 19:45:31.037496  446887 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:45:31.037511  446887 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1030 19:45:31.037544  446887 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.92 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-768989 NodeName:default-k8s-diff-port-768989 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.92"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.92 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1030 19:45:31.037735  446887 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.92
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-768989"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.92"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.92"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1030 19:45:31.037815  446887 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1030 19:45:31.047808  446887 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 19:45:31.047885  446887 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1030 19:45:31.057074  446887 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1030 19:45:31.073022  446887 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 19:45:31.088919  446887 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
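	Note: the multi-document kubeadm/kubelet/kube-proxy config dumped above is what gets written to /var/tmp/minikube/kubeadm.yaml.new here. A minimal Go sketch (not part of the test harness; the path and the use of gopkg.in/yaml.v3 are illustrative assumptions) for confirming that each document in that file still parses as YAML:

	package main

	import (
		"fmt"
		"log"
		"os"
		"strings"

		"gopkg.in/yaml.v3"
	)

	func main() {
		// Path taken from the scp destination logged above; adjust as needed.
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			log.Fatal(err)
		}
		// The file holds several YAML documents separated by "---".
		for i, doc := range strings.Split(string(data), "\n---\n") {
			var node map[string]interface{}
			if err := yaml.Unmarshal([]byte(doc), &node); err != nil {
				log.Fatalf("document %d does not parse: %v", i, err)
			}
			fmt.Printf("document %d: kind=%v\n", i, node["kind"])
		}
	}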
	I1030 19:45:31.105357  446887 ssh_runner.go:195] Run: grep 192.168.39.92	control-plane.minikube.internal$ /etc/hosts
	I1030 19:45:31.109207  446887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.92	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:45:31.121329  446887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:45:31.234078  446887 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:45:31.251028  446887 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989 for IP: 192.168.39.92
	I1030 19:45:31.251057  446887 certs.go:194] generating shared ca certs ...
	I1030 19:45:31.251080  446887 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:45:31.251287  446887 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 19:45:31.251342  446887 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 19:45:31.251354  446887 certs.go:256] generating profile certs ...
	I1030 19:45:31.251480  446887 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/client.key
	I1030 19:45:31.251567  446887 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/apiserver.key.eeeafde8
	I1030 19:45:31.251620  446887 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/proxy-client.key
	I1030 19:45:31.251788  446887 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem (1338 bytes)
	W1030 19:45:31.251834  446887 certs.go:480] ignoring /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144_empty.pem, impossibly tiny 0 bytes
	I1030 19:45:31.251848  446887 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 19:45:31.251888  446887 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 19:45:31.251931  446887 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 19:45:31.251963  446887 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 19:45:31.252024  446887 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:45:31.253127  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 19:45:31.293822  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 19:45:31.334804  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 19:45:31.366955  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 19:45:31.396042  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1030 19:45:31.428748  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I1030 19:45:31.452866  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 19:45:31.476407  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1030 19:45:31.500375  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem --> /usr/share/ca-certificates/389144.pem (1338 bytes)
	I1030 19:45:31.523909  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /usr/share/ca-certificates/3891442.pem (1708 bytes)
	I1030 19:45:31.547532  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 19:45:31.571163  446887 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1030 19:45:31.587969  446887 ssh_runner.go:195] Run: openssl version
	I1030 19:45:31.593866  446887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 19:45:31.604538  446887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:45:31.609348  446887 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:45:31.609419  446887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:45:31.615446  446887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 19:45:31.626640  446887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/389144.pem && ln -fs /usr/share/ca-certificates/389144.pem /etc/ssl/certs/389144.pem"
	I1030 19:45:31.640948  446887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/389144.pem
	I1030 19:45:31.646702  446887 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 19:45:31.646751  446887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/389144.pem
	I1030 19:45:31.654365  446887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/389144.pem /etc/ssl/certs/51391683.0"
	I1030 19:45:31.668538  446887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3891442.pem && ln -fs /usr/share/ca-certificates/3891442.pem /etc/ssl/certs/3891442.pem"
	I1030 19:45:31.679201  446887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3891442.pem
	I1030 19:45:31.683631  446887 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 19:45:31.683693  446887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3891442.pem
	I1030 19:45:31.689362  446887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3891442.pem /etc/ssl/certs/3ec20f2e.0"
	I1030 19:45:31.699804  446887 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 19:45:31.704445  446887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1030 19:45:31.710558  446887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1030 19:45:31.718563  446887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1030 19:45:31.724745  446887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1030 19:45:31.731125  446887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1030 19:45:31.736828  446887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1030 19:45:31.742434  446887 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-768989 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-768989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:45:31.742604  446887 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1030 19:45:31.742654  446887 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:45:31.779319  446887 cri.go:89] found id: ""
	I1030 19:45:31.779416  446887 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1030 19:45:31.789556  446887 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1030 19:45:31.789576  446887 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1030 19:45:31.789622  446887 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1030 19:45:31.799817  446887 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1030 19:45:31.800824  446887 kubeconfig.go:125] found "default-k8s-diff-port-768989" server: "https://192.168.39.92:8444"
	I1030 19:45:31.803207  446887 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1030 19:45:31.812876  446887 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.92
	I1030 19:45:31.812909  446887 kubeadm.go:1160] stopping kube-system containers ...
	I1030 19:45:31.812924  446887 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1030 19:45:31.812984  446887 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:45:31.858070  446887 cri.go:89] found id: ""
	I1030 19:45:31.858174  446887 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1030 19:45:31.874923  446887 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:45:31.885243  446887 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:45:31.885275  446887 kubeadm.go:157] found existing configuration files:
	
	I1030 19:45:31.885321  446887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1030 19:45:31.894394  446887 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:45:31.894453  446887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:45:31.903760  446887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1030 19:45:31.912344  446887 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:45:31.912410  446887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:45:31.921458  446887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1030 19:45:31.930426  446887 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:45:31.930499  446887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:45:31.940008  446887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1030 19:45:31.949578  446887 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:45:31.949645  446887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1030 19:45:31.959022  446887 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 19:45:31.968457  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:32.069017  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:32.985574  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:33.191887  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:33.273266  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:33.400584  446887 api_server.go:52] waiting for apiserver process to appear ...
	I1030 19:45:33.400686  446887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:33.520596  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:33.521020  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:33.521041  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:33.520992  448226 retry.go:31] will retry after 1.787777673s: waiting for machine to come up
	I1030 19:45:35.310399  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:35.310878  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:35.310906  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:35.310833  448226 retry.go:31] will retry after 2.264310439s: waiting for machine to come up
	I1030 19:45:37.577787  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:37.578276  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:37.578310  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:37.578214  448226 retry.go:31] will retry after 2.384410161s: waiting for machine to come up
	I1030 19:45:33.901397  446887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:34.400978  446887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:34.901476  446887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:35.401772  446887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:35.420824  446887 api_server.go:72] duration metric: took 2.020238714s to wait for apiserver process to appear ...
	I1030 19:45:35.420862  446887 api_server.go:88] waiting for apiserver healthz status ...
	I1030 19:45:35.420889  446887 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8444/healthz ...
	I1030 19:45:37.795897  446887 api_server.go:279] https://192.168.39.92:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1030 19:45:37.795931  446887 api_server.go:103] status: https://192.168.39.92:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1030 19:45:37.795948  446887 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8444/healthz ...
	I1030 19:45:37.848032  446887 api_server.go:279] https://192.168.39.92:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1030 19:45:37.848069  446887 api_server.go:103] status: https://192.168.39.92:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1030 19:45:37.921286  446887 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8444/healthz ...
	I1030 19:45:37.930778  446887 api_server.go:279] https://192.168.39.92:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:45:37.930822  446887 api_server.go:103] status: https://192.168.39.92:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:45:38.421866  446887 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8444/healthz ...
	I1030 19:45:38.429247  446887 api_server.go:279] https://192.168.39.92:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:45:38.429291  446887 api_server.go:103] status: https://192.168.39.92:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:45:38.921655  446887 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8444/healthz ...
	I1030 19:45:38.928650  446887 api_server.go:279] https://192.168.39.92:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:45:38.928680  446887 api_server.go:103] status: https://192.168.39.92:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:45:39.421195  446887 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8444/healthz ...
	I1030 19:45:39.425565  446887 api_server.go:279] https://192.168.39.92:8444/healthz returned 200:
	ok
	I1030 19:45:39.433509  446887 api_server.go:141] control plane version: v1.31.2
	I1030 19:45:39.433543  446887 api_server.go:131] duration metric: took 4.01267362s to wait for apiserver health ...
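	Note: the api_server.go lines above show minikube polling https://192.168.39.92:8444/healthz until the initial 403/500 responses give way to a 200. A minimal Go sketch of that kind of readiness poll (not minikube's actual implementation; certificate verification is skipped here purely for brevity):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver answered "ok"
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.92:8444/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}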
	I1030 19:45:39.433555  446887 cni.go:84] Creating CNI manager for ""
	I1030 19:45:39.433564  446887 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:45:39.435645  446887 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1030 19:45:39.437042  446887 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1030 19:45:39.456091  446887 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1030 19:45:39.477617  446887 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 19:45:39.485998  446887 system_pods.go:59] 8 kube-system pods found
	I1030 19:45:39.486041  446887 system_pods.go:61] "coredns-7c65d6cfc9-9w8m8" [d285a845-e17e-4b87-837b-167dbd29f090] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1030 19:45:39.486051  446887 system_pods.go:61] "etcd-default-k8s-diff-port-768989" [400eedbb-ea1d-47a7-b342-ad29c8851552] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1030 19:45:39.486061  446887 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-768989" [40389878-99e8-4ae1-899b-a61fc3b7100b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1030 19:45:39.486071  446887 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-768989" [4d4ca5b4-d36d-4f69-9712-b7544c4c9a43] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1030 19:45:39.486082  446887 system_pods.go:61] "kube-proxy-tsr5q" [60ad5830-1638-4209-a2b3-ef78b1df3d34] Running
	I1030 19:45:39.486087  446887 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-768989" [af4c921c-9ea2-4bf3-84c2-8748c3c210bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1030 19:45:39.486092  446887 system_pods.go:61] "metrics-server-6867b74b74-t85rd" [8e162c99-2a94-4340-abe9-f1b312980444] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:45:39.486095  446887 system_pods.go:61] "storage-provisioner" [76805df9-1fbf-468d-a909-3b7c4f889e11] Running
	I1030 19:45:39.486101  446887 system_pods.go:74] duration metric: took 8.467537ms to wait for pod list to return data ...
	I1030 19:45:39.486110  446887 node_conditions.go:102] verifying NodePressure condition ...
	I1030 19:45:39.490771  446887 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 19:45:39.490793  446887 node_conditions.go:123] node cpu capacity is 2
	I1030 19:45:39.490805  446887 node_conditions.go:105] duration metric: took 4.690594ms to run NodePressure ...
	I1030 19:45:39.490821  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:39.752369  446887 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1030 19:45:39.757080  446887 kubeadm.go:739] kubelet initialised
	I1030 19:45:39.757105  446887 kubeadm.go:740] duration metric: took 4.707251ms waiting for restarted kubelet to initialise ...
	I1030 19:45:39.757114  446887 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:45:39.762374  446887 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-9w8m8" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:39.766904  446887 pod_ready.go:98] node "default-k8s-diff-port-768989" hosting pod "coredns-7c65d6cfc9-9w8m8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.766934  446887 pod_ready.go:82] duration metric: took 4.529466ms for pod "coredns-7c65d6cfc9-9w8m8" in "kube-system" namespace to be "Ready" ...
	E1030 19:45:39.766948  446887 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-768989" hosting pod "coredns-7c65d6cfc9-9w8m8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.766958  446887 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:39.771681  446887 pod_ready.go:98] node "default-k8s-diff-port-768989" hosting pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.771705  446887 pod_ready.go:82] duration metric: took 4.73772ms for pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	E1030 19:45:39.771715  446887 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-768989" hosting pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.771722  446887 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:39.776170  446887 pod_ready.go:98] node "default-k8s-diff-port-768989" hosting pod "kube-apiserver-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.776199  446887 pod_ready.go:82] duration metric: took 4.470353ms for pod "kube-apiserver-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	E1030 19:45:39.776211  446887 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-768989" hosting pod "kube-apiserver-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.776220  446887 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:39.881949  446887 pod_ready.go:98] node "default-k8s-diff-port-768989" hosting pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.881988  446887 pod_ready.go:82] duration metric: took 105.756203ms for pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	E1030 19:45:39.882027  446887 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-768989" hosting pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.882042  446887 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-tsr5q" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:40.281665  446887 pod_ready.go:98] node "default-k8s-diff-port-768989" hosting pod "kube-proxy-tsr5q" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:40.281703  446887 pod_ready.go:82] duration metric: took 399.651747ms for pod "kube-proxy-tsr5q" in "kube-system" namespace to be "Ready" ...
	E1030 19:45:40.281716  446887 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-768989" hosting pod "kube-proxy-tsr5q" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:40.281725  446887 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:40.680827  446887 pod_ready.go:98] node "default-k8s-diff-port-768989" hosting pod "kube-scheduler-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:40.680861  446887 pod_ready.go:82] duration metric: took 399.128654ms for pod "kube-scheduler-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	E1030 19:45:40.680873  446887 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-768989" hosting pod "kube-scheduler-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:40.680883  446887 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:41.086176  446887 pod_ready.go:98] node "default-k8s-diff-port-768989" hosting pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:41.086203  446887 pod_ready.go:82] duration metric: took 405.311117ms for pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace to be "Ready" ...
	E1030 19:45:41.086216  446887 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-768989" hosting pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:41.086225  446887 pod_ready.go:39] duration metric: took 1.32910228s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
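	Note: the pod_ready.go entries above wait for the system-critical pods to report Ready, and each wait is skipped while the node itself is still NotReady. A minimal client-go sketch (an illustrative assumption, not the test's code) that lists kube-system pods and prints whether each currently has the Ready condition, using the kubeconfig path that appears later in this log:

	package main

	import (
		"context"
		"fmt"
		"log"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path as written later in this log; adjust for other environments.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19883-381834/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			fmt.Printf("%s ready=%v\n", p.Name, ready)
		}
	}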
	I1030 19:45:41.086246  446887 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1030 19:45:41.100836  446887 ops.go:34] apiserver oom_adj: -16
	I1030 19:45:41.100871  446887 kubeadm.go:597] duration metric: took 9.31128777s to restartPrimaryControlPlane
	I1030 19:45:41.100887  446887 kubeadm.go:394] duration metric: took 9.358460424s to StartCluster
	I1030 19:45:41.100915  446887 settings.go:142] acquiring lock: {Name:mk3778f95b8c1775512fd8244c173f81fde2f8a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:45:41.101046  446887 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 19:45:41.103578  446887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/kubeconfig: {Name:mkad9ac9beb9742fc1a420e6d4412db8b65962e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:45:41.103910  446887 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.92 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 19:45:41.103995  446887 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1030 19:45:41.104111  446887 config.go:182] Loaded profile config "default-k8s-diff-port-768989": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:45:41.104131  446887 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-768989"
	I1030 19:45:41.104151  446887 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-768989"
	W1030 19:45:41.104159  446887 addons.go:243] addon storage-provisioner should already be in state true
	I1030 19:45:41.104175  446887 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-768989"
	I1030 19:45:41.104198  446887 host.go:66] Checking if "default-k8s-diff-port-768989" exists ...
	I1030 19:45:41.104207  446887 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-768989"
	W1030 19:45:41.104218  446887 addons.go:243] addon metrics-server should already be in state true
	I1030 19:45:41.104153  446887 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-768989"
	I1030 19:45:41.104255  446887 host.go:66] Checking if "default-k8s-diff-port-768989" exists ...
	I1030 19:45:41.104258  446887 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-768989"
	I1030 19:45:41.104672  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:41.104683  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:41.104694  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:41.104718  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:41.104728  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:41.104730  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:41.105606  446887 out.go:177] * Verifying Kubernetes components...
	I1030 19:45:41.107136  446887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:45:41.121415  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45903
	I1030 19:45:41.122053  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:41.122694  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:41.122721  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:41.123073  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:41.123682  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:41.123733  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:41.125497  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36933
	I1030 19:45:41.125546  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43659
	I1030 19:45:41.125878  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:41.125962  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:41.126425  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:41.126445  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:41.126465  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:41.126507  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:41.126840  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:41.126897  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:41.127362  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:41.127392  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:41.127590  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetState
	I1030 19:45:41.131397  446887 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-768989"
	W1030 19:45:41.131424  446887 addons.go:243] addon default-storageclass should already be in state true
	I1030 19:45:41.131457  446887 host.go:66] Checking if "default-k8s-diff-port-768989" exists ...
	I1030 19:45:41.131834  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:41.131877  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:41.143183  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34953
	I1030 19:45:41.143221  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35029
	I1030 19:45:41.143628  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:41.143765  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:41.144231  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:41.144249  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:41.144369  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:41.144392  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:41.144657  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:41.144766  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:41.144879  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetState
	I1030 19:45:41.144926  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetState
	I1030 19:45:41.146739  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:41.146913  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:41.148740  446887 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1030 19:45:41.148794  446887 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:45:41.149853  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33425
	I1030 19:45:41.150250  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:41.150397  446887 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1030 19:45:41.150435  446887 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1030 19:45:41.150462  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:41.150525  446887 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 19:45:41.150545  446887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1030 19:45:41.150562  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:41.150763  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:41.150781  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:41.151168  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:41.152135  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:41.152184  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:41.154133  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:41.154425  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:41.154625  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:41.154654  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:41.154811  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:41.154996  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:41.155033  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:41.155059  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:41.155145  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:41.155214  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:41.155310  446887 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa Username:docker}
	I1030 19:45:41.155345  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:41.155464  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:41.155548  446887 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa Username:docker}
	I1030 19:45:41.168971  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43017
	I1030 19:45:41.169445  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:41.169946  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:41.169969  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:41.170335  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:41.170508  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetState
	I1030 19:45:41.172162  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:41.172378  446887 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1030 19:45:41.172394  446887 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1030 19:45:41.172410  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:41.175214  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:41.175617  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:41.175643  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:41.175795  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:41.175978  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:41.176133  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:41.176301  446887 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa Username:docker}
	I1030 19:45:41.324093  446887 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:45:41.381986  446887 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-768989" to be "Ready" ...
	I1030 19:45:41.439497  446887 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1030 19:45:41.439522  446887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1030 19:45:41.448751  446887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1030 19:45:41.486707  446887 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1030 19:45:41.486736  446887 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1030 19:45:41.514478  446887 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1030 19:45:41.514513  446887 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1030 19:45:41.546821  446887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1030 19:45:41.590509  446887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 19:45:41.879189  446887 main.go:141] libmachine: Making call to close driver server
	I1030 19:45:41.879224  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Close
	I1030 19:45:41.879548  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | Closing plugin on server side
	I1030 19:45:41.879597  446887 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:45:41.879608  446887 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:45:41.879622  446887 main.go:141] libmachine: Making call to close driver server
	I1030 19:45:41.879632  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Close
	I1030 19:45:41.879868  446887 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:45:41.879886  446887 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:45:41.889008  446887 main.go:141] libmachine: Making call to close driver server
	I1030 19:45:41.889024  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Close
	I1030 19:45:41.889273  446887 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:45:41.889290  446887 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:45:42.499223  446887 main.go:141] libmachine: Making call to close driver server
	I1030 19:45:42.499250  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Close
	I1030 19:45:42.499597  446887 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:45:42.499621  446887 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:45:42.499632  446887 main.go:141] libmachine: Making call to close driver server
	I1030 19:45:42.499689  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Close
	I1030 19:45:42.499969  446887 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:45:42.499984  446887 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:45:42.499996  446887 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-768989"
	I1030 19:45:42.598713  446887 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.008157275s)
	I1030 19:45:42.598770  446887 main.go:141] libmachine: Making call to close driver server
	I1030 19:45:42.598782  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Close
	I1030 19:45:42.599088  446887 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:45:42.599109  446887 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:45:42.599117  446887 main.go:141] libmachine: Making call to close driver server
	I1030 19:45:42.599143  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | Closing plugin on server side
	I1030 19:45:42.599201  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Close
	I1030 19:45:42.599447  446887 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:45:42.599461  446887 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:45:42.601840  446887 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I1030 19:45:39.963885  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:39.964308  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:39.964346  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:39.964250  448226 retry.go:31] will retry after 4.32150593s: waiting for machine to come up
	I1030 19:45:42.603197  446887 addons.go:510] duration metric: took 1.499214294s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I1030 19:45:43.386074  446887 node_ready.go:53] node "default-k8s-diff-port-768989" has status "Ready":"False"
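For anyone replaying this step outside the test harness: the addon enable recorded above reduces to copying the rendered manifests into /etc/kubernetes/addons on the node and applying them with the kubectl binary minikube stages there. A minimal sketch of the equivalent on-node commands, taken from the Run: lines above (paths and the v1.31.2 binary location are as logged; the manifest contents are generated by minikube and not reproduced here):

    # metrics-server pieces are applied together, the storage addons separately
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.31.2/kubectl apply \
      -f /etc/kubernetes/addons/metrics-apiservice.yaml \
      -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
      -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
      -f /etc/kubernetes/addons/metrics-server-service.yaml
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml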
	I1030 19:45:45.631177  447486 start.go:364] duration metric: took 3m33.722307877s to acquireMachinesLock for "old-k8s-version-516975"
	I1030 19:45:45.631272  447486 start.go:96] Skipping create...Using existing machine configuration
	I1030 19:45:45.631284  447486 fix.go:54] fixHost starting: 
	I1030 19:45:45.631708  447486 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:45.631767  447486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:45.648654  447486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40707
	I1030 19:45:45.649098  447486 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:45.649552  447486 main.go:141] libmachine: Using API Version  1
	I1030 19:45:45.649574  447486 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:45.649848  447486 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:45.650005  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:45:45.650153  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetState
	I1030 19:45:45.651624  447486 fix.go:112] recreateIfNeeded on old-k8s-version-516975: state=Stopped err=<nil>
	I1030 19:45:45.651661  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	W1030 19:45:45.651805  447486 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 19:45:45.654065  447486 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-516975" ...
	I1030 19:45:45.655382  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .Start
	I1030 19:45:45.655554  447486 main.go:141] libmachine: (old-k8s-version-516975) Ensuring networks are active...
	I1030 19:45:45.656134  447486 main.go:141] libmachine: (old-k8s-version-516975) Ensuring network default is active
	I1030 19:45:45.656518  447486 main.go:141] libmachine: (old-k8s-version-516975) Ensuring network mk-old-k8s-version-516975 is active
	I1030 19:45:45.656885  447486 main.go:141] libmachine: (old-k8s-version-516975) Getting domain xml...
	I1030 19:45:45.657501  447486 main.go:141] libmachine: (old-k8s-version-516975) Creating domain...
	I1030 19:45:44.289530  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.289944  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has current primary IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.289965  446965 main.go:141] libmachine: (embed-certs-042402) Found IP for machine: 192.168.61.235
	I1030 19:45:44.289978  446965 main.go:141] libmachine: (embed-certs-042402) Reserving static IP address...
	I1030 19:45:44.290419  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "embed-certs-042402", mac: "52:54:00:61:aa:58", ip: "192.168.61.235"} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.290450  446965 main.go:141] libmachine: (embed-certs-042402) Reserved static IP address: 192.168.61.235
	I1030 19:45:44.290469  446965 main.go:141] libmachine: (embed-certs-042402) DBG | skip adding static IP to network mk-embed-certs-042402 - found existing host DHCP lease matching {name: "embed-certs-042402", mac: "52:54:00:61:aa:58", ip: "192.168.61.235"}
	I1030 19:45:44.290502  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Getting to WaitForSSH function...
	I1030 19:45:44.290519  446965 main.go:141] libmachine: (embed-certs-042402) Waiting for SSH to be available...
	I1030 19:45:44.292418  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.292684  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.292727  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.292750  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Using SSH client type: external
	I1030 19:45:44.292785  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa (-rw-------)
	I1030 19:45:44.292839  446965 main.go:141] libmachine: (embed-certs-042402) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.235 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 19:45:44.292856  446965 main.go:141] libmachine: (embed-certs-042402) DBG | About to run SSH command:
	I1030 19:45:44.292873  446965 main.go:141] libmachine: (embed-certs-042402) DBG | exit 0
	I1030 19:45:44.414810  446965 main.go:141] libmachine: (embed-certs-042402) DBG | SSH cmd err, output: <nil>: 
	I1030 19:45:44.415211  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetConfigRaw
	I1030 19:45:44.416039  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetIP
	I1030 19:45:44.418830  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.419269  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.419303  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.419529  446965 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/config.json ...
	I1030 19:45:44.419832  446965 machine.go:93] provisionDockerMachine start ...
	I1030 19:45:44.419859  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:45:44.420102  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:44.422359  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.422704  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.422729  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.422878  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:44.423072  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:44.423217  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:44.423355  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:44.423493  446965 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:44.423677  446965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.235 22 <nil> <nil>}
	I1030 19:45:44.423685  446965 main.go:141] libmachine: About to run SSH command:
	hostname
	I1030 19:45:44.527214  446965 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1030 19:45:44.527248  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetMachineName
	I1030 19:45:44.527526  446965 buildroot.go:166] provisioning hostname "embed-certs-042402"
	I1030 19:45:44.527562  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetMachineName
	I1030 19:45:44.527793  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:44.530474  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.530830  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.530856  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.531041  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:44.531243  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:44.531432  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:44.531563  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:44.531736  446965 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:44.531958  446965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.235 22 <nil> <nil>}
	I1030 19:45:44.531979  446965 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-042402 && echo "embed-certs-042402" | sudo tee /etc/hostname
	I1030 19:45:44.656963  446965 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-042402
	
	I1030 19:45:44.656996  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:44.659958  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.660361  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.660397  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.660643  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:44.660842  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:44.661025  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:44.661122  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:44.661295  446965 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:44.661469  446965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.235 22 <nil> <nil>}
	I1030 19:45:44.661484  446965 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-042402' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-042402/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-042402' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 19:45:44.771688  446965 main.go:141] libmachine: SSH cmd err, output: <nil>: 
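The two SSH commands above set the guest hostname and then make sure it resolves locally: provisionDockerMachine writes the name to /etc/hostname and adds (or rewrites) a 127.0.1.1 entry in /etc/hosts. A compressed sketch of the same sequence with the machine name pulled into a variable; it mirrors the logged script rather than adding anything new:

    NAME=embed-certs-042402
    sudo hostname "$NAME" && echo "$NAME" | sudo tee /etc/hostname
    if ! grep -xq ".*\s$NAME" /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 $NAME/g" /etc/hosts
      else
        echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts
      fi
    fi

Keeping the 127.0.1.1 mapping current lets tools on the guest resolve the new hostname without relying on DNS.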
	I1030 19:45:44.771728  446965 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 19:45:44.771755  446965 buildroot.go:174] setting up certificates
	I1030 19:45:44.771766  446965 provision.go:84] configureAuth start
	I1030 19:45:44.771780  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetMachineName
	I1030 19:45:44.772120  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetIP
	I1030 19:45:44.774838  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.775271  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.775298  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.775424  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:44.777432  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.777765  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.777793  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.777910  446965 provision.go:143] copyHostCerts
	I1030 19:45:44.777990  446965 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem, removing ...
	I1030 19:45:44.778006  446965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 19:45:44.778057  446965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 19:45:44.778147  446965 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem, removing ...
	I1030 19:45:44.778155  446965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 19:45:44.778174  446965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 19:45:44.778229  446965 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem, removing ...
	I1030 19:45:44.778237  446965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 19:45:44.778253  446965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 19:45:44.778360  446965 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.embed-certs-042402 san=[127.0.0.1 192.168.61.235 embed-certs-042402 localhost minikube]
	I1030 19:45:45.019172  446965 provision.go:177] copyRemoteCerts
	I1030 19:45:45.019234  446965 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 19:45:45.019265  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:45.022052  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.022402  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:45.022435  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.022590  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:45.022788  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.022969  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:45.023123  446965 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa Username:docker}
	I1030 19:45:45.104733  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 19:45:45.128256  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1030 19:45:45.150758  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1030 19:45:45.173233  446965 provision.go:87] duration metric: took 401.450922ms to configureAuth
	I1030 19:45:45.173268  446965 buildroot.go:189] setting minikube options for container-runtime
	I1030 19:45:45.173465  446965 config.go:182] Loaded profile config "embed-certs-042402": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:45:45.173562  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:45.176259  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.176663  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:45.176698  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.176826  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:45.177025  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.177190  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.177364  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:45.177554  446965 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:45.177724  446965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.235 22 <nil> <nil>}
	I1030 19:45:45.177737  446965 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 19:45:45.396562  446965 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 19:45:45.396593  446965 machine.go:96] duration metric: took 976.740759ms to provisionDockerMachine
	I1030 19:45:45.396606  446965 start.go:293] postStartSetup for "embed-certs-042402" (driver="kvm2")
	I1030 19:45:45.396616  446965 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 19:45:45.396644  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:45:45.397007  446965 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 19:45:45.397048  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:45.399581  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.399930  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:45.399955  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.400045  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:45.400219  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.400373  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:45.400483  446965 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa Username:docker}
	I1030 19:45:45.481722  446965 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 19:45:45.487207  446965 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 19:45:45.487231  446965 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 19:45:45.487304  446965 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 19:45:45.487398  446965 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 19:45:45.487516  446965 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 19:45:45.500340  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:45:45.524930  446965 start.go:296] duration metric: took 128.310254ms for postStartSetup
	I1030 19:45:45.524972  446965 fix.go:56] duration metric: took 19.709339085s for fixHost
	I1030 19:45:45.524993  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:45.527426  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.527751  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:45.527775  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.527931  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:45.528145  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.528326  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.528450  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:45.528591  446965 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:45.528804  446965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.235 22 <nil> <nil>}
	I1030 19:45:45.528815  446965 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 19:45:45.630961  446965 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730317545.604586107
	
	I1030 19:45:45.630997  446965 fix.go:216] guest clock: 1730317545.604586107
	I1030 19:45:45.631020  446965 fix.go:229] Guest: 2024-10-30 19:45:45.604586107 +0000 UTC Remote: 2024-10-30 19:45:45.524975841 +0000 UTC m=+302.540999350 (delta=79.610266ms)
	I1030 19:45:45.631054  446965 fix.go:200] guest clock delta is within tolerance: 79.610266ms
	I1030 19:45:45.631062  446965 start.go:83] releasing machines lock for "embed-certs-042402", held for 19.81546348s
	I1030 19:45:45.631109  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:45:45.631396  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetIP
	I1030 19:45:45.634114  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.634524  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:45.634558  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.634739  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:45:45.635353  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:45:45.635532  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:45:45.635646  446965 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 19:45:45.635692  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:45.635746  446965 ssh_runner.go:195] Run: cat /version.json
	I1030 19:45:45.635775  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:45.638260  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.638639  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:45.638694  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.638718  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.638931  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:45.639108  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:45.639128  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.639160  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.639260  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:45.639371  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:45.639440  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.639509  446965 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa Username:docker}
	I1030 19:45:45.639581  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:45.639723  446965 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa Username:docker}
	I1030 19:45:45.747515  446965 ssh_runner.go:195] Run: systemctl --version
	I1030 19:45:45.754851  446965 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 19:45:45.904471  446965 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 19:45:45.911348  446965 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 19:45:45.911428  446965 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 19:45:45.928273  446965 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 19:45:45.928299  446965 start.go:495] detecting cgroup driver to use...
	I1030 19:45:45.928381  446965 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 19:45:45.949100  446965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 19:45:45.963284  446965 docker.go:217] disabling cri-docker service (if available) ...
	I1030 19:45:45.963362  446965 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 19:45:45.976952  446965 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 19:45:45.991367  446965 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 19:45:46.104670  446965 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 19:45:46.254049  446965 docker.go:233] disabling docker service ...
	I1030 19:45:46.254130  446965 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 19:45:46.273226  446965 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 19:45:46.290211  446965 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 19:45:46.491658  446965 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 19:45:46.637447  446965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 19:45:46.654517  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 19:45:46.679786  446965 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1030 19:45:46.679879  446965 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:46.695487  446965 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 19:45:46.695570  446965 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:46.708974  446965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:46.724847  446965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:46.736912  446965 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 19:45:46.749015  446965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:46.761190  446965 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:46.780198  446965 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:46.790865  446965 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 19:45:46.800950  446965 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 19:45:46.801029  446965 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 19:45:46.814792  446965 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 19:45:46.825490  446965 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:45:46.952367  446965 ssh_runner.go:195] Run: sudo systemctl restart crio
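The block from 19:45:46.654 to 19:45:46.952 above configures cri-o by editing /etc/crio/crio.conf.d/02-crio.conf in place and then restarting the service. Consolidated into one sketch for readability; the main commands are the ones logged, only regrouped:

    # point crictl at the cri-o socket
    printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
    # pause image and cgroup settings
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    # let pods bind ports below 1024 without extra capabilities
    sudo grep -q '^ *default_sysctls' /etc/crio/crio.conf.d/02-crio.conf || \
      sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf
    # kernel prerequisites, then restart
    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio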
	I1030 19:45:47.054874  446965 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 19:45:47.054962  446965 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 19:45:47.061036  446965 start.go:563] Will wait 60s for crictl version
	I1030 19:45:47.061105  446965 ssh_runner.go:195] Run: which crictl
	I1030 19:45:47.064917  446965 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 19:45:47.101690  446965 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1030 19:45:47.101796  446965 ssh_runner.go:195] Run: crio --version
	I1030 19:45:47.131286  446965 ssh_runner.go:195] Run: crio --version
	I1030 19:45:47.166314  446965 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1030 19:45:47.167861  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetIP
	I1030 19:45:47.171097  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:47.171438  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:47.171466  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:47.171737  446965 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1030 19:45:47.177796  446965 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:45:47.191930  446965 kubeadm.go:883] updating cluster {Name:embed-certs-042402 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-042402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.235 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1030 19:45:47.192090  446965 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 19:45:47.192149  446965 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:45:47.231586  446965 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1030 19:45:47.231672  446965 ssh_runner.go:195] Run: which lz4
	I1030 19:45:47.236190  446965 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1030 19:45:47.240803  446965 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1030 19:45:47.240888  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1030 19:45:45.386683  446887 node_ready.go:53] node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:47.386771  446887 node_ready.go:53] node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:48.387313  446887 node_ready.go:49] node "default-k8s-diff-port-768989" has status "Ready":"True"
	I1030 19:45:48.387344  446887 node_ready.go:38] duration metric: took 7.005318984s for node "default-k8s-diff-port-768989" to be "Ready" ...
	I1030 19:45:48.387359  446887 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:45:48.395198  446887 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9w8m8" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:48.401276  446887 pod_ready.go:93] pod "coredns-7c65d6cfc9-9w8m8" in "kube-system" namespace has status "Ready":"True"
	I1030 19:45:48.401306  446887 pod_ready.go:82] duration metric: took 6.071305ms for pod "coredns-7c65d6cfc9-9w8m8" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:48.401321  446887 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:47.003397  447486 main.go:141] libmachine: (old-k8s-version-516975) Waiting to get IP...
	I1030 19:45:47.004281  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:47.004710  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:47.004787  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:47.004695  448432 retry.go:31] will retry after 234.659459ms: waiting for machine to come up
	I1030 19:45:47.241308  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:47.241838  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:47.241863  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:47.241802  448432 retry.go:31] will retry after 350.804975ms: waiting for machine to come up
	I1030 19:45:47.594533  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:47.595106  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:47.595139  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:47.595044  448432 retry.go:31] will retry after 448.637889ms: waiting for machine to come up
	I1030 19:45:48.045858  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:48.046358  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:48.046386  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:48.046315  448432 retry.go:31] will retry after 543.947609ms: waiting for machine to come up
	I1030 19:45:48.592474  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:48.592908  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:48.592937  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:48.592875  448432 retry.go:31] will retry after 744.106735ms: waiting for machine to come up
	I1030 19:45:49.338345  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:49.338833  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:49.338857  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:49.338795  448432 retry.go:31] will retry after 927.743369ms: waiting for machine to come up
	I1030 19:45:50.267844  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:50.268359  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:50.268390  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:50.268324  448432 retry.go:31] will retry after 829.540351ms: waiting for machine to come up
	I1030 19:45:51.099379  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:51.099863  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:51.099893  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:51.099820  448432 retry.go:31] will retry after 898.768304ms: waiting for machine to come up
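(The repeated "will retry after …: waiting for machine to come up" entries show the KVM driver polling for the guest's DHCP lease with a jittered, growing delay between attempts. The Go sketch below illustrates that pattern under assumed constants; it is not the actual minikube retry package.)

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP keeps calling check until it succeeds, sleeping a jittered,
// growing delay between attempts, in the spirit of the "will retry after"
// log lines above. All constants here are illustrative assumptions.
func waitForIP(check func() error, maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	delay := 200 * time.Millisecond
	for {
		if err := check(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for machine to come up")
		}
		// Randomize the wait a little and roughly double it each round, capped at ~2s.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 2*time.Second {
			delay *= 2
		}
	}
}

func main() {
	attempts := 0
	err := waitForIP(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("no IP assigned yet")
		}
		return nil
	}, 30*time.Second)
	fmt.Println("result:", err)
}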
	I1030 19:45:48.672337  446965 crio.go:462] duration metric: took 1.436158626s to copy over tarball
	I1030 19:45:48.672439  446965 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1030 19:45:50.859055  446965 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.186572123s)
	I1030 19:45:50.859101  446965 crio.go:469] duration metric: took 2.186725028s to extract the tarball
	I1030 19:45:50.859113  446965 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1030 19:45:50.896570  446965 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:45:50.946526  446965 crio.go:514] all images are preloaded for cri-o runtime.
	I1030 19:45:50.946558  446965 cache_images.go:84] Images are preloaded, skipping loading
	I1030 19:45:50.946567  446965 kubeadm.go:934] updating node { 192.168.61.235 8443 v1.31.2 crio true true} ...
	I1030 19:45:50.946668  446965 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-042402 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.235
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-042402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1030 19:45:50.946748  446965 ssh_runner.go:195] Run: crio config
	I1030 19:45:50.992305  446965 cni.go:84] Creating CNI manager for ""
	I1030 19:45:50.992337  446965 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:45:50.992348  446965 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1030 19:45:50.992374  446965 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.235 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-042402 NodeName:embed-certs-042402 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.235"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.235 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1030 19:45:50.992530  446965 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.235
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-042402"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.235"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.235"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1030 19:45:50.992616  446965 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1030 19:45:51.002586  446965 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 19:45:51.002668  446965 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1030 19:45:51.012058  446965 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1030 19:45:51.028645  446965 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 19:45:51.044912  446965 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
	I1030 19:45:51.060991  446965 ssh_runner.go:195] Run: grep 192.168.61.235	control-plane.minikube.internal$ /etc/hosts
	I1030 19:45:51.064808  446965 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.235	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:45:51.076790  446965 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:45:51.205861  446965 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:45:51.224763  446965 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402 for IP: 192.168.61.235
	I1030 19:45:51.224791  446965 certs.go:194] generating shared ca certs ...
	I1030 19:45:51.224812  446965 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:45:51.224986  446965 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 19:45:51.225046  446965 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 19:45:51.225059  446965 certs.go:256] generating profile certs ...
	I1030 19:45:51.225175  446965 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/client.key
	I1030 19:45:51.225256  446965 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/apiserver.key.f6f7691e
	I1030 19:45:51.225314  446965 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/proxy-client.key
	I1030 19:45:51.225469  446965 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem (1338 bytes)
	W1030 19:45:51.225518  446965 certs.go:480] ignoring /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144_empty.pem, impossibly tiny 0 bytes
	I1030 19:45:51.225540  446965 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 19:45:51.225574  446965 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 19:45:51.225612  446965 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 19:45:51.225651  446965 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 19:45:51.225714  446965 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:45:51.226718  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 19:45:51.278345  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 19:45:51.308707  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 19:45:51.349986  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 19:45:51.382176  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1030 19:45:51.426538  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1030 19:45:51.457131  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 19:45:51.481165  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1030 19:45:51.505285  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /usr/share/ca-certificates/3891442.pem (1708 bytes)
	I1030 19:45:51.533986  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 19:45:51.562660  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem --> /usr/share/ca-certificates/389144.pem (1338 bytes)
	I1030 19:45:51.586002  446965 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1030 19:45:51.602544  446965 ssh_runner.go:195] Run: openssl version
	I1030 19:45:51.608479  446965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 19:45:51.620650  446965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:45:51.625243  446965 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:45:51.625294  446965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:45:51.631138  446965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 19:45:51.643167  446965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/389144.pem && ln -fs /usr/share/ca-certificates/389144.pem /etc/ssl/certs/389144.pem"
	I1030 19:45:51.655128  446965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/389144.pem
	I1030 19:45:51.659528  446965 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 19:45:51.659600  446965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/389144.pem
	I1030 19:45:51.665370  446965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/389144.pem /etc/ssl/certs/51391683.0"
	I1030 19:45:51.676314  446965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3891442.pem && ln -fs /usr/share/ca-certificates/3891442.pem /etc/ssl/certs/3891442.pem"
	I1030 19:45:51.687386  446965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3891442.pem
	I1030 19:45:51.692170  446965 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 19:45:51.692228  446965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3891442.pem
	I1030 19:45:51.697897  446965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3891442.pem /etc/ssl/certs/3ec20f2e.0"
	I1030 19:45:51.709561  446965 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 19:45:51.715357  446965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1030 19:45:51.723291  446965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1030 19:45:51.731362  446965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1030 19:45:51.739724  446965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1030 19:45:51.747383  446965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1030 19:45:51.753472  446965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
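(Each `openssl x509 -noout -in <cert> -checkend 86400` run above asks whether the certificate remains valid for at least another 24 hours before the cluster restart proceeds. The Go sketch below performs the equivalent check; the certificate path is a placeholder and the snippet is an illustration, not minikube's implementation.)

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path remains valid for at
// least `window` from now - the same question `openssl x509 -checkend 86400`
// answers.
func validFor(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data found in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println("valid for another 24h:", ok, err)
}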
	I1030 19:45:51.759462  446965 kubeadm.go:392] StartCluster: {Name:embed-certs-042402 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-042402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.235 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:45:51.759605  446965 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1030 19:45:51.759702  446965 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:45:51.806863  446965 cri.go:89] found id: ""
	I1030 19:45:51.806956  446965 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1030 19:45:51.818195  446965 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1030 19:45:51.818218  446965 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1030 19:45:51.818274  446965 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1030 19:45:51.828762  446965 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1030 19:45:51.830149  446965 kubeconfig.go:125] found "embed-certs-042402" server: "https://192.168.61.235:8443"
	I1030 19:45:51.832269  446965 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1030 19:45:51.842769  446965 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.235
	I1030 19:45:51.842808  446965 kubeadm.go:1160] stopping kube-system containers ...
	I1030 19:45:51.842823  446965 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1030 19:45:51.842889  446965 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:45:51.887128  446965 cri.go:89] found id: ""
	I1030 19:45:51.887209  446965 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1030 19:45:51.911918  446965 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:45:51.922685  446965 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:45:51.922714  446965 kubeadm.go:157] found existing configuration files:
	
	I1030 19:45:51.922770  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 19:45:51.935548  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:45:51.935620  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:45:51.948635  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 19:45:51.961647  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:45:51.961745  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:45:51.975880  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 19:45:51.986852  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:45:51.986922  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:45:52.001290  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 19:45:52.015249  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:45:52.015333  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1030 19:45:52.026657  446965 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 19:45:52.038560  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:52.167697  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:50.408274  446887 pod_ready.go:103] pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace has status "Ready":"False"
	I1030 19:45:51.407818  446887 pod_ready.go:93] pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace has status "Ready":"True"
	I1030 19:45:51.407850  446887 pod_ready.go:82] duration metric: took 3.006520689s for pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:51.407865  446887 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:51.413452  446887 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-768989" in "kube-system" namespace has status "Ready":"True"
	I1030 19:45:51.413481  446887 pod_ready.go:82] duration metric: took 5.607077ms for pod "kube-apiserver-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:51.413495  446887 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:52.000678  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:52.001196  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:52.001235  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:52.001148  448432 retry.go:31] will retry after 1.750749509s: waiting for machine to come up
	I1030 19:45:53.753607  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:53.754013  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:53.754038  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:53.753950  448432 retry.go:31] will retry after 1.537350682s: waiting for machine to come up
	I1030 19:45:55.293910  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:55.294396  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:55.294427  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:55.294336  448432 retry.go:31] will retry after 2.151521323s: waiting for machine to come up
	I1030 19:45:53.477258  446965 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.309509141s)
	I1030 19:45:53.477309  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:53.696850  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:53.768419  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:53.863913  446965 api_server.go:52] waiting for apiserver process to appear ...
	I1030 19:45:53.864018  446965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:54.364235  446965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:54.864820  446965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:54.887333  446965 api_server.go:72] duration metric: took 1.023419155s to wait for apiserver process to appear ...
	I1030 19:45:54.887363  446965 api_server.go:88] waiting for apiserver healthz status ...
	I1030 19:45:54.887399  446965 api_server.go:253] Checking apiserver healthz at https://192.168.61.235:8443/healthz ...
	I1030 19:45:54.887929  446965 api_server.go:269] stopped: https://192.168.61.235:8443/healthz: Get "https://192.168.61.235:8443/healthz": dial tcp 192.168.61.235:8443: connect: connection refused
	I1030 19:45:55.388396  446965 api_server.go:253] Checking apiserver healthz at https://192.168.61.235:8443/healthz ...
	I1030 19:45:57.610916  446965 api_server.go:279] https://192.168.61.235:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1030 19:45:57.610951  446965 api_server.go:103] status: https://192.168.61.235:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1030 19:45:57.610972  446965 api_server.go:253] Checking apiserver healthz at https://192.168.61.235:8443/healthz ...
	I1030 19:45:57.745722  446965 api_server.go:279] https://192.168.61.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:45:57.745782  446965 api_server.go:103] status: https://192.168.61.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:45:57.887887  446965 api_server.go:253] Checking apiserver healthz at https://192.168.61.235:8443/healthz ...
	I1030 19:45:57.895296  446965 api_server.go:279] https://192.168.61.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:45:57.895352  446965 api_server.go:103] status: https://192.168.61.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:45:54.167893  446887 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace has status "Ready":"False"
	I1030 19:45:54.920921  446887 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace has status "Ready":"True"
	I1030 19:45:54.920954  446887 pod_ready.go:82] duration metric: took 3.507449937s for pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:54.920974  446887 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tsr5q" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:54.927123  446887 pod_ready.go:93] pod "kube-proxy-tsr5q" in "kube-system" namespace has status "Ready":"True"
	I1030 19:45:54.927150  446887 pod_ready.go:82] duration metric: took 6.167749ms for pod "kube-proxy-tsr5q" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:54.927164  446887 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:54.932513  446887 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-768989" in "kube-system" namespace has status "Ready":"True"
	I1030 19:45:54.932540  446887 pod_ready.go:82] duration metric: took 5.367579ms for pod "kube-scheduler-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:54.932557  446887 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:56.939174  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:45:58.388076  446965 api_server.go:253] Checking apiserver healthz at https://192.168.61.235:8443/healthz ...
	I1030 19:45:58.393192  446965 api_server.go:279] https://192.168.61.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:45:58.393235  446965 api_server.go:103] status: https://192.168.61.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:45:58.887710  446965 api_server.go:253] Checking apiserver healthz at https://192.168.61.235:8443/healthz ...
	I1030 19:45:58.891923  446965 api_server.go:279] https://192.168.61.235:8443/healthz returned 200:
	ok
	I1030 19:45:58.897783  446965 api_server.go:141] control plane version: v1.31.2
	I1030 19:45:58.897816  446965 api_server.go:131] duration metric: took 4.010443495s to wait for apiserver health ...
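(The healthz probes above keep hitting https://192.168.61.235:8443/healthz until the apiserver stops answering 403/500 and finally returns 200 "ok". The Go sketch below shows a minimal poller of that kind; the insecure TLS setting and timings are assumptions suitable only for a disposable test cluster, not minikube's api_server.go.)

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz GETs the apiserver /healthz endpoint until it answers 200 or the
// timeout expires, mirroring the checks logged above.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Assumption: skip certificate verification against a throwaway test cluster.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("healthz unreachable: %v, retrying\n", err)
		} else {
			io.Copy(io.Discard, resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	if err := pollHealthz("https://192.168.61.235:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}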
	I1030 19:45:58.897836  446965 cni.go:84] Creating CNI manager for ""
	I1030 19:45:58.897844  446965 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:45:58.899669  446965 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1030 19:45:57.447894  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:57.448365  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:57.448392  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:57.448320  448432 retry.go:31] will retry after 2.439938206s: waiting for machine to come up
	I1030 19:45:59.889685  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:59.890166  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:59.890205  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:59.890113  448432 retry.go:31] will retry after 3.836080386s: waiting for machine to come up
	I1030 19:45:58.901122  446965 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1030 19:45:58.924765  446965 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1030 19:45:58.946342  446965 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 19:45:58.956378  446965 system_pods.go:59] 8 kube-system pods found
	I1030 19:45:58.956412  446965 system_pods.go:61] "coredns-7c65d6cfc9-tv6kc" [d752975e-e126-4d22-9b35-b9f57d1170b1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1030 19:45:58.956419  446965 system_pods.go:61] "etcd-embed-certs-042402" [fa9b90f6-82b2-448a-ad86-9cbba45a4c2d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1030 19:45:58.956427  446965 system_pods.go:61] "kube-apiserver-embed-certs-042402" [48af3136-74d9-4062-bb9a-e48dafd311a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1030 19:45:58.956436  446965 system_pods.go:61] "kube-controller-manager-embed-certs-042402" [0ae60724-6634-464a-af2f-e08148fb3eaf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1030 19:45:58.956445  446965 system_pods.go:61] "kube-proxy-qwjr9" [309ee447-8d52-49e7-a805-2b7c0af2a3bc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1030 19:45:58.956450  446965 system_pods.go:61] "kube-scheduler-embed-certs-042402" [f82ff11e-8305-4d05-b370-fd89693e5ad1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1030 19:45:58.956454  446965 system_pods.go:61] "metrics-server-6867b74b74-4x9t6" [1160789d-9462-4d1d-9f84-5ded8394bd4e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:45:58.956459  446965 system_pods.go:61] "storage-provisioner" [d1559440-b14a-4c2a-a52e-ba39afb01f94] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1030 19:45:58.956465  446965 system_pods.go:74] duration metric: took 10.103898ms to wait for pod list to return data ...
	I1030 19:45:58.956473  446965 node_conditions.go:102] verifying NodePressure condition ...
	I1030 19:45:58.960150  446965 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 19:45:58.960182  446965 node_conditions.go:123] node cpu capacity is 2
	I1030 19:45:58.960195  446965 node_conditions.go:105] duration metric: took 3.712942ms to run NodePressure ...
	I1030 19:45:58.960219  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:59.284558  446965 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1030 19:45:59.289073  446965 kubeadm.go:739] kubelet initialised
	I1030 19:45:59.289095  446965 kubeadm.go:740] duration metric: took 4.508144ms waiting for restarted kubelet to initialise ...
	I1030 19:45:59.289104  446965 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:45:59.293538  446965 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-tv6kc" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:01.298780  446965 pod_ready.go:103] pod "coredns-7c65d6cfc9-tv6kc" in "kube-system" namespace has status "Ready":"False"
	I1030 19:45:58.940597  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:01.439118  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:05.011617  446736 start.go:364] duration metric: took 52.494265895s to acquireMachinesLock for "no-preload-960512"
	I1030 19:46:05.011674  446736 start.go:96] Skipping create...Using existing machine configuration
	I1030 19:46:05.011683  446736 fix.go:54] fixHost starting: 
	I1030 19:46:05.012022  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:05.012087  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:05.029067  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46027
	I1030 19:46:05.029484  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:05.030010  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:05.030039  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:05.030461  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:05.030690  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:05.030854  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetState
	I1030 19:46:05.032380  446736 fix.go:112] recreateIfNeeded on no-preload-960512: state=Stopped err=<nil>
	I1030 19:46:05.032408  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	W1030 19:46:05.032566  446736 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 19:46:05.035693  446736 out.go:177] * Restarting existing kvm2 VM for "no-preload-960512" ...
	I1030 19:46:03.727617  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.728028  447486 main.go:141] libmachine: (old-k8s-version-516975) Found IP for machine: 192.168.50.250
	I1030 19:46:03.728046  447486 main.go:141] libmachine: (old-k8s-version-516975) Reserving static IP address...
	I1030 19:46:03.728062  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has current primary IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.728565  447486 main.go:141] libmachine: (old-k8s-version-516975) Reserved static IP address: 192.168.50.250
	I1030 19:46:03.728600  447486 main.go:141] libmachine: (old-k8s-version-516975) Waiting for SSH to be available...
	I1030 19:46:03.728616  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "old-k8s-version-516975", mac: "52:54:00:46:32:46", ip: "192.168.50.250"} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:03.728639  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | skip adding static IP to network mk-old-k8s-version-516975 - found existing host DHCP lease matching {name: "old-k8s-version-516975", mac: "52:54:00:46:32:46", ip: "192.168.50.250"}
	I1030 19:46:03.728657  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | Getting to WaitForSSH function...
	I1030 19:46:03.730754  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.731085  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:03.731121  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.731145  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | Using SSH client type: external
	I1030 19:46:03.731212  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa (-rw-------)
	I1030 19:46:03.731252  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.250 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 19:46:03.731275  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | About to run SSH command:
	I1030 19:46:03.731289  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | exit 0
	I1030 19:46:03.862423  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | SSH cmd err, output: <nil>: 
	I1030 19:46:03.862832  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetConfigRaw
	I1030 19:46:03.863519  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetIP
	I1030 19:46:03.865977  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.866262  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:03.866297  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.866512  447486 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/config.json ...
	I1030 19:46:03.866755  447486 machine.go:93] provisionDockerMachine start ...
	I1030 19:46:03.866783  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:46:03.866994  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:03.869079  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.869384  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:03.869410  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.869603  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:03.869787  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:03.869949  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:03.870102  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:03.870243  447486 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:03.870468  447486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I1030 19:46:03.870481  447486 main.go:141] libmachine: About to run SSH command:
	hostname
	I1030 19:46:03.982986  447486 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1030 19:46:03.983018  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetMachineName
	I1030 19:46:03.983285  447486 buildroot.go:166] provisioning hostname "old-k8s-version-516975"
	I1030 19:46:03.983319  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetMachineName
	I1030 19:46:03.983502  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:03.986203  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.986576  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:03.986615  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.986765  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:03.986983  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:03.987126  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:03.987258  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:03.987419  447486 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:03.987696  447486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I1030 19:46:03.987719  447486 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-516975 && echo "old-k8s-version-516975" | sudo tee /etc/hostname
	I1030 19:46:04.112692  447486 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-516975
	
	I1030 19:46:04.112719  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:04.115948  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.116283  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.116309  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.116482  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:04.116667  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.116842  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.116966  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:04.117104  447486 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:04.117275  447486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I1030 19:46:04.117290  447486 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-516975' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-516975/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-516975' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 19:46:04.235988  447486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 19:46:04.236032  447486 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 19:46:04.236098  447486 buildroot.go:174] setting up certificates
	I1030 19:46:04.236111  447486 provision.go:84] configureAuth start
	I1030 19:46:04.236124  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetMachineName
	I1030 19:46:04.236500  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetIP
	I1030 19:46:04.239328  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.239707  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.239739  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.240009  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:04.242118  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.242440  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.242505  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.242683  447486 provision.go:143] copyHostCerts
	I1030 19:46:04.242766  447486 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem, removing ...
	I1030 19:46:04.242787  447486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 19:46:04.242847  447486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 19:46:04.242972  447486 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem, removing ...
	I1030 19:46:04.242986  447486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 19:46:04.243011  447486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 19:46:04.243072  447486 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem, removing ...
	I1030 19:46:04.243079  447486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 19:46:04.243095  447486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 19:46:04.243153  447486 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-516975 san=[127.0.0.1 192.168.50.250 localhost minikube old-k8s-version-516975]
	I1030 19:46:04.355003  447486 provision.go:177] copyRemoteCerts
	I1030 19:46:04.355061  447486 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 19:46:04.355092  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:04.357788  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.358153  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.358191  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.358397  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:04.358630  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.358809  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:04.358970  447486 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa Username:docker}
	I1030 19:46:04.446614  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 19:46:04.473708  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1030 19:46:04.497721  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1030 19:46:04.521806  447486 provision.go:87] duration metric: took 285.682041ms to configureAuth
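The configureAuth step above regenerates the machine's server certificate from the shared minikube CA, with org jenkins.old-k8s-version-516975 and SANs 127.0.0.1, 192.168.50.250, localhost, minikube and old-k8s-version-516975, then copies ca.pem, server.pem and server-key.pem into /etc/docker on the node. A minimal openssl sketch of producing an equivalent server certificate (this is not minikube's actual Go code path; file names follow the paths logged above):

	# Sketch only: issue a server cert signed by the minikube CA with the same SANs.
	openssl req -new -newkey rsa:2048 -nodes \
	  -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.old-k8s-version-516975"
	openssl x509 -req -in server.csr \
	  -CA certs/ca.pem -CAkey certs/ca-key.pem -CAcreateserial \
	  -out server.pem -days 365 \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.50.250,DNS:localhost,DNS:minikube,DNS:old-k8s-version-516975')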
	I1030 19:46:04.521836  447486 buildroot.go:189] setting minikube options for container-runtime
	I1030 19:46:04.521999  447486 config.go:182] Loaded profile config "old-k8s-version-516975": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1030 19:46:04.522072  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:04.524616  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.525034  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.525065  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.525282  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:04.525452  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.525616  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.525745  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:04.525916  447486 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:04.526129  447486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I1030 19:46:04.526145  447486 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 19:46:04.766663  447486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 19:46:04.766697  447486 machine.go:96] duration metric: took 899.924211ms to provisionDockerMachine
	I1030 19:46:04.766709  447486 start.go:293] postStartSetup for "old-k8s-version-516975" (driver="kvm2")
	I1030 19:46:04.766720  447486 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 19:46:04.766745  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:46:04.767081  447486 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 19:46:04.767114  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:04.769995  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.770401  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.770428  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.770580  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:04.770762  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.770973  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:04.771132  447486 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa Username:docker}
	I1030 19:46:04.858006  447486 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 19:46:04.862295  447486 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 19:46:04.862324  447486 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 19:46:04.862387  447486 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 19:46:04.862475  447486 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 19:46:04.862612  447486 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 19:46:04.872541  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:46:04.896306  447486 start.go:296] duration metric: took 129.577956ms for postStartSetup
	I1030 19:46:04.896360  447486 fix.go:56] duration metric: took 19.265077419s for fixHost
	I1030 19:46:04.896383  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:04.899009  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.899397  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.899429  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.899538  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:04.899739  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.899906  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.900101  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:04.900271  447486 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:04.900510  447486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I1030 19:46:04.900525  447486 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 19:46:05.011439  447486 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730317564.967936408
	
	I1030 19:46:05.011464  447486 fix.go:216] guest clock: 1730317564.967936408
	I1030 19:46:05.011472  447486 fix.go:229] Guest: 2024-10-30 19:46:04.967936408 +0000 UTC Remote: 2024-10-30 19:46:04.896364572 +0000 UTC m=+233.135558535 (delta=71.571836ms)
	I1030 19:46:05.011516  447486 fix.go:200] guest clock delta is within tolerance: 71.571836ms
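The fix step reads the guest clock over SSH with date +%s.%N and compares it to the host clock; the 71.57ms delta is within tolerance, so no resync is needed. A rough shell equivalent of that check (the key path is the one logged above; the 1s tolerance here is illustrative, not minikube's actual threshold):

	# Sketch: measure host/guest clock skew over SSH and fail if it exceeds 1s.
	host_now=$(date +%s.%N)
	guest_now=$(ssh -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa docker@192.168.50.250 'date +%s.%N')
	awk -v h="$host_now" -v g="$guest_now" 'BEGIN { d = h - g; if (d < 0) d = -d; printf "delta=%.3fs\n", d; exit !(d < 1) }'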
	I1030 19:46:05.011525  447486 start.go:83] releasing machines lock for "old-k8s-version-516975", held for 19.380292064s
	I1030 19:46:05.011552  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:46:05.011853  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetIP
	I1030 19:46:05.014722  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:05.015072  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:05.015100  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:05.015225  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:46:05.015808  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:46:05.016002  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:46:05.016107  447486 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 19:46:05.016155  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:05.016265  447486 ssh_runner.go:195] Run: cat /version.json
	I1030 19:46:05.016296  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:05.018976  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:05.019189  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:05.019326  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:05.019370  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:05.019517  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:05.019604  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:05.019632  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:05.019708  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:05.019830  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:05.019918  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:05.019995  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:05.020077  447486 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa Username:docker}
	I1030 19:46:05.020157  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:05.020295  447486 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa Username:docker}
	I1030 19:46:05.100852  447486 ssh_runner.go:195] Run: systemctl --version
	I1030 19:46:05.127673  447486 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 19:46:05.279889  447486 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 19:46:05.285900  447486 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 19:46:05.285976  447486 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 19:46:05.304763  447486 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 19:46:05.304791  447486 start.go:495] detecting cgroup driver to use...
	I1030 19:46:05.304862  447486 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 19:46:05.325729  447486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 19:46:05.343047  447486 docker.go:217] disabling cri-docker service (if available) ...
	I1030 19:46:05.343128  447486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 19:46:05.358748  447486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 19:46:05.374769  447486 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 19:46:05.492589  447486 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 19:46:05.639943  447486 docker.go:233] disabling docker service ...
	I1030 19:46:05.640039  447486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 19:46:05.655449  447486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 19:46:05.669688  447486 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 19:46:05.814658  447486 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 19:46:05.957944  447486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 19:46:05.972122  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 19:46:05.990577  447486 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1030 19:46:05.990653  447486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:06.000834  447486 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 19:46:06.000907  447486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:06.011678  447486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:06.022051  447486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:06.032515  447486 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 19:46:06.043296  447486 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 19:46:06.053123  447486 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 19:46:06.053170  447486 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 19:46:06.067625  447486 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 19:46:06.081306  447486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:46:06.221181  447486 ssh_runner.go:195] Run: sudo systemctl restart crio
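The sequence above points CRI-O at the registry.k8s.io/pause:3.2 pause image, switches it to the cgroupfs cgroup manager with conmon_cgroup = "pod", removes /etc/cni/net.mk, loads br_netfilter, enables IPv4 forwarding and restarts the crio service. A quick verification sketch (not part of the original run) for the drop-in those sed commands edit:

	# Sketch: confirm the CRI-O drop-in now matches the sed edits above.
	grep -E '^\s*(pause_image|cgroup_manager|conmon_cgroup)\s*=' /etc/crio/crio.conf.d/02-crio.conf
	# Expected per the commands in the log:
	#   pause_image = "registry.k8s.io/pause:3.2"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	sudo systemctl is-active crio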
	I1030 19:46:06.321848  447486 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 19:46:06.321926  447486 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 19:46:06.329697  447486 start.go:563] Will wait 60s for crictl version
	I1030 19:46:06.329757  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:06.333980  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 19:46:06.381198  447486 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1030 19:46:06.381290  447486 ssh_runner.go:195] Run: crio --version
	I1030 19:46:06.410365  447486 ssh_runner.go:195] Run: crio --version
	I1030 19:46:06.442329  447486 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1030 19:46:06.443471  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetIP
	I1030 19:46:06.446233  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:06.446621  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:06.446653  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:06.446822  447486 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1030 19:46:06.451216  447486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:46:06.464477  447486 kubeadm.go:883] updating cluster {Name:old-k8s-version-516975 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-516975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1030 19:46:06.464607  447486 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1030 19:46:06.464668  447486 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:46:06.513123  447486 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1030 19:46:06.513205  447486 ssh_runner.go:195] Run: which lz4
	I1030 19:46:06.517252  447486 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1030 19:46:06.521358  447486 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1030 19:46:06.521384  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1030 19:46:03.300213  446965 pod_ready.go:103] pod "coredns-7c65d6cfc9-tv6kc" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:05.301139  446965 pod_ready.go:103] pod "coredns-7c65d6cfc9-tv6kc" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:07.303015  446965 pod_ready.go:103] pod "coredns-7c65d6cfc9-tv6kc" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:03.939240  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:05.940212  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:07.942062  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:05.037179  446736 main.go:141] libmachine: (no-preload-960512) Calling .Start
	I1030 19:46:05.037388  446736 main.go:141] libmachine: (no-preload-960512) Ensuring networks are active...
	I1030 19:46:05.038384  446736 main.go:141] libmachine: (no-preload-960512) Ensuring network default is active
	I1030 19:46:05.038793  446736 main.go:141] libmachine: (no-preload-960512) Ensuring network mk-no-preload-960512 is active
	I1030 19:46:05.039208  446736 main.go:141] libmachine: (no-preload-960512) Getting domain xml...
	I1030 19:46:05.040083  446736 main.go:141] libmachine: (no-preload-960512) Creating domain...
	I1030 19:46:06.366674  446736 main.go:141] libmachine: (no-preload-960512) Waiting to get IP...
	I1030 19:46:06.367568  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:06.368016  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:06.368083  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:06.367984  448568 retry.go:31] will retry after 216.900908ms: waiting for machine to come up
	I1030 19:46:06.586638  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:06.587182  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:06.587213  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:06.587121  448568 retry.go:31] will retry after 319.082011ms: waiting for machine to come up
	I1030 19:46:06.907974  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:06.908650  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:06.908683  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:06.908581  448568 retry.go:31] will retry after 418.339306ms: waiting for machine to come up
	I1030 19:46:07.328241  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:07.329035  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:07.329065  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:07.328988  448568 retry.go:31] will retry after 523.624135ms: waiting for machine to come up
	I1030 19:46:07.855234  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:07.855944  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:07.855970  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:07.855849  448568 retry.go:31] will retry after 556.06146ms: waiting for machine to come up
	I1030 19:46:08.413474  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:08.414059  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:08.414098  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:08.413947  448568 retry.go:31] will retry after 713.043389ms: waiting for machine to come up
	I1030 19:46:09.128274  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:09.128737  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:09.128762  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:09.128689  448568 retry.go:31] will retry after 1.096111238s: waiting for machine to come up
	I1030 19:46:08.144772  447486 crio.go:462] duration metric: took 1.627547543s to copy over tarball
	I1030 19:46:08.144845  447486 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1030 19:46:11.104192  447486 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.959302647s)
	I1030 19:46:11.104228  447486 crio.go:469] duration metric: took 2.959426051s to extract the tarball
	I1030 19:46:11.104240  447486 ssh_runner.go:146] rm: /preloaded.tar.lz4
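Since the CRI-O image store had none of the expected images, the ~473 MB preload tarball was copied to the node and unpacked into /var in roughly 3 seconds before being deleted. To peek inside such a preload tarball on the workstation (a sketch, assuming lz4 and GNU tar are installed):

	# Sketch: list the first entries of the preload tarball without extracting it.
	lz4 -dc preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 | tar -tvf - | head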
	I1030 19:46:11.146584  447486 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:46:11.183766  447486 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1030 19:46:11.183797  447486 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1030 19:46:11.183889  447486 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:11.183917  447486 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.183932  447486 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:11.183968  447486 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.184087  447486 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.183972  447486 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1030 19:46:11.183969  447486 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.183928  447486 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:11.185976  447486 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:11.186001  447486 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1030 19:46:11.186043  447486 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.186053  447486 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.186046  447486 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:11.185977  447486 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:11.186108  447486 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.186150  447486 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.348134  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.391191  447486 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1030 19:46:11.391327  447486 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.391399  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.396693  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.400062  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.406656  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1030 19:46:11.410534  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.410590  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.441896  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:11.460400  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:11.482465  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.554431  447486 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1030 19:46:11.554480  447486 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.554549  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.610376  447486 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1030 19:46:11.610424  447486 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1030 19:46:11.610471  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.616060  447486 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1030 19:46:11.616104  447486 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.616153  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.616177  447486 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1030 19:46:11.616217  447486 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.616282  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.617473  447486 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1030 19:46:11.617502  447486 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:11.617535  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.652124  447486 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1030 19:46:11.652185  447486 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:11.652228  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.652233  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.652237  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.652331  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1030 19:46:11.652376  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.652433  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.652483  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:11.798844  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1030 19:46:11.798859  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1030 19:46:11.798873  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.798949  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:11.799075  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.799179  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.799182  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:08.303450  446965 pod_ready.go:93] pod "coredns-7c65d6cfc9-tv6kc" in "kube-system" namespace has status "Ready":"True"
	I1030 19:46:08.303482  446965 pod_ready.go:82] duration metric: took 9.009918893s for pod "coredns-7c65d6cfc9-tv6kc" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:08.303498  446965 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:08.312186  446965 pod_ready.go:93] pod "etcd-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:46:08.312213  446965 pod_ready.go:82] duration metric: took 8.706192ms for pod "etcd-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:08.312228  446965 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:10.320161  446965 pod_ready.go:103] pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:10.439107  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:12.439663  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:10.226842  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:10.227315  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:10.227346  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:10.227261  448568 retry.go:31] will retry after 1.165335625s: waiting for machine to come up
	I1030 19:46:11.394231  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:11.394817  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:11.394851  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:11.394763  448568 retry.go:31] will retry after 1.292571083s: waiting for machine to come up
	I1030 19:46:12.688486  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:12.688919  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:12.688965  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:12.688862  448568 retry.go:31] will retry after 1.97645889s: waiting for machine to come up
	I1030 19:46:14.667783  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:14.668245  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:14.668278  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:14.668200  448568 retry.go:31] will retry after 2.020488863s: waiting for machine to come up
	I1030 19:46:11.942258  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:11.942265  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1030 19:46:11.942365  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.942352  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.942421  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.946933  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:12.064951  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1030 19:46:12.067930  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:12.067990  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1030 19:46:12.068057  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1030 19:46:12.068078  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1030 19:46:12.083122  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1030 19:46:12.107265  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1030 19:46:13.402970  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:13.551979  447486 cache_images.go:92] duration metric: took 2.368158873s to LoadCachedImages
	W1030 19:46:13.552080  447486 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
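LoadCachedImages removed the stale images from the runtime via crictl rmi and then tried to push per-image tarballs from the local cache, but aborted because the coredns_1.7.0 cache file is missing on the host. A quick way to see which cached image tarballs actually exist (verification sketch using the cache path from the log):

	# Sketch: list the per-image cache tarballs referenced by the warning above.
	ls -lh /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/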
	I1030 19:46:13.552096  447486 kubeadm.go:934] updating node { 192.168.50.250 8443 v1.20.0 crio true true} ...
	I1030 19:46:13.552211  447486 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-516975 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-516975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
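The kubelet drop-in rendered above (ExecStart with the v1.20.0 binary, the CRI-O socket and node IP 192.168.50.250) is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf by the 430-byte scp further down. To inspect the effective unit on the node after the daemon-reload (a verification sketch, not part of the run):

	# Sketch: show the kubelet unit together with minikube's 10-kubeadm.conf drop-in.
	sudo systemctl cat kubelet
	sudo systemctl status kubelet --no-pager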
	I1030 19:46:13.552276  447486 ssh_runner.go:195] Run: crio config
	I1030 19:46:13.605982  447486 cni.go:84] Creating CNI manager for ""
	I1030 19:46:13.606008  447486 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:46:13.606020  447486 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1030 19:46:13.606049  447486 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.250 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-516975 NodeName:old-k8s-version-516975 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.250"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.250 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1030 19:46:13.606223  447486 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.250
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-516975"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.250
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.250"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1030 19:46:13.606299  447486 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1030 19:46:13.616954  447486 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 19:46:13.617034  447486 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1030 19:46:13.627440  447486 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1030 19:46:13.644821  447486 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 19:46:13.662070  447486 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1030 19:46:13.679198  447486 ssh_runner.go:195] Run: grep 192.168.50.250	control-plane.minikube.internal$ /etc/hosts
	I1030 19:46:13.682992  447486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.250	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:46:13.697879  447486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:46:13.819975  447486 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:46:13.838669  447486 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975 for IP: 192.168.50.250
	I1030 19:46:13.838695  447486 certs.go:194] generating shared ca certs ...
	I1030 19:46:13.838716  447486 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:46:13.838888  447486 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 19:46:13.838946  447486 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 19:46:13.838962  447486 certs.go:256] generating profile certs ...
	I1030 19:46:13.839064  447486 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/client.key
	I1030 19:46:13.839149  447486 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/apiserver.key.685bdf3e
	I1030 19:46:13.839208  447486 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/proxy-client.key
	I1030 19:46:13.839375  447486 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem (1338 bytes)
	W1030 19:46:13.839429  447486 certs.go:480] ignoring /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144_empty.pem, impossibly tiny 0 bytes
	I1030 19:46:13.839442  447486 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 19:46:13.839476  447486 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 19:46:13.839509  447486 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 19:46:13.839545  447486 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 19:46:13.839609  447486 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:46:13.840381  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 19:46:13.868947  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 19:46:13.923848  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 19:46:13.973167  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 19:46:14.009333  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1030 19:46:14.042397  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1030 19:46:14.073927  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 19:46:14.109209  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1030 19:46:14.135708  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /usr/share/ca-certificates/3891442.pem (1708 bytes)
	I1030 19:46:14.162145  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 19:46:14.186176  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem --> /usr/share/ca-certificates/389144.pem (1338 bytes)
	I1030 19:46:14.210362  447486 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1030 19:46:14.228727  447486 ssh_runner.go:195] Run: openssl version
	I1030 19:46:14.234436  447486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/389144.pem && ln -fs /usr/share/ca-certificates/389144.pem /etc/ssl/certs/389144.pem"
	I1030 19:46:14.245497  447486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/389144.pem
	I1030 19:46:14.250026  447486 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 19:46:14.250077  447486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/389144.pem
	I1030 19:46:14.255727  447486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/389144.pem /etc/ssl/certs/51391683.0"
	I1030 19:46:14.266674  447486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3891442.pem && ln -fs /usr/share/ca-certificates/3891442.pem /etc/ssl/certs/3891442.pem"
	I1030 19:46:14.277813  447486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3891442.pem
	I1030 19:46:14.282378  447486 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 19:46:14.282435  447486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3891442.pem
	I1030 19:46:14.288338  447486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3891442.pem /etc/ssl/certs/3ec20f2e.0"
	I1030 19:46:14.300057  447486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 19:46:14.312295  447486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:46:14.317488  447486 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:46:14.317555  447486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:46:14.323518  447486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 19:46:14.335182  447486 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 19:46:14.339998  447486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1030 19:46:14.346145  447486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1030 19:46:14.352474  447486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1030 19:46:14.358687  447486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1030 19:46:14.364275  447486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1030 19:46:14.370038  447486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
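
The `openssl x509 -checkend 86400` probes above ask whether each control-plane certificate will still be valid 24 hours from now; a non-zero exit triggers regeneration. A minimal standalone Go sketch of the same check, assuming the certificate file (here the apiserver-kubelet-client cert, as in the log) is readable locally rather than over SSH:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// Path taken from the log above; adjust for a local test.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Equivalent of `openssl x509 -checkend 86400`: will the cert outlive the next 24h?
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate expires within 24h")
			os.Exit(1)
		}
		fmt.Println("certificate valid for at least another 24h")
	}
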
	I1030 19:46:14.376051  447486 kubeadm.go:392] StartCluster: {Name:old-k8s-version-516975 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-516975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:46:14.376144  447486 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1030 19:46:14.376187  447486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:46:14.423395  447486 cri.go:89] found id: ""
	I1030 19:46:14.423477  447486 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1030 19:46:14.435404  447486 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1030 19:46:14.435485  447486 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1030 19:46:14.435558  447486 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1030 19:46:14.448035  447486 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1030 19:46:14.448911  447486 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-516975" does not appear in /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 19:46:14.449557  447486 kubeconfig.go:62] /home/jenkins/minikube-integration/19883-381834/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-516975" cluster setting kubeconfig missing "old-k8s-version-516975" context setting]
	I1030 19:46:14.450419  447486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/kubeconfig: {Name:mkad9ac9beb9742fc1a420e6d4412db8b65962e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:46:14.452252  447486 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1030 19:46:14.462634  447486 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.250
	I1030 19:46:14.462676  447486 kubeadm.go:1160] stopping kube-system containers ...
	I1030 19:46:14.462693  447486 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1030 19:46:14.462750  447486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:46:14.508286  447486 cri.go:89] found id: ""
	I1030 19:46:14.508380  447486 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1030 19:46:14.527996  447486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:46:14.539011  447486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:46:14.539037  447486 kubeadm.go:157] found existing configuration files:
	
	I1030 19:46:14.539094  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 19:46:14.550159  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:46:14.550243  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:46:14.561350  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 19:46:14.571353  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:46:14.571430  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:46:14.584480  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 19:46:14.598307  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:46:14.598400  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:46:14.611632  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 19:46:14.621644  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:46:14.621705  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1030 19:46:14.632161  447486 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 19:46:14.642295  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:14.783130  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:15.694839  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:15.923329  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:16.052124  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:16.143607  447486 api_server.go:52] waiting for apiserver process to appear ...
	I1030 19:46:16.143710  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:16.643943  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
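
The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs above and below are minikube polling, roughly every 500ms, until an apiserver process shows up after `kubeadm init phase etcd local`. A minimal sketch of that wait loop, assuming the command is run locally via os/exec (the real code issues it over SSH through ssh_runner):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServer polls pgrep until a kube-apiserver process appears or the deadline passes.
	func waitForAPIServer(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// pgrep exits 0 only when at least one process matches the pattern.
			if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
	}

	func main() {
		if err := waitForAPIServer(4 * time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver process is up")
	}
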
	I1030 19:46:13.245727  446965 pod_ready.go:103] pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:13.702440  446965 pod_ready.go:93] pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:46:13.702472  446965 pod_ready.go:82] duration metric: took 5.390235543s for pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:13.702497  446965 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:13.948519  446965 pod_ready.go:93] pod "kube-controller-manager-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:46:13.948549  446965 pod_ready.go:82] duration metric: took 246.042214ms for pod "kube-controller-manager-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:13.948565  446965 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-qwjr9" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:13.958077  446965 pod_ready.go:93] pod "kube-proxy-qwjr9" in "kube-system" namespace has status "Ready":"True"
	I1030 19:46:13.958108  446965 pod_ready.go:82] duration metric: took 9.534813ms for pod "kube-proxy-qwjr9" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:13.958122  446965 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:13.974906  446965 pod_ready.go:93] pod "kube-scheduler-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:46:13.974931  446965 pod_ready.go:82] duration metric: took 16.800547ms for pod "kube-scheduler-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:13.974944  446965 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:15.982433  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:17.983261  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:14.440176  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:16.939769  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:16.690435  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:16.690908  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:16.690997  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:16.690904  448568 retry.go:31] will retry after 2.729556206s: waiting for machine to come up
	I1030 19:46:19.423740  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:19.424246  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:19.424271  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:19.424195  448568 retry.go:31] will retry after 2.822049517s: waiting for machine to come up
	I1030 19:46:17.144678  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:17.644772  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:18.144037  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:18.644437  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:19.144273  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:19.643801  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:20.144200  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:20.644764  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:21.143898  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:21.643960  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:20.481213  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:22.981619  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:19.438946  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:21.938706  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:22.247395  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:22.247840  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:22.247869  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:22.247813  448568 retry.go:31] will retry after 5.243633747s: waiting for machine to come up
	I1030 19:46:22.144625  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:22.644446  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:23.144207  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:23.644001  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:24.143787  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:24.644166  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:25.144397  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:25.644654  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:26.144214  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:26.644275  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:25.482032  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:27.981111  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:23.940402  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:26.439369  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:27.494630  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.495107  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has current primary IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.495146  446736 main.go:141] libmachine: (no-preload-960512) Found IP for machine: 192.168.72.132
	I1030 19:46:27.495159  446736 main.go:141] libmachine: (no-preload-960512) Reserving static IP address...
	I1030 19:46:27.495588  446736 main.go:141] libmachine: (no-preload-960512) Reserved static IP address: 192.168.72.132
	I1030 19:46:27.495612  446736 main.go:141] libmachine: (no-preload-960512) Waiting for SSH to be available...
	I1030 19:46:27.495634  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "no-preload-960512", mac: "52:54:00:71:5b:b2", ip: "192.168.72.132"} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:27.495664  446736 main.go:141] libmachine: (no-preload-960512) DBG | skip adding static IP to network mk-no-preload-960512 - found existing host DHCP lease matching {name: "no-preload-960512", mac: "52:54:00:71:5b:b2", ip: "192.168.72.132"}
	I1030 19:46:27.495678  446736 main.go:141] libmachine: (no-preload-960512) DBG | Getting to WaitForSSH function...
	I1030 19:46:27.497679  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.498051  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:27.498083  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.498231  446736 main.go:141] libmachine: (no-preload-960512) DBG | Using SSH client type: external
	I1030 19:46:27.498273  446736 main.go:141] libmachine: (no-preload-960512) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa (-rw-------)
	I1030 19:46:27.498316  446736 main.go:141] libmachine: (no-preload-960512) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.132 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 19:46:27.498344  446736 main.go:141] libmachine: (no-preload-960512) DBG | About to run SSH command:
	I1030 19:46:27.498355  446736 main.go:141] libmachine: (no-preload-960512) DBG | exit 0
	I1030 19:46:27.626476  446736 main.go:141] libmachine: (no-preload-960512) DBG | SSH cmd err, output: <nil>: 
	I1030 19:46:27.626850  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetConfigRaw
	I1030 19:46:27.627519  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetIP
	I1030 19:46:27.629913  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.630288  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:27.630326  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.630561  446736 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/config.json ...
	I1030 19:46:27.630778  446736 machine.go:93] provisionDockerMachine start ...
	I1030 19:46:27.630801  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:27.631021  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:27.633457  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.633849  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:27.633880  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.634032  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:27.634200  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:27.634393  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:27.634564  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:27.634741  446736 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:27.634940  446736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I1030 19:46:27.634952  446736 main.go:141] libmachine: About to run SSH command:
	hostname
	I1030 19:46:27.743135  446736 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1030 19:46:27.743167  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetMachineName
	I1030 19:46:27.743475  446736 buildroot.go:166] provisioning hostname "no-preload-960512"
	I1030 19:46:27.743516  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetMachineName
	I1030 19:46:27.743717  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:27.746369  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.746726  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:27.746758  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.746928  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:27.747114  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:27.747266  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:27.747380  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:27.747509  446736 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:27.747740  446736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I1030 19:46:27.747759  446736 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-960512 && echo "no-preload-960512" | sudo tee /etc/hostname
	I1030 19:46:27.872871  446736 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-960512
	
	I1030 19:46:27.872899  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:27.875533  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.875867  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:27.875908  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.876072  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:27.876274  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:27.876546  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:27.876690  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:27.876851  446736 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:27.877082  446736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I1030 19:46:27.877099  446736 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-960512' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-960512/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-960512' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 19:46:27.999551  446736 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 19:46:27.999617  446736 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 19:46:27.999654  446736 buildroot.go:174] setting up certificates
	I1030 19:46:27.999667  446736 provision.go:84] configureAuth start
	I1030 19:46:27.999689  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetMachineName
	I1030 19:46:27.999998  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetIP
	I1030 19:46:28.002874  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.003285  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.003317  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.003474  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:28.005987  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.006376  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.006418  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.006545  446736 provision.go:143] copyHostCerts
	I1030 19:46:28.006620  446736 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem, removing ...
	I1030 19:46:28.006639  446736 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 19:46:28.006707  446736 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 19:46:28.006846  446736 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem, removing ...
	I1030 19:46:28.006859  446736 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 19:46:28.006898  446736 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 19:46:28.006983  446736 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem, removing ...
	I1030 19:46:28.006993  446736 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 19:46:28.007023  446736 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 19:46:28.007102  446736 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.no-preload-960512 san=[127.0.0.1 192.168.72.132 localhost minikube no-preload-960512]
	I1030 19:46:28.317424  446736 provision.go:177] copyRemoteCerts
	I1030 19:46:28.317502  446736 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 19:46:28.317537  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:28.320089  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.320387  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.320419  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.320564  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:28.320776  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.320963  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:28.321116  446736 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa Username:docker}
	I1030 19:46:28.409344  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1030 19:46:28.434874  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 19:46:28.459903  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1030 19:46:28.486949  446736 provision.go:87] duration metric: took 487.261556ms to configureAuth
	I1030 19:46:28.486981  446736 buildroot.go:189] setting minikube options for container-runtime
	I1030 19:46:28.487219  446736 config.go:182] Loaded profile config "no-preload-960512": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:46:28.487322  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:28.489873  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.490180  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.490223  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.490349  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:28.490561  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.490719  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.490827  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:28.491003  446736 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:28.491199  446736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I1030 19:46:28.491216  446736 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 19:46:28.727045  446736 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 19:46:28.727081  446736 machine.go:96] duration metric: took 1.096287528s to provisionDockerMachine
	I1030 19:46:28.727095  446736 start.go:293] postStartSetup for "no-preload-960512" (driver="kvm2")
	I1030 19:46:28.727106  446736 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 19:46:28.727125  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:28.727460  446736 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 19:46:28.727490  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:28.730071  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.730445  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.730479  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.730652  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:28.730858  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.731010  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:28.731197  446736 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa Username:docker}
	I1030 19:46:28.817529  446736 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 19:46:28.822263  446736 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 19:46:28.822299  446736 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 19:46:28.822394  446736 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 19:46:28.822517  446736 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 19:46:28.822647  446736 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 19:46:28.832488  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:46:28.858165  446736 start.go:296] duration metric: took 131.055053ms for postStartSetup
	I1030 19:46:28.858211  446736 fix.go:56] duration metric: took 23.84652817s for fixHost
	I1030 19:46:28.858235  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:28.861136  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.861480  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.861513  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.861819  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:28.862059  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.862224  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.862373  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:28.862582  446736 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:28.862786  446736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I1030 19:46:28.862797  446736 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 19:46:28.975448  446736 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730317588.951806388
	
	I1030 19:46:28.975479  446736 fix.go:216] guest clock: 1730317588.951806388
	I1030 19:46:28.975489  446736 fix.go:229] Guest: 2024-10-30 19:46:28.951806388 +0000 UTC Remote: 2024-10-30 19:46:28.858215114 +0000 UTC m=+358.930371017 (delta=93.591274ms)
	I1030 19:46:28.975521  446736 fix.go:200] guest clock delta is within tolerance: 93.591274ms
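
The delta reported above is simply the guest clock minus the host-observed remote time: 1730317588.951806388 s - 1730317588.858215114 s = 0.093591274 s, i.e. the 93.591274ms that fix.go compares against its allowed drift before deciding no clock resync is needed.
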
	I1030 19:46:28.975529  446736 start.go:83] releasing machines lock for "no-preload-960512", held for 23.963879546s
	I1030 19:46:28.975555  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:28.975849  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetIP
	I1030 19:46:28.978813  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.979310  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.979341  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.979608  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:28.980197  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:28.980429  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:28.980522  446736 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 19:46:28.980567  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:28.980682  446736 ssh_runner.go:195] Run: cat /version.json
	I1030 19:46:28.980710  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:28.984058  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.984208  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.984410  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.984435  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.984582  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:28.984613  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.984636  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.984782  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.984798  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:28.984966  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:28.984974  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.985121  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:28.985187  446736 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa Username:docker}
	I1030 19:46:28.985260  446736 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa Username:docker}
	I1030 19:46:29.063734  446736 ssh_runner.go:195] Run: systemctl --version
	I1030 19:46:29.087821  446736 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 19:46:29.236289  446736 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 19:46:29.242997  446736 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 19:46:29.243088  446736 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 19:46:29.260802  446736 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 19:46:29.260836  446736 start.go:495] detecting cgroup driver to use...
	I1030 19:46:29.260930  446736 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 19:46:29.279572  446736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 19:46:29.293359  446736 docker.go:217] disabling cri-docker service (if available) ...
	I1030 19:46:29.293423  446736 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 19:46:29.306417  446736 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 19:46:29.319617  446736 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 19:46:29.440023  446736 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 19:46:29.585541  446736 docker.go:233] disabling docker service ...
	I1030 19:46:29.585630  446736 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 19:46:29.600459  446736 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 19:46:29.613611  446736 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 19:46:29.752666  446736 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 19:46:29.880152  446736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 19:46:29.893912  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 19:46:29.913099  446736 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1030 19:46:29.913160  446736 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:29.923800  446736 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 19:46:29.923882  446736 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:29.934880  446736 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:29.946088  446736 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:29.956644  446736 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 19:46:29.967199  446736 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:29.978863  446736 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:29.996225  446736 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:30.006604  446736 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 19:46:30.015954  446736 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 19:46:30.016017  446736 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 19:46:30.029194  446736 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 19:46:30.041316  446736 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:46:30.161438  446736 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1030 19:46:30.257137  446736 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 19:46:30.257209  446736 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 19:46:30.261981  446736 start.go:563] Will wait 60s for crictl version
	I1030 19:46:30.262052  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:30.266275  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 19:46:30.305128  446736 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1030 19:46:30.305228  446736 ssh_runner.go:195] Run: crio --version
	I1030 19:46:30.335445  446736 ssh_runner.go:195] Run: crio --version
	I1030 19:46:30.367026  446736 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1030 19:46:27.143768  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:27.644294  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:28.143819  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:28.643783  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:29.144405  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:29.643941  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:30.144680  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:30.644787  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:31.143873  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:31.643857  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:29.982162  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:32.480878  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:28.939126  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:30.939780  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:30.368355  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetIP
	I1030 19:46:30.371260  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:30.371651  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:30.371679  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:30.371922  446736 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1030 19:46:30.376282  446736 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:46:30.389078  446736 kubeadm.go:883] updating cluster {Name:no-preload-960512 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:no-preload-960512 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.132 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1030 19:46:30.389193  446736 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 19:46:30.389228  446736 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:46:30.423375  446736 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1030 19:46:30.423402  446736 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1030 19:46:30.423508  446736 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:30.423511  446736 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1030 19:46:30.423562  446736 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1030 19:46:30.423578  446736 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1030 19:46:30.423595  446736 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1030 19:46:30.423536  446736 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1030 19:46:30.423511  446736 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1030 19:46:30.423634  446736 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1030 19:46:30.424979  446736 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1030 19:46:30.424988  446736 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1030 19:46:30.424996  446736 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1030 19:46:30.424987  446736 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:30.425021  446736 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1030 19:46:30.425036  446736 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1030 19:46:30.425029  446736 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1030 19:46:30.425061  446736 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1030 19:46:30.612665  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1030 19:46:30.618602  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1030 19:46:30.636563  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1030 19:46:30.680808  446736 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1030 19:46:30.680858  446736 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1030 19:46:30.680911  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:30.749318  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1030 19:46:30.750405  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1030 19:46:30.751514  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1030 19:46:30.752746  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1030 19:46:30.768614  446736 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1030 19:46:30.768663  446736 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1030 19:46:30.768714  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:30.768723  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1030 19:46:30.881778  446736 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1030 19:46:30.881811  446736 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1030 19:46:30.881821  446736 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1030 19:46:30.881844  446736 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1030 19:46:30.881862  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:30.881883  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:30.884827  446736 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1030 19:46:30.884861  446736 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1030 19:46:30.884901  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:30.891812  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1030 19:46:30.891882  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1030 19:46:30.891907  446736 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1030 19:46:30.891940  446736 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1030 19:46:30.891981  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:30.891986  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1030 19:46:30.892142  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1030 19:46:30.893781  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1030 19:46:30.992346  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1030 19:46:30.992372  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1030 19:46:30.992404  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1030 19:46:30.995602  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1030 19:46:30.995730  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1030 19:46:30.995786  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1030 19:46:31.123892  446736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1030 19:46:31.123996  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1030 19:46:31.124018  446736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1030 19:46:31.132177  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1030 19:46:31.132209  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1030 19:46:31.132311  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1030 19:46:31.132335  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1030 19:46:31.220011  446736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1030 19:46:31.220043  446736 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1030 19:46:31.220100  446736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1030 19:46:31.220224  446736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1030 19:46:31.220329  446736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1030 19:46:31.262583  446736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1030 19:46:31.262685  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1030 19:46:31.262698  446736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1030 19:46:31.269015  446736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1030 19:46:31.269117  446736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1030 19:46:31.269710  446736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1030 19:46:31.269793  446736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1030 19:46:32.667341  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:33.216743  446736 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (1.99661544s)
	I1030 19:46:33.216787  446736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1030 19:46:33.216787  446736 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.996433716s)
	I1030 19:46:33.216820  446736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1030 19:46:33.216829  446736 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1030 19:46:33.216840  446736 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (1.95412356s)
	I1030 19:46:33.216872  446736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1030 19:46:33.216884  446736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1030 19:46:33.216925  446736 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2: (1.954216284s)
	I1030 19:46:33.216964  446736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1030 19:46:33.216989  446736 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (1.947854262s)
	I1030 19:46:33.217014  446736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1030 19:46:33.217027  446736 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2: (1.947220506s)
	I1030 19:46:33.217040  446736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1030 19:46:33.217059  446736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1030 19:46:33.217140  446736 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1030 19:46:33.217178  446736 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:33.217222  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:32.144229  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:32.644079  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:33.144028  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:33.643950  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:34.143888  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:34.643861  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:35.144210  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:35.644677  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:36.144712  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:36.644549  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:34.481488  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:36.982429  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:33.438659  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:35.439072  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:37.440028  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:35.577178  446736 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (2.360267806s)
	I1030 19:46:35.577219  446736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1030 19:46:35.577227  446736 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2: (2.360144583s)
	I1030 19:46:35.577243  446736 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1030 19:46:35.577252  446736 ssh_runner.go:235] Completed: which crictl: (2.360017291s)
	I1030 19:46:35.577259  446736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1030 19:46:35.577305  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:35.577309  446736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1030 19:46:35.615490  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:39.492071  446736 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.914649003s)
	I1030 19:46:39.492116  446736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1030 19:46:39.492142  446736 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.876615301s)
	I1030 19:46:39.492211  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:39.492148  446736 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1030 19:46:39.492295  446736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1030 19:46:39.535258  446736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1030 19:46:39.535417  446736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1030 19:46:37.144681  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:37.643833  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:38.143783  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:38.644359  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:39.144745  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:39.644625  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:40.144535  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:40.643881  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:41.144754  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:41.644070  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:39.302627  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:41.480981  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:39.940272  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:42.439827  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:41.566095  446736 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.073767908s)
	I1030 19:46:41.566140  446736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1030 19:46:41.566167  446736 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1030 19:46:41.566169  446736 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.030723752s)
	I1030 19:46:41.566210  446736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1030 19:46:41.566224  446736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1030 19:46:43.628473  446736 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.06223599s)
	I1030 19:46:43.628500  446736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1030 19:46:43.628525  446736 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1030 19:46:43.628570  446736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1030 19:46:42.144672  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:42.644533  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:43.144320  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:43.644574  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:44.144465  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:44.644428  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:45.143785  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:45.643767  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:46.144467  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:46.644496  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:43.481495  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:45.481844  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:47.982318  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:44.940061  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:47.439131  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:45.079808  446736 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.451207821s)
	I1030 19:46:45.079843  446736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1030 19:46:45.079870  446736 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1030 19:46:45.079918  446736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1030 19:46:46.026472  446736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1030 19:46:46.026538  446736 cache_images.go:123] Successfully loaded all cached images
	I1030 19:46:46.026547  446736 cache_images.go:92] duration metric: took 15.603128567s to LoadCachedImages
	I1030 19:46:46.026562  446736 kubeadm.go:934] updating node { 192.168.72.132 8443 v1.31.2 crio true true} ...
	I1030 19:46:46.026722  446736 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-960512 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.132
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-960512 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1030 19:46:46.026819  446736 ssh_runner.go:195] Run: crio config
	I1030 19:46:46.080342  446736 cni.go:84] Creating CNI manager for ""
	I1030 19:46:46.080367  446736 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:46:46.080376  446736 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1030 19:46:46.080399  446736 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.132 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-960512 NodeName:no-preload-960512 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.132"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.132 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1030 19:46:46.080574  446736 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.132
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-960512"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.132"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.132"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1030 19:46:46.080645  446736 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1030 19:46:46.091323  446736 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 19:46:46.091400  446736 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1030 19:46:46.100320  446736 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1030 19:46:46.117369  446736 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 19:46:46.133667  446736 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1030 19:46:46.157251  446736 ssh_runner.go:195] Run: grep 192.168.72.132	control-plane.minikube.internal$ /etc/hosts
	I1030 19:46:46.161543  446736 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.132	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:46:46.173451  446736 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:46:46.303532  446736 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:46:46.321855  446736 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512 for IP: 192.168.72.132
	I1030 19:46:46.321883  446736 certs.go:194] generating shared ca certs ...
	I1030 19:46:46.321905  446736 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:46:46.322108  446736 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 19:46:46.322171  446736 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 19:46:46.322189  446736 certs.go:256] generating profile certs ...
	I1030 19:46:46.322294  446736 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/client.key
	I1030 19:46:46.322381  446736 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/apiserver.key.378d6029
	I1030 19:46:46.322436  446736 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/proxy-client.key
	I1030 19:46:46.322609  446736 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem (1338 bytes)
	W1030 19:46:46.322649  446736 certs.go:480] ignoring /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144_empty.pem, impossibly tiny 0 bytes
	I1030 19:46:46.322661  446736 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 19:46:46.322692  446736 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 19:46:46.322727  446736 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 19:46:46.322756  446736 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 19:46:46.322812  446736 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:46:46.323679  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 19:46:46.362339  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 19:46:46.396270  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 19:46:46.443482  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 19:46:46.468142  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1030 19:46:46.507418  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1030 19:46:46.534091  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 19:46:46.557105  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1030 19:46:46.579880  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 19:46:46.602665  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem --> /usr/share/ca-certificates/389144.pem (1338 bytes)
	I1030 19:46:46.625853  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /usr/share/ca-certificates/3891442.pem (1708 bytes)
	I1030 19:46:46.651685  446736 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1030 19:46:46.670898  446736 ssh_runner.go:195] Run: openssl version
	I1030 19:46:46.677083  446736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3891442.pem && ln -fs /usr/share/ca-certificates/3891442.pem /etc/ssl/certs/3891442.pem"
	I1030 19:46:46.688814  446736 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3891442.pem
	I1030 19:46:46.693349  446736 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 19:46:46.693399  446736 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3891442.pem
	I1030 19:46:46.699221  446736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3891442.pem /etc/ssl/certs/3ec20f2e.0"
	I1030 19:46:46.710200  446736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 19:46:46.721001  446736 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:46:46.725283  446736 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:46:46.725343  446736 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:46:46.730798  446736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 19:46:46.741915  446736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/389144.pem && ln -fs /usr/share/ca-certificates/389144.pem /etc/ssl/certs/389144.pem"
	I1030 19:46:46.752767  446736 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/389144.pem
	I1030 19:46:46.757109  446736 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 19:46:46.757150  446736 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/389144.pem
	I1030 19:46:46.762844  446736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/389144.pem /etc/ssl/certs/51391683.0"
	I1030 19:46:46.773796  446736 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 19:46:46.778156  446736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1030 19:46:46.784099  446736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1030 19:46:46.789960  446736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1030 19:46:46.796056  446736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1030 19:46:46.801880  446736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1030 19:46:46.807680  446736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1030 19:46:46.813574  446736 kubeadm.go:392] StartCluster: {Name:no-preload-960512 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.2 ClusterName:no-preload-960512 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.132 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:46:46.813694  446736 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1030 19:46:46.813735  446736 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:46:46.856225  446736 cri.go:89] found id: ""
	I1030 19:46:46.856309  446736 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1030 19:46:46.866696  446736 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1030 19:46:46.866721  446736 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1030 19:46:46.866774  446736 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1030 19:46:46.876622  446736 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1030 19:46:46.877777  446736 kubeconfig.go:125] found "no-preload-960512" server: "https://192.168.72.132:8443"
	I1030 19:46:46.880116  446736 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1030 19:46:46.889710  446736 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.132
	I1030 19:46:46.889743  446736 kubeadm.go:1160] stopping kube-system containers ...
	I1030 19:46:46.889761  446736 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1030 19:46:46.889837  446736 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:46:46.927109  446736 cri.go:89] found id: ""
	I1030 19:46:46.927177  446736 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1030 19:46:46.944519  446736 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:46:46.954607  446736 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:46:46.954626  446736 kubeadm.go:157] found existing configuration files:
	
	I1030 19:46:46.954669  446736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 19:46:46.963987  446736 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:46:46.964086  446736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:46:46.973787  446736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 19:46:46.983447  446736 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:46:46.983496  446736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:46:46.993101  446736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 19:46:47.003713  446736 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:46:47.003773  446736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:46:47.013162  446736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 19:46:47.022411  446736 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:46:47.022479  446736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1030 19:46:47.031878  446736 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 19:46:47.041616  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:47.156846  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:48.637250  446736 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.480364831s)
	I1030 19:46:48.637284  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:48.836676  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:48.908664  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:48.987298  446736 api_server.go:52] waiting for apiserver process to appear ...
	I1030 19:46:48.987411  446736 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:49.488330  446736 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:47.143932  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:47.644228  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:48.144124  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:48.643923  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:49.144466  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:49.643968  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:50.144811  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:50.643785  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:51.144372  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:51.644019  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:49.983127  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:52.482250  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:49.939257  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:52.439840  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:49.988463  446736 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:50.024092  446736 api_server.go:72] duration metric: took 1.036791371s to wait for apiserver process to appear ...
	I1030 19:46:50.024127  446736 api_server.go:88] waiting for apiserver healthz status ...
	I1030 19:46:50.024155  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:50.024711  446736 api_server.go:269] stopped: https://192.168.72.132:8443/healthz: Get "https://192.168.72.132:8443/healthz": dial tcp 192.168.72.132:8443: connect: connection refused
	I1030 19:46:50.524543  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:52.757497  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1030 19:46:52.757537  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1030 19:46:52.757558  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:52.847598  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:46:52.847638  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:46:53.024885  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:53.030717  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:46:53.030749  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:46:53.524384  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:53.531420  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:46:53.531459  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:46:54.025006  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:54.030512  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:46:54.030545  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:46:54.525157  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:54.529426  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:46:54.529453  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:46:55.025276  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:55.029608  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:46:55.029634  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:46:55.525041  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:55.529303  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:46:55.529339  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:46:56.024906  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:56.029520  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 200:
	ok
	I1030 19:46:56.035579  446736 api_server.go:141] control plane version: v1.31.2
	I1030 19:46:56.035609  446736 api_server.go:131] duration metric: took 6.011468992s to wait for apiserver health ...
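The loop above (api_server.go:253/279/103) polls /healthz roughly every 500ms until the apiserver answers 200, treating the intermediate 403 (anonymous request) and 500 (poststarthooks still completing) responses as "not ready yet". A rough Go sketch of that polling pattern is below; the InsecureSkipVerify transport, 500ms interval, and 2-minute budget are assumptions for the sketch, not minikube's exact settings (the real code trusts the cluster CA).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns
// 200 or the timeout expires. Non-200 responses are logged and retried,
// matching the behaviour visible in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: skip TLS verification for brevity.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.132:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}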
	I1030 19:46:56.035619  446736 cni.go:84] Creating CNI manager for ""
	I1030 19:46:56.035625  446736 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:46:56.037524  446736 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1030 19:46:52.144732  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:52.644528  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:53.144074  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:53.643889  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:54.143976  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:54.644535  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:55.144783  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:55.644114  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:56.144728  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:56.643846  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:56.038963  446736 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1030 19:46:56.050330  446736 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
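The bridge CNI choice above boils down to writing a single conflist into /etc/cni/net.d. The 496-byte payload itself is not reproduced in the log, so the snippet below writes a generic bridge + host-local conflist of the usual shape as a stand-in; the subnet, plugin fields, and file mode are placeholders, not the file minikube actually generated.

package main

import (
	"log"
	"os"
)

func main() {
	// A generic bridge CNI conflist used purely for illustration; the
	// real file written by minikube may differ in every field.
	conflist := `{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }
  ]
}
`
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
}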
	I1030 19:46:56.069509  446736 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 19:46:56.079237  446736 system_pods.go:59] 8 kube-system pods found
	I1030 19:46:56.079268  446736 system_pods.go:61] "coredns-7c65d6cfc9-6cdl4" [95a0a01a-b0ea-4e0c-ac44-b7f7a486e4e0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1030 19:46:56.079275  446736 system_pods.go:61] "etcd-no-preload-960512" [481cc4bc-fe2e-48fd-a486-574daf879096] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1030 19:46:56.079283  446736 system_pods.go:61] "kube-apiserver-no-preload-960512" [3662715f-dd2b-4522-aed3-3857e1104b8c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1030 19:46:56.079288  446736 system_pods.go:61] "kube-controller-manager-no-preload-960512" [cb6045a8-1703-4693-90bf-81c7c990cb38] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1030 19:46:56.079294  446736 system_pods.go:61] "kube-proxy-fxqqc" [58db3fab-21e3-41b7-99f9-46ba3081db97] Running
	I1030 19:46:56.079299  446736 system_pods.go:61] "kube-scheduler-no-preload-960512" [6e293c25-ba96-4213-b107-236a1b828918] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1030 19:46:56.079304  446736 system_pods.go:61] "metrics-server-6867b74b74-72bb5" [7734d879-b974-42fd-9610-7e81ee6cbc13] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:46:56.079307  446736 system_pods.go:61] "storage-provisioner" [d4637a77-26ab-4013-a705-08317c00dd3b] Running
	I1030 19:46:56.079313  446736 system_pods.go:74] duration metric: took 9.785027ms to wait for pod list to return data ...
	I1030 19:46:56.079327  446736 node_conditions.go:102] verifying NodePressure condition ...
	I1030 19:46:56.082617  446736 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 19:46:56.082644  446736 node_conditions.go:123] node cpu capacity is 2
	I1030 19:46:56.082658  446736 node_conditions.go:105] duration metric: took 3.325744ms to run NodePressure ...
	I1030 19:46:56.082680  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:56.353123  446736 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1030 19:46:56.357714  446736 kubeadm.go:739] kubelet initialised
	I1030 19:46:56.357740  446736 kubeadm.go:740] duration metric: took 4.581883ms waiting for restarted kubelet to initialise ...
	I1030 19:46:56.357755  446736 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:46:56.362687  446736 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:56.367124  446736 pod_ready.go:98] node "no-preload-960512" hosting pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.367153  446736 pod_ready.go:82] duration metric: took 4.443081ms for pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace to be "Ready" ...
	E1030 19:46:56.367165  446736 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-960512" hosting pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.367180  446736 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:56.371747  446736 pod_ready.go:98] node "no-preload-960512" hosting pod "etcd-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.371774  446736 pod_ready.go:82] duration metric: took 4.580967ms for pod "etcd-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	E1030 19:46:56.371785  446736 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-960512" hosting pod "etcd-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.371794  446736 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:56.375687  446736 pod_ready.go:98] node "no-preload-960512" hosting pod "kube-apiserver-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.375704  446736 pod_ready.go:82] duration metric: took 3.901023ms for pod "kube-apiserver-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	E1030 19:46:56.375712  446736 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-960512" hosting pod "kube-apiserver-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.375718  446736 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:56.472995  446736 pod_ready.go:98] node "no-preload-960512" hosting pod "kube-controller-manager-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.473036  446736 pod_ready.go:82] duration metric: took 97.300344ms for pod "kube-controller-manager-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	E1030 19:46:56.473047  446736 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-960512" hosting pod "kube-controller-manager-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.473056  446736 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-fxqqc" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:56.873717  446736 pod_ready.go:98] node "no-preload-960512" hosting pod "kube-proxy-fxqqc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.873749  446736 pod_ready.go:82] duration metric: took 400.680615ms for pod "kube-proxy-fxqqc" in "kube-system" namespace to be "Ready" ...
	E1030 19:46:56.873759  446736 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-960512" hosting pod "kube-proxy-fxqqc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.873765  446736 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:57.273361  446736 pod_ready.go:98] node "no-preload-960512" hosting pod "kube-scheduler-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:57.273392  446736 pod_ready.go:82] duration metric: took 399.61983ms for pod "kube-scheduler-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	E1030 19:46:57.273405  446736 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-960512" hosting pod "kube-scheduler-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:57.273415  446736 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:57.674201  446736 pod_ready.go:98] node "no-preload-960512" hosting pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:57.674236  446736 pod_ready.go:82] duration metric: took 400.809663ms for pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace to be "Ready" ...
	E1030 19:46:57.674251  446736 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-960512" hosting pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:57.674260  446736 pod_ready.go:39] duration metric: took 1.31649331s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
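The pod_ready.go waits above check each system-critical pod for a Ready condition and skip pods whose node is itself not Ready yet, which is why every wait here resolves in milliseconds with "skipping!". A bare-bones client-go version of the Ready check might look like the following; the kubeconfig path, pod name, and 4-minute budget mirror values from the log, while the 2-second poll interval and the direct Get-and-check loop are assumptions of the sketch.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19883-381834/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // same budget as pod_ready.go:36 above
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-6cdl4", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}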
	I1030 19:46:57.674285  446736 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1030 19:46:57.687464  446736 ops.go:34] apiserver oom_adj: -16
	I1030 19:46:57.687489  446736 kubeadm.go:597] duration metric: took 10.820761471s to restartPrimaryControlPlane
	I1030 19:46:57.687498  446736 kubeadm.go:394] duration metric: took 10.873934509s to StartCluster
	I1030 19:46:57.687514  446736 settings.go:142] acquiring lock: {Name:mk3778f95b8c1775512fd8244c173f81fde2f8a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:46:57.687586  446736 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 19:46:57.689255  446736 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/kubeconfig: {Name:mkad9ac9beb9742fc1a420e6d4412db8b65962e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:46:57.689496  446736 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.132 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 19:46:57.689574  446736 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1030 19:46:57.689683  446736 addons.go:69] Setting storage-provisioner=true in profile "no-preload-960512"
	I1030 19:46:57.689706  446736 addons.go:234] Setting addon storage-provisioner=true in "no-preload-960512"
	I1030 19:46:57.689708  446736 addons.go:69] Setting metrics-server=true in profile "no-preload-960512"
	W1030 19:46:57.689719  446736 addons.go:243] addon storage-provisioner should already be in state true
	I1030 19:46:57.689727  446736 addons.go:234] Setting addon metrics-server=true in "no-preload-960512"
	W1030 19:46:57.689737  446736 addons.go:243] addon metrics-server should already be in state true
	I1030 19:46:57.689755  446736 host.go:66] Checking if "no-preload-960512" exists ...
	I1030 19:46:57.689791  446736 config.go:182] Loaded profile config "no-preload-960512": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:46:57.689761  446736 host.go:66] Checking if "no-preload-960512" exists ...
	I1030 19:46:57.689707  446736 addons.go:69] Setting default-storageclass=true in profile "no-preload-960512"
	I1030 19:46:57.689912  446736 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-960512"
	I1030 19:46:57.690245  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:57.690258  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:57.690264  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:57.690297  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:57.690303  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:57.690322  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:57.691365  446736 out.go:177] * Verifying Kubernetes components...
	I1030 19:46:57.692941  446736 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:46:57.727794  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43995
	I1030 19:46:57.727877  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44207
	I1030 19:46:57.728127  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45127
	I1030 19:46:57.728276  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:57.728414  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:57.728517  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:57.728861  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:57.728879  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:57.729032  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:57.729053  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:57.729056  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:57.729064  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:57.729350  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:57.729429  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:57.729452  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:57.730008  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:57.730051  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:57.730124  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:57.730362  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetState
	I1030 19:46:57.731104  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:57.734295  446736 addons.go:234] Setting addon default-storageclass=true in "no-preload-960512"
	W1030 19:46:57.734316  446736 addons.go:243] addon default-storageclass should already be in state true
	I1030 19:46:57.734349  446736 host.go:66] Checking if "no-preload-960512" exists ...
	I1030 19:46:57.734742  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:57.734810  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:57.747185  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35811
	I1030 19:46:57.747680  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:57.748340  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:57.748360  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:57.748795  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:57.749029  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetState
	I1030 19:46:57.749722  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34849
	I1030 19:46:57.750318  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:57.754616  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45369
	I1030 19:46:57.754666  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:57.755024  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:57.755052  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:57.755555  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:57.755672  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:57.757159  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetState
	I1030 19:46:57.757166  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:57.757184  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:57.757504  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:57.757804  446736 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:57.758045  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:57.758089  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:57.759001  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:57.759300  446736 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 19:46:57.759313  446736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1030 19:46:57.759327  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:57.762134  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:57.762557  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:57.762582  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:57.762740  446736 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1030 19:46:54.485910  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:56.981415  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:54.939168  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:56.940263  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:57.762828  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:57.763037  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:57.763192  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:57.763344  446736 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa Username:docker}
	I1030 19:46:57.763936  446736 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1030 19:46:57.763953  446736 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1030 19:46:57.763970  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:57.766410  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:57.766771  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:57.766795  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:57.767034  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:57.767212  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:57.767385  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:57.767522  446736 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa Username:docker}
	I1030 19:46:57.776037  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45193
	I1030 19:46:57.776386  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:57.776846  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:57.776864  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:57.777184  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:57.777339  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetState
	I1030 19:46:57.778829  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:57.779118  446736 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1030 19:46:57.779138  446736 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1030 19:46:57.779156  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:57.781325  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:57.781590  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:57.781615  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:57.781755  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:57.781895  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:57.781995  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:57.782088  446736 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa Username:docker}
	I1030 19:46:57.895549  446736 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:46:57.913030  446736 node_ready.go:35] waiting up to 6m0s for node "no-preload-960512" to be "Ready" ...
	I1030 19:46:58.008228  446736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 19:46:58.009206  446736 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1030 19:46:58.009222  446736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1030 19:46:58.034347  446736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1030 19:46:58.036620  446736 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1030 19:46:58.036646  446736 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1030 19:46:58.140489  446736 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1030 19:46:58.140522  446736 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1030 19:46:58.181145  446736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1030 19:46:59.403246  446736 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.368855241s)
	I1030 19:46:59.403317  446736 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.395049308s)
	I1030 19:46:59.403331  446736 main.go:141] libmachine: Making call to close driver server
	I1030 19:46:59.403340  446736 main.go:141] libmachine: (no-preload-960512) Calling .Close
	I1030 19:46:59.403356  446736 main.go:141] libmachine: Making call to close driver server
	I1030 19:46:59.403369  446736 main.go:141] libmachine: (no-preload-960512) Calling .Close
	I1030 19:46:59.403657  446736 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:46:59.403673  446736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:46:59.403681  446736 main.go:141] libmachine: Making call to close driver server
	I1030 19:46:59.403688  446736 main.go:141] libmachine: (no-preload-960512) Calling .Close
	I1030 19:46:59.403766  446736 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:46:59.403770  446736 main.go:141] libmachine: (no-preload-960512) DBG | Closing plugin on server side
	I1030 19:46:59.403778  446736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:46:59.403790  446736 main.go:141] libmachine: Making call to close driver server
	I1030 19:46:59.403796  446736 main.go:141] libmachine: (no-preload-960512) Calling .Close
	I1030 19:46:59.403939  446736 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:46:59.403954  446736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:46:59.404023  446736 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:46:59.404059  446736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:46:59.404071  446736 main.go:141] libmachine: (no-preload-960512) DBG | Closing plugin on server side
	I1030 19:46:59.411114  446736 main.go:141] libmachine: Making call to close driver server
	I1030 19:46:59.411136  446736 main.go:141] libmachine: (no-preload-960512) Calling .Close
	I1030 19:46:59.411365  446736 main.go:141] libmachine: (no-preload-960512) DBG | Closing plugin on server side
	I1030 19:46:59.411421  446736 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:46:59.411437  446736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:46:59.513065  446736 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.33186887s)
	I1030 19:46:59.513150  446736 main.go:141] libmachine: Making call to close driver server
	I1030 19:46:59.513168  446736 main.go:141] libmachine: (no-preload-960512) Calling .Close
	I1030 19:46:59.513455  446736 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:46:59.513481  446736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:46:59.513486  446736 main.go:141] libmachine: (no-preload-960512) DBG | Closing plugin on server side
	I1030 19:46:59.513491  446736 main.go:141] libmachine: Making call to close driver server
	I1030 19:46:59.513537  446736 main.go:141] libmachine: (no-preload-960512) Calling .Close
	I1030 19:46:59.513769  446736 main.go:141] libmachine: (no-preload-960512) DBG | Closing plugin on server side
	I1030 19:46:59.513797  446736 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:46:59.513809  446736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:46:59.513826  446736 addons.go:475] Verifying addon metrics-server=true in "no-preload-960512"
	I1030 19:46:59.516354  446736 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1030 19:46:59.517886  446736 addons.go:510] duration metric: took 1.828322965s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
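The addon step logged above follows a fixed shape: each manifest is scp'd into /etc/kubernetes/addons/ on the guest and then applied with the guest's own kubectl against /var/lib/minikube/kubeconfig. A minimal Go sketch of that shape (illustrative only; scpToGuest and runOverSSH are hypothetical stand-ins for minikube's ssh_runner helpers):

    package example

    // installAddon copies one addon manifest onto the guest and applies it with
    // the guest's kubectl, mirroring the scp + "kubectl apply -f" pairs above.
    func installAddon(
        scpToGuest func(data []byte, dst string) error,
        runOverSSH func(cmd string) (string, error),
        manifest []byte, dst string,
    ) error {
        if err := scpToGuest(manifest, dst); err != nil {
            return err
        }
        _, err := runOverSSH(
            "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
                "/var/lib/minikube/binaries/v1.31.2/kubectl apply -f " + dst)
        return err
    }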
	I1030 19:46:59.916839  446736 node_ready.go:53] node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:57.143829  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:57.644245  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:58.144327  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:58.644684  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:59.144712  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:59.644799  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:00.144222  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:00.644111  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:01.144268  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:01.644631  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
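The half-second cadence of the pgrep lines above is a liveness probe: the runner keeps asking the guest whether a kube-apiserver process matching the minikube command line exists yet. A rough Go sketch of that polling loop (not minikube's actual code; runOverSSH is a hypothetical SSH helper):

    package example

    import (
        "fmt"
        "time"
    )

    // waitForAPIServerProcess polls `pgrep -xnf kube-apiserver.*minikube.*` on the
    // guest every 500ms until it reports a PID or the timeout expires.
    func waitForAPIServerProcess(runOverSSH func(cmd string) (string, error), timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // -f matches the full command line, -x requires a whole-line match, -n picks the newest PID
            if out, err := runOverSSH(`sudo pgrep -xnf kube-apiserver.*minikube.*`); err == nil && out != "" {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
    }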
	I1030 19:46:58.982694  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:00.984014  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:59.439638  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:01.939460  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:02.416750  446736 node_ready.go:53] node "no-preload-960512" has status "Ready":"False"
	I1030 19:47:03.416443  446736 node_ready.go:49] node "no-preload-960512" has status "Ready":"True"
	I1030 19:47:03.416469  446736 node_ready.go:38] duration metric: took 5.503404181s for node "no-preload-960512" to be "Ready" ...
	I1030 19:47:03.416479  446736 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:47:03.422219  446736 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace to be "Ready" ...
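node_ready.go and pod_ready.go above implement the same pattern: poll the API server until the node, and then each system-critical pod, carries a Ready condition with status True. A minimal client-go sketch of the node half (illustrative only, not minikube's implementation):

    package example

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForNodeReady mirrors the "waiting up to 6m0s for node ... to be Ready" loop:
    // re-fetch the node on an interval and stop once its Ready condition is True.
    func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return false, nil // transient errors: keep polling until the timeout
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }

The repeated pod_ready.go:103 lines showing "Ready":"False" for the metrics-server pods are the pod-side version of this loop not converging.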
	I1030 19:47:02.143881  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:02.644208  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:03.144411  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:03.643948  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:04.144028  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:04.644179  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:05.144791  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:05.643983  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:06.143859  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:06.644436  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:03.481239  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:05.481271  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:07.482108  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:04.439288  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:06.439454  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:05.428589  446736 pod_ready.go:103] pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:07.430975  446736 pod_ready.go:103] pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:09.928214  446736 pod_ready.go:103] pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:07.144765  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:07.644280  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:08.144381  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:08.644099  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:09.144129  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:09.643864  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:10.144105  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:10.643752  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:11.144135  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:11.644172  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:09.982150  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:12.481265  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:08.939357  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:10.940087  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:10.430572  446736 pod_ready.go:93] pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace has status "Ready":"True"
	I1030 19:47:10.430598  446736 pod_ready.go:82] duration metric: took 7.008352985s for pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.430610  446736 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.436673  446736 pod_ready.go:93] pod "etcd-no-preload-960512" in "kube-system" namespace has status "Ready":"True"
	I1030 19:47:10.436699  446736 pod_ready.go:82] duration metric: took 6.082545ms for pod "etcd-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.436711  446736 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.442262  446736 pod_ready.go:93] pod "kube-apiserver-no-preload-960512" in "kube-system" namespace has status "Ready":"True"
	I1030 19:47:10.442282  446736 pod_ready.go:82] duration metric: took 5.563816ms for pod "kube-apiserver-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.442292  446736 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.446170  446736 pod_ready.go:93] pod "kube-controller-manager-no-preload-960512" in "kube-system" namespace has status "Ready":"True"
	I1030 19:47:10.446189  446736 pod_ready.go:82] duration metric: took 3.890123ms for pod "kube-controller-manager-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.446198  446736 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fxqqc" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.450190  446736 pod_ready.go:93] pod "kube-proxy-fxqqc" in "kube-system" namespace has status "Ready":"True"
	I1030 19:47:10.450216  446736 pod_ready.go:82] duration metric: took 4.011125ms for pod "kube-proxy-fxqqc" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.450226  446736 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.826537  446736 pod_ready.go:93] pod "kube-scheduler-no-preload-960512" in "kube-system" namespace has status "Ready":"True"
	I1030 19:47:10.826572  446736 pod_ready.go:82] duration metric: took 376.338504ms for pod "kube-scheduler-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.826587  446736 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:12.834756  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:12.144391  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:12.644441  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:13.143916  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:13.644779  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:14.144680  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:14.644634  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:15.144050  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:15.644738  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:16.143957  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:16.144037  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:16.184282  447486 cri.go:89] found id: ""
	I1030 19:47:16.184310  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.184320  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:16.184327  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:16.184403  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:16.225359  447486 cri.go:89] found id: ""
	I1030 19:47:16.225388  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.225397  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:16.225403  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:16.225471  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:16.260591  447486 cri.go:89] found id: ""
	I1030 19:47:16.260625  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.260635  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:16.260641  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:16.260695  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:16.299562  447486 cri.go:89] found id: ""
	I1030 19:47:16.299591  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.299602  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:16.299609  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:16.299682  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:16.334753  447486 cri.go:89] found id: ""
	I1030 19:47:16.334781  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.334789  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:16.334795  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:16.334877  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:16.371588  447486 cri.go:89] found id: ""
	I1030 19:47:16.371619  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.371628  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:16.371634  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:16.371689  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:16.406668  447486 cri.go:89] found id: ""
	I1030 19:47:16.406699  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.406710  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:16.406718  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:16.406786  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:16.443050  447486 cri.go:89] found id: ""
	I1030 19:47:16.443081  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.443096  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:16.443109  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:16.443125  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:16.492898  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:16.492936  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:16.506310  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:16.506343  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:16.637629  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:16.637660  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:16.637677  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:16.709581  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:16.709621  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
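When none of the expected control-plane containers are found (the runner using the v1.20.0 binaries above), the log gathering falls back to a fixed diagnostics sweep: kubelet and CRI-O journals, dmesg, a describe-nodes attempt, and a container listing. A compact sketch of that sweep, with the commands taken verbatim from the log (run is a hypothetical SSH helper; this is not minikube's logs.go):

    package example

    // gatherDiagnostics runs the same diagnostic commands seen above and collects
    // whatever output succeeds; the refused describe-nodes call simply yields nothing.
    func gatherDiagnostics(run func(cmd string) (string, error)) map[string]string {
        cmds := []struct{ name, cmd string }{
            {"kubelet", "sudo journalctl -u kubelet -n 400"},
            {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
            {"describe nodes", "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
            {"CRI-O", "sudo journalctl -u crio -n 400"},
            {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
        }
        out := make(map[string]string)
        for _, c := range cmds {
            if s, err := run(c.cmd); err == nil {
                out[c.name] = s
            }
        }
        return out
    }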
	I1030 19:47:14.481660  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:16.981807  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:13.438777  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:15.439457  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:17.939606  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:15.335280  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:17.833216  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:19.833320  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:19.253501  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:19.267200  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:19.267276  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:19.303608  447486 cri.go:89] found id: ""
	I1030 19:47:19.303641  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.303651  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:19.303658  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:19.303711  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:19.341311  447486 cri.go:89] found id: ""
	I1030 19:47:19.341343  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.341354  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:19.341363  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:19.341427  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:19.376949  447486 cri.go:89] found id: ""
	I1030 19:47:19.376977  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.376987  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:19.376996  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:19.377075  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:19.414164  447486 cri.go:89] found id: ""
	I1030 19:47:19.414197  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.414209  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:19.414218  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:19.414308  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:19.450637  447486 cri.go:89] found id: ""
	I1030 19:47:19.450671  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.450683  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:19.450692  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:19.450761  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:19.485315  447486 cri.go:89] found id: ""
	I1030 19:47:19.485345  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.485355  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:19.485364  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:19.485427  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:19.519873  447486 cri.go:89] found id: ""
	I1030 19:47:19.519901  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.519911  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:19.519919  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:19.519982  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:19.555168  447486 cri.go:89] found id: ""
	I1030 19:47:19.555198  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.555211  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:19.555223  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:19.555239  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:19.607227  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:19.607265  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:19.621465  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:19.621498  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:19.700837  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:19.700869  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:19.700882  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:19.774428  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:19.774468  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:18.982345  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:21.482165  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:19.940122  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:22.439405  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:22.333449  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:24.833942  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:22.314410  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:22.327998  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:22.328083  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:22.365583  447486 cri.go:89] found id: ""
	I1030 19:47:22.365611  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.365622  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:22.365631  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:22.365694  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:22.398964  447486 cri.go:89] found id: ""
	I1030 19:47:22.398996  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.399008  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:22.399016  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:22.399092  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:22.435132  447486 cri.go:89] found id: ""
	I1030 19:47:22.435169  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.435181  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:22.435188  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:22.435252  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:22.471510  447486 cri.go:89] found id: ""
	I1030 19:47:22.471544  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.471557  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:22.471574  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:22.471630  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:22.509611  447486 cri.go:89] found id: ""
	I1030 19:47:22.509639  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.509647  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:22.509653  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:22.509707  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:22.546502  447486 cri.go:89] found id: ""
	I1030 19:47:22.546539  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.546552  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:22.546560  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:22.546630  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:22.584560  447486 cri.go:89] found id: ""
	I1030 19:47:22.584593  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.584605  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:22.584613  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:22.584676  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:22.621421  447486 cri.go:89] found id: ""
	I1030 19:47:22.621461  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.621474  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:22.621486  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:22.621505  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:22.634998  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:22.635038  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:22.711002  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:22.711028  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:22.711047  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:22.790673  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:22.790712  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:22.831804  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:22.831851  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:25.386915  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:25.399854  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:25.399954  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:25.438346  447486 cri.go:89] found id: ""
	I1030 19:47:25.438381  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.438406  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:25.438416  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:25.438500  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:25.474888  447486 cri.go:89] found id: ""
	I1030 19:47:25.474915  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.474924  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:25.474931  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:25.474994  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:25.511925  447486 cri.go:89] found id: ""
	I1030 19:47:25.511955  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.511966  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:25.511973  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:25.512038  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:25.551027  447486 cri.go:89] found id: ""
	I1030 19:47:25.551058  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.551067  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:25.551073  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:25.551144  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:25.584736  447486 cri.go:89] found id: ""
	I1030 19:47:25.584764  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.584773  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:25.584779  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:25.584847  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:25.632765  447486 cri.go:89] found id: ""
	I1030 19:47:25.632798  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.632810  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:25.632818  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:25.632893  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:25.682501  447486 cri.go:89] found id: ""
	I1030 19:47:25.682528  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.682536  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:25.682543  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:25.682591  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:25.728306  447486 cri.go:89] found id: ""
	I1030 19:47:25.728340  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.728352  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:25.728365  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:25.728397  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:25.781908  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:25.781944  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:25.795864  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:25.795899  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:25.868350  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:25.868378  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:25.868392  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:25.944244  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:25.944277  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:23.981016  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:25.982186  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:24.942113  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:27.438568  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:27.333623  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:29.334460  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:28.488216  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:28.501481  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:28.501558  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:28.536808  447486 cri.go:89] found id: ""
	I1030 19:47:28.536838  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.536849  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:28.536857  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:28.536923  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:28.571819  447486 cri.go:89] found id: ""
	I1030 19:47:28.571855  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.571867  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:28.571885  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:28.571966  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:28.605532  447486 cri.go:89] found id: ""
	I1030 19:47:28.605571  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.605582  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:28.605610  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:28.605682  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:28.642108  447486 cri.go:89] found id: ""
	I1030 19:47:28.642140  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.642152  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:28.642159  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:28.642234  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:28.680036  447486 cri.go:89] found id: ""
	I1030 19:47:28.680065  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.680078  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:28.680086  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:28.680151  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:28.716135  447486 cri.go:89] found id: ""
	I1030 19:47:28.716162  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.716171  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:28.716177  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:28.716238  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:28.752364  447486 cri.go:89] found id: ""
	I1030 19:47:28.752398  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.752406  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:28.752413  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:28.752478  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:28.788396  447486 cri.go:89] found id: ""
	I1030 19:47:28.788434  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.788447  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:28.788461  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:28.788476  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:28.841560  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:28.841595  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:28.856134  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:28.856178  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:28.930463  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:28.930507  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:28.930527  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:29.013743  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:29.013795  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
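Each "listing CRI containers" / "found id" / "No container was found" triple above comes from one crictl query per component, with empty output treated as absence. A small illustrative helper in the same spirit (run is a hypothetical command helper; minikube executes this over SSH):

    package example

    import "strings"

    // findContainers lists container IDs for one component via crictl; an empty
    // result corresponds to the `found id: ""` / "0 containers" lines above.
    func findContainers(run func(cmd string) (string, error), component string) ([]string, error) {
        out, err := run("sudo crictl ps -a --quiet --name=" + component)
        if err != nil {
            return nil, err
        }
        return strings.Fields(out), nil // one ID per line when any containers exist
    }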
	I1030 19:47:31.557942  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:31.573562  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:31.573654  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:31.625349  447486 cri.go:89] found id: ""
	I1030 19:47:31.625378  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.625386  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:31.625392  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:31.625442  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:31.689536  447486 cri.go:89] found id: ""
	I1030 19:47:31.689566  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.689574  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:31.689581  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:31.689632  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:31.723758  447486 cri.go:89] found id: ""
	I1030 19:47:31.723794  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.723806  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:31.723814  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:31.723890  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:31.762671  447486 cri.go:89] found id: ""
	I1030 19:47:31.762698  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.762707  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:31.762713  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:31.762761  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:31.797658  447486 cri.go:89] found id: ""
	I1030 19:47:31.797686  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.797694  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:31.797702  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:31.797792  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:28.481158  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:30.981477  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:32.981593  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:29.439072  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:31.940019  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:31.833540  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:34.334678  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:31.832186  447486 cri.go:89] found id: ""
	I1030 19:47:31.832217  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.832228  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:31.832236  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:31.832298  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:31.866820  447486 cri.go:89] found id: ""
	I1030 19:47:31.866853  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.866866  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:31.866875  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:31.866937  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:31.901888  447486 cri.go:89] found id: ""
	I1030 19:47:31.901913  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.901922  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:31.901932  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:31.901944  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:31.992343  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:31.992380  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:32.030519  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:32.030559  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:32.084442  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:32.084478  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:32.098919  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:32.098954  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:32.171034  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:34.671243  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:34.685879  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:34.685972  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:34.720657  447486 cri.go:89] found id: ""
	I1030 19:47:34.720686  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.720694  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:34.720700  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:34.720757  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:34.759571  447486 cri.go:89] found id: ""
	I1030 19:47:34.759602  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.759615  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:34.759624  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:34.759685  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:34.795273  447486 cri.go:89] found id: ""
	I1030 19:47:34.795313  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.795322  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:34.795329  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:34.795450  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:34.828999  447486 cri.go:89] found id: ""
	I1030 19:47:34.829035  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.829047  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:34.829054  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:34.829158  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:34.865620  447486 cri.go:89] found id: ""
	I1030 19:47:34.865661  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.865674  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:34.865682  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:34.865753  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:34.900768  447486 cri.go:89] found id: ""
	I1030 19:47:34.900801  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.900812  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:34.900820  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:34.900889  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:34.945023  447486 cri.go:89] found id: ""
	I1030 19:47:34.945048  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.945057  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:34.945063  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:34.945118  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:34.980458  447486 cri.go:89] found id: ""
	I1030 19:47:34.980483  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.980492  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:34.980501  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:34.980514  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:35.052570  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:35.052597  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:35.052613  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:35.133825  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:35.133869  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:35.176016  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:35.176063  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:35.228866  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:35.228903  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:34.982702  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:37.481103  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:34.438712  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:36.938856  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:36.837275  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:39.332612  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:37.743408  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:37.757472  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:37.757547  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:37.794818  447486 cri.go:89] found id: ""
	I1030 19:47:37.794847  447486 logs.go:282] 0 containers: []
	W1030 19:47:37.794856  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:37.794862  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:37.794928  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:37.830025  447486 cri.go:89] found id: ""
	I1030 19:47:37.830064  447486 logs.go:282] 0 containers: []
	W1030 19:47:37.830077  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:37.830086  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:37.830150  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:37.864862  447486 cri.go:89] found id: ""
	I1030 19:47:37.864893  447486 logs.go:282] 0 containers: []
	W1030 19:47:37.864902  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:37.864908  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:37.864958  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:37.901650  447486 cri.go:89] found id: ""
	I1030 19:47:37.901699  447486 logs.go:282] 0 containers: []
	W1030 19:47:37.901713  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:37.901722  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:37.901780  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:37.935824  447486 cri.go:89] found id: ""
	I1030 19:47:37.935854  447486 logs.go:282] 0 containers: []
	W1030 19:47:37.935862  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:37.935868  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:37.935930  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:37.972774  447486 cri.go:89] found id: ""
	I1030 19:47:37.972805  447486 logs.go:282] 0 containers: []
	W1030 19:47:37.972813  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:37.972819  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:37.972868  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:38.007815  447486 cri.go:89] found id: ""
	I1030 19:47:38.007845  447486 logs.go:282] 0 containers: []
	W1030 19:47:38.007856  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:38.007864  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:38.007931  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:38.042525  447486 cri.go:89] found id: ""
	I1030 19:47:38.042559  447486 logs.go:282] 0 containers: []
	W1030 19:47:38.042571  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:38.042584  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:38.042600  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:38.122022  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:38.122048  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:38.122065  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:38.200534  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:38.200575  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:38.240118  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:38.240154  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:38.291936  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:38.291976  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:40.806105  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:40.821268  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:40.821343  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:40.857151  447486 cri.go:89] found id: ""
	I1030 19:47:40.857186  447486 logs.go:282] 0 containers: []
	W1030 19:47:40.857198  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:40.857207  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:40.857266  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:40.893603  447486 cri.go:89] found id: ""
	I1030 19:47:40.893639  447486 logs.go:282] 0 containers: []
	W1030 19:47:40.893648  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:40.893654  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:40.893720  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:40.935294  447486 cri.go:89] found id: ""
	I1030 19:47:40.935330  447486 logs.go:282] 0 containers: []
	W1030 19:47:40.935342  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:40.935349  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:40.935418  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:40.971509  447486 cri.go:89] found id: ""
	I1030 19:47:40.971536  447486 logs.go:282] 0 containers: []
	W1030 19:47:40.971544  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:40.971550  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:40.971610  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:41.009895  447486 cri.go:89] found id: ""
	I1030 19:47:41.009932  447486 logs.go:282] 0 containers: []
	W1030 19:47:41.009941  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:41.009948  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:41.010008  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:41.045170  447486 cri.go:89] found id: ""
	I1030 19:47:41.045208  447486 logs.go:282] 0 containers: []
	W1030 19:47:41.045221  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:41.045229  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:41.045288  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:41.077654  447486 cri.go:89] found id: ""
	I1030 19:47:41.077684  447486 logs.go:282] 0 containers: []
	W1030 19:47:41.077695  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:41.077704  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:41.077771  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:41.111509  447486 cri.go:89] found id: ""
	I1030 19:47:41.111543  447486 logs.go:282] 0 containers: []
	W1030 19:47:41.111552  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:41.111562  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:41.111574  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:41.164939  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:41.164976  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:41.178512  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:41.178589  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:41.258783  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:41.258813  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:41.258832  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:41.338192  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:41.338230  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:39.481210  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:41.481439  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:38.938987  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:40.941386  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:41.333705  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:43.833502  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:43.878155  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:43.892376  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:43.892452  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:43.930556  447486 cri.go:89] found id: ""
	I1030 19:47:43.930594  447486 logs.go:282] 0 containers: []
	W1030 19:47:43.930606  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:43.930614  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:43.930679  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:43.970588  447486 cri.go:89] found id: ""
	I1030 19:47:43.970619  447486 logs.go:282] 0 containers: []
	W1030 19:47:43.970630  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:43.970638  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:43.970706  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:44.005467  447486 cri.go:89] found id: ""
	I1030 19:47:44.005497  447486 logs.go:282] 0 containers: []
	W1030 19:47:44.005506  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:44.005512  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:44.005573  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:44.039126  447486 cri.go:89] found id: ""
	I1030 19:47:44.039164  447486 logs.go:282] 0 containers: []
	W1030 19:47:44.039173  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:44.039179  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:44.039239  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:44.072961  447486 cri.go:89] found id: ""
	I1030 19:47:44.072994  447486 logs.go:282] 0 containers: []
	W1030 19:47:44.073006  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:44.073014  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:44.073109  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:44.105864  447486 cri.go:89] found id: ""
	I1030 19:47:44.105891  447486 logs.go:282] 0 containers: []
	W1030 19:47:44.105900  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:44.105907  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:44.105956  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:44.138198  447486 cri.go:89] found id: ""
	I1030 19:47:44.138240  447486 logs.go:282] 0 containers: []
	W1030 19:47:44.138250  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:44.138264  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:44.138331  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:44.172529  447486 cri.go:89] found id: ""
	I1030 19:47:44.172558  447486 logs.go:282] 0 containers: []
	W1030 19:47:44.172567  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:44.172577  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:44.172594  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:44.248215  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:44.248254  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:44.286169  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:44.286202  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:44.341129  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:44.341167  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:44.354570  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:44.354597  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:44.427790  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:43.481483  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:45.482271  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:47.981312  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:43.440759  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:45.938783  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:47.940512  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:46.332448  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:48.333216  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:46.928728  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:46.943068  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:46.943154  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:46.978385  447486 cri.go:89] found id: ""
	I1030 19:47:46.978416  447486 logs.go:282] 0 containers: []
	W1030 19:47:46.978428  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:46.978436  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:46.978522  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:47.020413  447486 cri.go:89] found id: ""
	I1030 19:47:47.020457  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.020469  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:47.020476  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:47.020547  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:47.061492  447486 cri.go:89] found id: ""
	I1030 19:47:47.061526  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.061538  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:47.061547  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:47.061611  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:47.097621  447486 cri.go:89] found id: ""
	I1030 19:47:47.097659  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.097670  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:47.097679  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:47.097739  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:47.131740  447486 cri.go:89] found id: ""
	I1030 19:47:47.131769  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.131779  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:47.131785  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:47.131856  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:47.167623  447486 cri.go:89] found id: ""
	I1030 19:47:47.167661  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.167674  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:47.167682  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:47.167746  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:47.202299  447486 cri.go:89] found id: ""
	I1030 19:47:47.202328  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.202337  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:47.202344  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:47.202401  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:47.236652  447486 cri.go:89] found id: ""
	I1030 19:47:47.236686  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.236695  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:47.236704  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:47.236716  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:47.289700  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:47.289740  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:47.304929  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:47.304964  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:47.374811  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:47.374842  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:47.374858  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:47.449161  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:47.449196  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:49.989730  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:50.002741  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:50.002821  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:50.037602  447486 cri.go:89] found id: ""
	I1030 19:47:50.037636  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.037647  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:50.037655  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:50.037724  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:50.071346  447486 cri.go:89] found id: ""
	I1030 19:47:50.071383  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.071395  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:50.071405  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:50.071473  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:50.106657  447486 cri.go:89] found id: ""
	I1030 19:47:50.106698  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.106711  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:50.106719  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:50.106783  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:50.140974  447486 cri.go:89] found id: ""
	I1030 19:47:50.141012  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.141025  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:50.141032  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:50.141105  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:50.177715  447486 cri.go:89] found id: ""
	I1030 19:47:50.177748  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.177756  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:50.177763  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:50.177824  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:50.212234  447486 cri.go:89] found id: ""
	I1030 19:47:50.212263  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.212272  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:50.212278  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:50.212337  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:50.250791  447486 cri.go:89] found id: ""
	I1030 19:47:50.250826  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.250835  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:50.250842  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:50.250908  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:50.288575  447486 cri.go:89] found id: ""
	I1030 19:47:50.288607  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.288615  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:50.288628  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:50.288643  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:50.358015  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:50.358039  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:50.358054  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:50.433194  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:50.433235  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:50.473485  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:50.473519  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:50.523581  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:50.523618  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:49.981614  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:51.982079  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:50.439717  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:52.940170  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:50.333498  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:52.832848  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:54.833689  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:53.038393  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:53.052835  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:53.052910  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:53.088797  447486 cri.go:89] found id: ""
	I1030 19:47:53.088828  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.088837  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:53.088843  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:53.088897  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:53.124627  447486 cri.go:89] found id: ""
	I1030 19:47:53.124659  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.124668  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:53.124674  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:53.124724  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:53.159127  447486 cri.go:89] found id: ""
	I1030 19:47:53.159163  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.159175  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:53.159183  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:53.159244  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:53.191770  447486 cri.go:89] found id: ""
	I1030 19:47:53.191801  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.191810  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:53.191817  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:53.191885  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:53.227727  447486 cri.go:89] found id: ""
	I1030 19:47:53.227761  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.227774  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:53.227781  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:53.227842  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:53.262937  447486 cri.go:89] found id: ""
	I1030 19:47:53.262969  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.262981  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:53.262989  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:53.263060  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:53.296070  447486 cri.go:89] found id: ""
	I1030 19:47:53.296113  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.296124  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:53.296133  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:53.296197  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:53.332628  447486 cri.go:89] found id: ""
	I1030 19:47:53.332663  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.332674  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:53.332687  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:53.332702  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:53.385004  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:53.385046  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:53.400139  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:53.400185  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:53.477792  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:53.477826  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:53.477858  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:53.553145  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:53.553186  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:56.094454  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:56.107827  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:56.107900  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:56.141701  447486 cri.go:89] found id: ""
	I1030 19:47:56.141739  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.141751  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:56.141763  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:56.141831  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:56.179973  447486 cri.go:89] found id: ""
	I1030 19:47:56.180003  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.180016  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:56.180023  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:56.180099  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:56.220456  447486 cri.go:89] found id: ""
	I1030 19:47:56.220486  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.220496  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:56.220503  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:56.220578  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:56.259699  447486 cri.go:89] found id: ""
	I1030 19:47:56.259727  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.259736  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:56.259741  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:56.259791  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:56.302726  447486 cri.go:89] found id: ""
	I1030 19:47:56.302762  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.302775  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:56.302783  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:56.302850  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:56.339791  447486 cri.go:89] found id: ""
	I1030 19:47:56.339819  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.339828  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:56.339834  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:56.339889  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:56.381291  447486 cri.go:89] found id: ""
	I1030 19:47:56.381325  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.381337  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:56.381345  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:56.381401  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:56.417150  447486 cri.go:89] found id: ""
	I1030 19:47:56.417182  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.417194  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:56.417207  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:56.417227  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:56.466963  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:56.467005  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:56.481528  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:56.481557  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:56.554843  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:56.554872  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:56.554887  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:56.635798  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:56.635846  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:54.480601  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:56.481475  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:55.439618  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:57.940438  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:57.333560  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:59.337314  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:59.179829  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:59.193083  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:59.193160  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:59.231253  447486 cri.go:89] found id: ""
	I1030 19:47:59.231288  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.231302  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:59.231311  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:59.231382  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:59.265982  447486 cri.go:89] found id: ""
	I1030 19:47:59.266013  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.266022  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:59.266028  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:59.266090  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:59.303724  447486 cri.go:89] found id: ""
	I1030 19:47:59.303761  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.303773  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:59.303781  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:59.303848  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:59.342137  447486 cri.go:89] found id: ""
	I1030 19:47:59.342163  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.342172  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:59.342180  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:59.342246  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:59.382652  447486 cri.go:89] found id: ""
	I1030 19:47:59.382684  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.382693  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:59.382700  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:59.382761  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:59.422428  447486 cri.go:89] found id: ""
	I1030 19:47:59.422454  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.422463  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:59.422469  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:59.422539  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:59.464047  447486 cri.go:89] found id: ""
	I1030 19:47:59.464079  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.464089  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:59.464095  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:59.464146  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:59.500658  447486 cri.go:89] found id: ""
	I1030 19:47:59.500693  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.500705  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:59.500716  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:59.500732  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:59.554634  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:59.554679  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:59.567956  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:59.567986  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:59.646305  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:59.646332  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:59.646349  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:59.730008  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:59.730052  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:58.486516  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:00.982184  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:00.439220  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:02.439945  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:01.832883  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:03.834027  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:02.274141  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:02.287246  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:02.287320  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:02.322166  447486 cri.go:89] found id: ""
	I1030 19:48:02.322320  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.322336  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:02.322346  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:02.322421  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:02.358101  447486 cri.go:89] found id: ""
	I1030 19:48:02.358131  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.358140  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:02.358146  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:02.358209  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:02.394812  447486 cri.go:89] found id: ""
	I1030 19:48:02.394898  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.394915  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:02.394924  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:02.394990  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:02.429128  447486 cri.go:89] found id: ""
	I1030 19:48:02.429165  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.429177  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:02.429186  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:02.429358  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:02.465878  447486 cri.go:89] found id: ""
	I1030 19:48:02.465907  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.465915  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:02.465921  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:02.465973  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:02.502758  447486 cri.go:89] found id: ""
	I1030 19:48:02.502794  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.502805  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:02.502813  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:02.502879  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:02.540111  447486 cri.go:89] found id: ""
	I1030 19:48:02.540142  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.540152  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:02.540158  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:02.540222  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:02.574728  447486 cri.go:89] found id: ""
	I1030 19:48:02.574762  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.574774  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:02.574787  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:02.574804  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:02.613333  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:02.613374  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:02.664970  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:02.665013  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:02.679594  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:02.679626  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:02.744184  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:02.744208  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:02.744222  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:05.326826  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:05.340166  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:05.340232  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:05.376742  447486 cri.go:89] found id: ""
	I1030 19:48:05.376774  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.376789  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:05.376795  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:05.376865  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:05.413981  447486 cri.go:89] found id: ""
	I1030 19:48:05.414026  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.414039  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:05.414047  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:05.414121  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:05.449811  447486 cri.go:89] found id: ""
	I1030 19:48:05.449842  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.449854  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:05.449862  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:05.449925  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:05.502576  447486 cri.go:89] found id: ""
	I1030 19:48:05.502610  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.502622  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:05.502630  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:05.502721  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:05.536747  447486 cri.go:89] found id: ""
	I1030 19:48:05.536778  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.536787  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:05.536793  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:05.536857  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:05.570308  447486 cri.go:89] found id: ""
	I1030 19:48:05.570335  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.570344  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:05.570353  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:05.570420  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:05.605006  447486 cri.go:89] found id: ""
	I1030 19:48:05.605037  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.605048  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:05.605054  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:05.605109  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:05.638651  447486 cri.go:89] found id: ""
	I1030 19:48:05.638681  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.638693  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:05.638705  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:05.638720  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:05.690734  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:05.690769  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:05.704561  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:05.704588  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:05.779426  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:05.779448  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:05.779471  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:05.866320  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:05.866355  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:03.481614  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:05.482428  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:07.981875  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:04.939485  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:07.438925  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:06.334094  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:08.834525  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:08.409454  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:08.423687  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:08.423767  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:08.463554  447486 cri.go:89] found id: ""
	I1030 19:48:08.463581  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.463591  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:08.463597  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:08.463654  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:08.500159  447486 cri.go:89] found id: ""
	I1030 19:48:08.500186  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.500195  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:08.500200  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:08.500253  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:08.535670  447486 cri.go:89] found id: ""
	I1030 19:48:08.535701  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.535710  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:08.535717  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:08.535785  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:08.572921  447486 cri.go:89] found id: ""
	I1030 19:48:08.572958  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.572968  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:08.572975  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:08.573052  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:08.610873  447486 cri.go:89] found id: ""
	I1030 19:48:08.610908  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.610918  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:08.610924  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:08.610978  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:08.645430  447486 cri.go:89] found id: ""
	I1030 19:48:08.645458  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.645466  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:08.645475  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:08.645528  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:08.681212  447486 cri.go:89] found id: ""
	I1030 19:48:08.681246  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.681258  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:08.681266  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:08.681332  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:08.716619  447486 cri.go:89] found id: ""
	I1030 19:48:08.716651  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.716661  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:08.716671  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:08.716682  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:08.794090  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:08.794134  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:08.833209  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:08.833251  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:08.884781  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:08.884817  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:08.898556  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:08.898586  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:08.967713  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:11.468230  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:11.482593  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:11.482660  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:11.518191  447486 cri.go:89] found id: ""
	I1030 19:48:11.518225  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.518235  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:11.518242  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:11.518295  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:11.557199  447486 cri.go:89] found id: ""
	I1030 19:48:11.557229  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.557237  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:11.557252  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:11.557323  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:11.595605  447486 cri.go:89] found id: ""
	I1030 19:48:11.595638  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.595650  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:11.595664  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:11.595732  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:11.634253  447486 cri.go:89] found id: ""
	I1030 19:48:11.634281  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.634295  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:11.634301  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:11.634358  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:11.671138  447486 cri.go:89] found id: ""
	I1030 19:48:11.671167  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.671176  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:11.671183  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:11.671238  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:11.707202  447486 cri.go:89] found id: ""
	I1030 19:48:11.707228  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.707237  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:11.707243  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:11.707302  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:11.745514  447486 cri.go:89] found id: ""
	I1030 19:48:11.745549  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.745561  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:11.745570  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:11.745640  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:11.781403  447486 cri.go:89] found id: ""
	I1030 19:48:11.781438  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.781449  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:11.781458  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:11.781471  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:10.486349  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:12.980881  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:09.440261  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:11.938439  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:11.332911  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:13.334382  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:11.832934  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:11.832972  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:11.853498  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:11.853545  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:11.949365  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:11.949389  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:11.949405  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:12.033776  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:12.033823  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:14.579536  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:14.593497  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:14.593579  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:14.627853  447486 cri.go:89] found id: ""
	I1030 19:48:14.627886  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.627895  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:14.627902  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:14.627953  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:14.662356  447486 cri.go:89] found id: ""
	I1030 19:48:14.662386  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.662398  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:14.662406  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:14.662481  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:14.699334  447486 cri.go:89] found id: ""
	I1030 19:48:14.699370  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.699382  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:14.699390  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:14.699457  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:14.733884  447486 cri.go:89] found id: ""
	I1030 19:48:14.733924  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.733937  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:14.733946  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:14.734025  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:14.775208  447486 cri.go:89] found id: ""
	I1030 19:48:14.775240  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.775249  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:14.775256  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:14.775315  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:14.809663  447486 cri.go:89] found id: ""
	I1030 19:48:14.809695  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.809704  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:14.809711  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:14.809778  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:14.844963  447486 cri.go:89] found id: ""
	I1030 19:48:14.844996  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.845006  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:14.845014  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:14.845084  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:14.881236  447486 cri.go:89] found id: ""
	I1030 19:48:14.881273  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.881283  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:14.881293  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:14.881305  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:14.933792  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:14.933830  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:14.948038  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:14.948065  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:15.023497  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:15.023519  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:15.023532  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:15.105682  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:15.105741  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:14.980949  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:16.981063  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:13.940399  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:16.438545  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:15.834158  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:18.332452  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:17.646238  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:17.665366  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:17.665455  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:17.707729  447486 cri.go:89] found id: ""
	I1030 19:48:17.707783  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.707796  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:17.707805  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:17.707883  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:17.759922  447486 cri.go:89] found id: ""
	I1030 19:48:17.759959  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.759972  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:17.759980  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:17.760049  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:17.807635  447486 cri.go:89] found id: ""
	I1030 19:48:17.807671  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.807683  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:17.807695  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:17.807770  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:17.844205  447486 cri.go:89] found id: ""
	I1030 19:48:17.844236  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.844247  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:17.844255  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:17.844326  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:17.879079  447486 cri.go:89] found id: ""
	I1030 19:48:17.879113  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.879125  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:17.879134  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:17.879202  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:17.916548  447486 cri.go:89] found id: ""
	I1030 19:48:17.916584  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.916594  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:17.916601  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:17.916654  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:17.950597  447486 cri.go:89] found id: ""
	I1030 19:48:17.950626  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.950635  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:17.950640  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:17.950695  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:17.985924  447486 cri.go:89] found id: ""
	I1030 19:48:17.985957  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.985968  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:17.985980  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:17.985996  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:18.066211  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:18.066250  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:18.107228  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:18.107279  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:18.157508  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:18.157543  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:18.172208  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:18.172243  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:18.248100  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:20.748681  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:20.763369  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:20.763445  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:20.804288  447486 cri.go:89] found id: ""
	I1030 19:48:20.804323  447486 logs.go:282] 0 containers: []
	W1030 19:48:20.804336  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:20.804343  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:20.804410  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:20.838925  447486 cri.go:89] found id: ""
	I1030 19:48:20.838964  447486 logs.go:282] 0 containers: []
	W1030 19:48:20.838973  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:20.838979  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:20.839030  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:20.873560  447486 cri.go:89] found id: ""
	I1030 19:48:20.873596  447486 logs.go:282] 0 containers: []
	W1030 19:48:20.873608  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:20.873617  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:20.873681  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:20.908670  447486 cri.go:89] found id: ""
	I1030 19:48:20.908705  447486 logs.go:282] 0 containers: []
	W1030 19:48:20.908716  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:20.908723  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:20.908791  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:20.945901  447486 cri.go:89] found id: ""
	I1030 19:48:20.945929  447486 logs.go:282] 0 containers: []
	W1030 19:48:20.945937  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:20.945943  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:20.945991  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:20.980184  447486 cri.go:89] found id: ""
	I1030 19:48:20.980216  447486 logs.go:282] 0 containers: []
	W1030 19:48:20.980227  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:20.980236  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:20.980299  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:21.024243  447486 cri.go:89] found id: ""
	I1030 19:48:21.024272  447486 logs.go:282] 0 containers: []
	W1030 19:48:21.024284  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:21.024293  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:21.024366  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:21.063315  447486 cri.go:89] found id: ""
	I1030 19:48:21.063348  447486 logs.go:282] 0 containers: []
	W1030 19:48:21.063358  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:21.063370  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:21.063387  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:21.130434  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:21.130463  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:21.130480  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:21.209067  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:21.209107  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:21.251005  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:21.251035  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:21.303365  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:21.303402  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:18.981952  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:20.982372  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:18.439921  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:20.939869  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:22.940058  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:20.333700  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:22.833845  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:24.834560  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:23.817700  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:23.831060  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:23.831133  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:23.864299  447486 cri.go:89] found id: ""
	I1030 19:48:23.864334  447486 logs.go:282] 0 containers: []
	W1030 19:48:23.864346  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:23.864354  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:23.864420  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:23.900815  447486 cri.go:89] found id: ""
	I1030 19:48:23.900844  447486 logs.go:282] 0 containers: []
	W1030 19:48:23.900854  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:23.900869  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:23.900929  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:23.939888  447486 cri.go:89] found id: ""
	I1030 19:48:23.939917  447486 logs.go:282] 0 containers: []
	W1030 19:48:23.939928  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:23.939936  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:23.939999  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:23.975359  447486 cri.go:89] found id: ""
	I1030 19:48:23.975387  447486 logs.go:282] 0 containers: []
	W1030 19:48:23.975395  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:23.975401  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:23.975452  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:24.012779  447486 cri.go:89] found id: ""
	I1030 19:48:24.012819  447486 logs.go:282] 0 containers: []
	W1030 19:48:24.012832  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:24.012840  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:24.012908  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:24.048853  447486 cri.go:89] found id: ""
	I1030 19:48:24.048890  447486 logs.go:282] 0 containers: []
	W1030 19:48:24.048903  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:24.048912  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:24.048979  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:24.084744  447486 cri.go:89] found id: ""
	I1030 19:48:24.084784  447486 logs.go:282] 0 containers: []
	W1030 19:48:24.084797  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:24.084806  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:24.084860  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:24.121719  447486 cri.go:89] found id: ""
	I1030 19:48:24.121757  447486 logs.go:282] 0 containers: []
	W1030 19:48:24.121767  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:24.121777  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:24.121791  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:24.178691  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:24.178733  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:24.192885  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:24.192916  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:24.268771  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:24.268815  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:24.268832  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:24.349663  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:24.349699  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:23.481516  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:25.481700  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:27.481886  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:24.940106  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:26.940309  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:27.334165  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:29.834162  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:26.887325  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:26.900480  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:26.900558  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:26.936157  447486 cri.go:89] found id: ""
	I1030 19:48:26.936188  447486 logs.go:282] 0 containers: []
	W1030 19:48:26.936200  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:26.936207  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:26.936278  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:26.975580  447486 cri.go:89] found id: ""
	I1030 19:48:26.975615  447486 logs.go:282] 0 containers: []
	W1030 19:48:26.975626  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:26.975633  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:26.975705  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:27.010549  447486 cri.go:89] found id: ""
	I1030 19:48:27.010579  447486 logs.go:282] 0 containers: []
	W1030 19:48:27.010592  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:27.010600  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:27.010659  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:27.047505  447486 cri.go:89] found id: ""
	I1030 19:48:27.047541  447486 logs.go:282] 0 containers: []
	W1030 19:48:27.047553  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:27.047561  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:27.047628  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:27.083379  447486 cri.go:89] found id: ""
	I1030 19:48:27.083409  447486 logs.go:282] 0 containers: []
	W1030 19:48:27.083420  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:27.083429  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:27.083492  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:27.117912  447486 cri.go:89] found id: ""
	I1030 19:48:27.117954  447486 logs.go:282] 0 containers: []
	W1030 19:48:27.117967  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:27.117976  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:27.118049  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:27.151721  447486 cri.go:89] found id: ""
	I1030 19:48:27.151749  447486 logs.go:282] 0 containers: []
	W1030 19:48:27.151758  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:27.151765  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:27.151817  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:27.188940  447486 cri.go:89] found id: ""
	I1030 19:48:27.188981  447486 logs.go:282] 0 containers: []
	W1030 19:48:27.188989  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:27.188999  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:27.189011  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:27.243926  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:27.243960  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:27.258702  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:27.258731  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:27.326983  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:27.327023  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:27.327041  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:27.410761  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:27.410808  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:29.953219  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:29.967972  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:29.968078  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:30.003975  447486 cri.go:89] found id: ""
	I1030 19:48:30.004004  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.004014  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:30.004023  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:30.004097  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:30.041732  447486 cri.go:89] found id: ""
	I1030 19:48:30.041768  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.041780  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:30.041787  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:30.041863  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:30.078262  447486 cri.go:89] found id: ""
	I1030 19:48:30.078297  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.078308  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:30.078315  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:30.078379  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:30.116100  447486 cri.go:89] found id: ""
	I1030 19:48:30.116137  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.116149  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:30.116157  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:30.116229  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:30.150925  447486 cri.go:89] found id: ""
	I1030 19:48:30.150953  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.150964  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:30.150972  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:30.151041  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:30.192188  447486 cri.go:89] found id: ""
	I1030 19:48:30.192219  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.192230  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:30.192237  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:30.192314  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:30.231144  447486 cri.go:89] found id: ""
	I1030 19:48:30.231180  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.231192  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:30.231200  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:30.231277  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:30.271198  447486 cri.go:89] found id: ""
	I1030 19:48:30.271228  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.271242  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:30.271265  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:30.271277  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:30.322750  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:30.322792  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:30.337745  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:30.337774  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:30.417198  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:30.417224  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:30.417240  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:30.503327  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:30.503364  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:29.982893  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:32.482051  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:29.440509  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:31.939517  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:32.333571  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:34.833482  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:33.047719  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:33.062330  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:33.062395  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:33.101049  447486 cri.go:89] found id: ""
	I1030 19:48:33.101088  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.101101  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:33.101108  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:33.101175  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:33.135236  447486 cri.go:89] found id: ""
	I1030 19:48:33.135268  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.135279  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:33.135286  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:33.135357  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:33.169279  447486 cri.go:89] found id: ""
	I1030 19:48:33.169314  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.169325  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:33.169333  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:33.169401  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:33.203336  447486 cri.go:89] found id: ""
	I1030 19:48:33.203380  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.203392  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:33.203401  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:33.203470  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:33.238223  447486 cri.go:89] found id: ""
	I1030 19:48:33.238258  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.238270  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:33.238279  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:33.238345  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:33.272891  447486 cri.go:89] found id: ""
	I1030 19:48:33.272925  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.272937  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:33.272946  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:33.273014  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:33.312452  447486 cri.go:89] found id: ""
	I1030 19:48:33.312480  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.312489  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:33.312496  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:33.312547  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:33.349041  447486 cri.go:89] found id: ""
	I1030 19:48:33.349076  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.349091  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:33.349104  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:33.349130  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:33.430888  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:33.430940  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:33.469414  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:33.469444  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:33.518989  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:33.519022  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:33.532656  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:33.532690  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:33.605896  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:36.106207  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:36.120564  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:36.120646  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:36.156854  447486 cri.go:89] found id: ""
	I1030 19:48:36.156887  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.156900  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:36.156909  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:36.156988  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:36.195027  447486 cri.go:89] found id: ""
	I1030 19:48:36.195059  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.195072  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:36.195080  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:36.195150  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:36.235639  447486 cri.go:89] found id: ""
	I1030 19:48:36.235672  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.235683  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:36.235692  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:36.235758  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:36.281659  447486 cri.go:89] found id: ""
	I1030 19:48:36.281693  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.281702  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:36.281709  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:36.281762  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:36.315427  447486 cri.go:89] found id: ""
	I1030 19:48:36.315454  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.315463  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:36.315469  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:36.315531  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:36.353084  447486 cri.go:89] found id: ""
	I1030 19:48:36.353110  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.353120  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:36.353126  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:36.353197  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:36.388497  447486 cri.go:89] found id: ""
	I1030 19:48:36.388533  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.388545  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:36.388553  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:36.388616  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:36.423625  447486 cri.go:89] found id: ""
	I1030 19:48:36.423658  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.423667  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:36.423676  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:36.423691  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:36.476722  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:36.476757  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:36.490669  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:36.490700  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:36.558587  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:36.558621  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:36.558639  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:36.635606  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:36.635654  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:34.482414  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:36.981552  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:34.439796  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:36.938335  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:37.333231  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:39.333707  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:39.174007  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:39.187709  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:39.187786  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:39.226131  447486 cri.go:89] found id: ""
	I1030 19:48:39.226165  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.226177  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:39.226185  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:39.226265  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:39.265963  447486 cri.go:89] found id: ""
	I1030 19:48:39.266003  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.266016  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:39.266024  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:39.266092  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:39.302586  447486 cri.go:89] found id: ""
	I1030 19:48:39.302624  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.302637  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:39.302645  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:39.302710  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:39.347869  447486 cri.go:89] found id: ""
	I1030 19:48:39.347903  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.347916  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:39.347924  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:39.347994  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:39.384252  447486 cri.go:89] found id: ""
	I1030 19:48:39.384280  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.384288  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:39.384294  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:39.384347  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:39.418847  447486 cri.go:89] found id: ""
	I1030 19:48:39.418876  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.418885  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:39.418891  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:39.418950  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:39.458408  447486 cri.go:89] found id: ""
	I1030 19:48:39.458454  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.458467  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:39.458480  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:39.458567  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:39.493889  447486 cri.go:89] found id: ""
	I1030 19:48:39.493923  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.493934  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:39.493946  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:39.493959  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:39.548692  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:39.548746  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:39.562083  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:39.562110  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:39.633822  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:39.633845  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:39.633857  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:39.711765  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:39.711814  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:39.482010  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:41.981380  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:38.939254  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:40.940318  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:41.832456  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:43.832780  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:42.254337  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:42.268137  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:42.268202  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:42.303383  447486 cri.go:89] found id: ""
	I1030 19:48:42.303418  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.303428  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:42.303434  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:42.303501  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:42.349405  447486 cri.go:89] found id: ""
	I1030 19:48:42.349437  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.349447  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:42.349453  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:42.349504  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:42.384317  447486 cri.go:89] found id: ""
	I1030 19:48:42.384353  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.384363  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:42.384369  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:42.384424  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:42.418712  447486 cri.go:89] found id: ""
	I1030 19:48:42.418759  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.418768  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:42.418775  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:42.418833  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:42.454234  447486 cri.go:89] found id: ""
	I1030 19:48:42.454270  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.454280  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:42.454288  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:42.454362  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:42.488813  447486 cri.go:89] found id: ""
	I1030 19:48:42.488845  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.488855  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:42.488863  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:42.488929  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:42.525883  447486 cri.go:89] found id: ""
	I1030 19:48:42.525917  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.525929  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:42.525938  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:42.526006  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:42.561197  447486 cri.go:89] found id: ""
	I1030 19:48:42.561233  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.561246  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:42.561259  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:42.561275  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:42.599818  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:42.599854  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:42.654341  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:42.654382  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:42.668163  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:42.668188  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:42.739630  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:42.739659  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:42.739671  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:45.316154  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:45.330372  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:45.330454  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:45.369093  447486 cri.go:89] found id: ""
	I1030 19:48:45.369125  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.369135  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:45.369141  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:45.369192  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:45.407681  447486 cri.go:89] found id: ""
	I1030 19:48:45.407715  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.407726  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:45.407732  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:45.407787  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:45.444445  447486 cri.go:89] found id: ""
	I1030 19:48:45.444474  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.444482  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:45.444488  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:45.444539  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:45.481538  447486 cri.go:89] found id: ""
	I1030 19:48:45.481570  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.481583  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:45.481591  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:45.481654  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:45.515088  447486 cri.go:89] found id: ""
	I1030 19:48:45.515123  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.515132  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:45.515139  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:45.515195  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:45.550085  447486 cri.go:89] found id: ""
	I1030 19:48:45.550133  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.550145  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:45.550152  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:45.550214  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:45.583950  447486 cri.go:89] found id: ""
	I1030 19:48:45.583985  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.583999  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:45.584008  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:45.584082  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:45.617320  447486 cri.go:89] found id: ""
	I1030 19:48:45.617349  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.617358  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:45.617369  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:45.617389  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:45.668792  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:45.668833  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:45.683144  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:45.683178  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:45.758707  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:45.758732  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:45.758744  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:45.833807  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:45.833837  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:43.982806  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:46.480452  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:43.440702  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:45.938267  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:47.938396  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:45.833319  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:48.332420  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:48.374096  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:48.387812  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:48.387903  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:48.426958  447486 cri.go:89] found id: ""
	I1030 19:48:48.426987  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.426996  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:48.427002  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:48.427051  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:48.462216  447486 cri.go:89] found id: ""
	I1030 19:48:48.462249  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.462260  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:48.462268  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:48.462336  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:48.495666  447486 cri.go:89] found id: ""
	I1030 19:48:48.495699  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.495709  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:48.495716  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:48.495798  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:48.530653  447486 cri.go:89] found id: ""
	I1030 19:48:48.530686  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.530698  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:48.530709  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:48.530777  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:48.564788  447486 cri.go:89] found id: ""
	I1030 19:48:48.564826  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.564838  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:48.564846  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:48.564921  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:48.600735  447486 cri.go:89] found id: ""
	I1030 19:48:48.600772  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.600784  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:48.600793  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:48.600863  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:48.637063  447486 cri.go:89] found id: ""
	I1030 19:48:48.637095  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.637107  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:48.637115  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:48.637182  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:48.673279  447486 cri.go:89] found id: ""
	I1030 19:48:48.673314  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.673334  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:48.673347  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:48.673362  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:48.724239  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:48.724280  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:48.738390  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:48.738425  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:48.812130  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:48.812155  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:48.812171  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:48.896253  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:48.896298  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:51.441155  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:51.454675  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:51.454751  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:51.490464  447486 cri.go:89] found id: ""
	I1030 19:48:51.490511  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.490523  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:51.490532  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:51.490600  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:51.525364  447486 cri.go:89] found id: ""
	I1030 19:48:51.525399  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.525411  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:51.525419  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:51.525485  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:51.559028  447486 cri.go:89] found id: ""
	I1030 19:48:51.559062  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.559071  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:51.559078  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:51.559139  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:51.595188  447486 cri.go:89] found id: ""
	I1030 19:48:51.595217  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.595225  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:51.595231  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:51.595300  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:51.628987  447486 cri.go:89] found id: ""
	I1030 19:48:51.629023  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.629039  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:51.629047  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:51.629119  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:51.663257  447486 cri.go:89] found id: ""
	I1030 19:48:51.663286  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.663295  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:51.663303  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:51.663368  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:51.712562  447486 cri.go:89] found id: ""
	I1030 19:48:51.712600  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.712613  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:51.712622  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:51.712684  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:51.761730  447486 cri.go:89] found id: ""
	I1030 19:48:51.761760  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.761769  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:51.761779  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:51.761794  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:51.775595  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:51.775624  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:48:48.481851  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:50.980723  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:52.982177  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:49.939273  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:51.939972  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:50.333451  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:52.333773  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:54.835087  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	W1030 19:48:51.849120  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:51.849144  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:51.849157  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:51.931364  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:51.931403  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:51.971195  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:51.971229  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:54.525136  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:54.539137  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:54.539227  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:54.574281  447486 cri.go:89] found id: ""
	I1030 19:48:54.574316  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.574339  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:54.574348  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:54.574420  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:54.611109  447486 cri.go:89] found id: ""
	I1030 19:48:54.611149  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.611161  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:54.611170  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:54.611230  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:54.648396  447486 cri.go:89] found id: ""
	I1030 19:48:54.648428  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.648439  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:54.648447  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:54.648510  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:54.683834  447486 cri.go:89] found id: ""
	I1030 19:48:54.683871  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.683884  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:54.683892  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:54.683954  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:54.717391  447486 cri.go:89] found id: ""
	I1030 19:48:54.717421  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.717430  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:54.717436  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:54.717495  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:54.753783  447486 cri.go:89] found id: ""
	I1030 19:48:54.753812  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.753821  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:54.753827  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:54.753878  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:54.788231  447486 cri.go:89] found id: ""
	I1030 19:48:54.788270  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.788282  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:54.788291  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:54.788359  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:54.823949  447486 cri.go:89] found id: ""
	I1030 19:48:54.823989  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.824001  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:54.824014  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:54.824052  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:54.838936  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:54.838967  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:54.911785  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:54.911812  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:54.911825  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:54.993268  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:54.993302  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:55.032557  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:55.032588  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:55.481330  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:57.482183  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:53.940343  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:56.439870  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:57.333262  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:59.333560  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:57.588726  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:57.603010  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:57.603085  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:57.636499  447486 cri.go:89] found id: ""
	I1030 19:48:57.636531  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.636542  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:57.636551  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:57.636624  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:57.671698  447486 cri.go:89] found id: ""
	I1030 19:48:57.671728  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.671739  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:57.671748  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:57.671815  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:57.707387  447486 cri.go:89] found id: ""
	I1030 19:48:57.707414  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.707422  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:57.707431  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:57.707482  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:57.745404  447486 cri.go:89] found id: ""
	I1030 19:48:57.745432  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.745440  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:57.745447  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:57.745507  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:57.784874  447486 cri.go:89] found id: ""
	I1030 19:48:57.784903  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.784912  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:57.784919  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:57.784984  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:57.824663  447486 cri.go:89] found id: ""
	I1030 19:48:57.824697  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.824707  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:57.824713  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:57.824773  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:57.862542  447486 cri.go:89] found id: ""
	I1030 19:48:57.862581  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.862593  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:57.862601  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:57.862669  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:57.897901  447486 cri.go:89] found id: ""
	I1030 19:48:57.897935  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.897947  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:57.897959  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:57.897974  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:57.951898  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:57.951936  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:57.966282  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:57.966327  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:58.035515  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:58.035546  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:58.035562  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:58.114825  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:58.114876  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:00.705537  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:00.719589  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:00.719672  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:00.762299  447486 cri.go:89] found id: ""
	I1030 19:49:00.762330  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.762338  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:00.762356  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:00.762438  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:00.802228  447486 cri.go:89] found id: ""
	I1030 19:49:00.802259  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.802268  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:00.802275  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:00.802345  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:00.836531  447486 cri.go:89] found id: ""
	I1030 19:49:00.836557  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.836565  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:00.836572  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:00.836630  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:00.869332  447486 cri.go:89] found id: ""
	I1030 19:49:00.869360  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.869369  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:00.869375  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:00.869437  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:00.904643  447486 cri.go:89] found id: ""
	I1030 19:49:00.904675  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.904684  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:00.904691  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:00.904768  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:00.939020  447486 cri.go:89] found id: ""
	I1030 19:49:00.939050  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.939061  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:00.939068  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:00.939142  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:00.974586  447486 cri.go:89] found id: ""
	I1030 19:49:00.974625  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.974638  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:00.974646  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:00.974707  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:01.009337  447486 cri.go:89] found id: ""
	I1030 19:49:01.009375  447486 logs.go:282] 0 containers: []
	W1030 19:49:01.009386  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:01.009399  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:01.009416  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:01.067087  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:01.067125  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:01.081681  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:01.081713  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:01.153057  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:01.153082  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:01.153096  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:01.236113  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:01.236153  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:59.981252  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:01.981799  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:58.938430  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:00.940905  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:01.333854  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:03.334325  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:03.774056  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:03.788395  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:03.788482  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:03.823847  447486 cri.go:89] found id: ""
	I1030 19:49:03.823880  447486 logs.go:282] 0 containers: []
	W1030 19:49:03.823892  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:03.823900  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:03.823973  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:03.864776  447486 cri.go:89] found id: ""
	I1030 19:49:03.864807  447486 logs.go:282] 0 containers: []
	W1030 19:49:03.864819  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:03.864827  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:03.864890  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:03.912516  447486 cri.go:89] found id: ""
	I1030 19:49:03.912572  447486 logs.go:282] 0 containers: []
	W1030 19:49:03.912585  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:03.912593  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:03.912660  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:03.962459  447486 cri.go:89] found id: ""
	I1030 19:49:03.962509  447486 logs.go:282] 0 containers: []
	W1030 19:49:03.962521  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:03.962530  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:03.962602  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:04.019107  447486 cri.go:89] found id: ""
	I1030 19:49:04.019143  447486 logs.go:282] 0 containers: []
	W1030 19:49:04.019152  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:04.019159  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:04.019217  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:04.054016  447486 cri.go:89] found id: ""
	I1030 19:49:04.054047  447486 logs.go:282] 0 containers: []
	W1030 19:49:04.054056  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:04.054063  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:04.054140  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:04.089907  447486 cri.go:89] found id: ""
	I1030 19:49:04.089938  447486 logs.go:282] 0 containers: []
	W1030 19:49:04.089948  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:04.089955  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:04.090007  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:04.128081  447486 cri.go:89] found id: ""
	I1030 19:49:04.128110  447486 logs.go:282] 0 containers: []
	W1030 19:49:04.128118  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:04.128128  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:04.128142  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:04.182419  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:04.182462  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:04.196909  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:04.196941  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:04.267267  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:04.267298  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:04.267317  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:04.346826  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:04.346876  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:03.984259  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:06.481362  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:03.438786  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:05.938707  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:07.939642  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:05.334541  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:07.834233  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:06.887266  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:06.902462  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:06.902554  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:06.938850  447486 cri.go:89] found id: ""
	I1030 19:49:06.938880  447486 logs.go:282] 0 containers: []
	W1030 19:49:06.938891  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:06.938899  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:06.938961  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:06.983284  447486 cri.go:89] found id: ""
	I1030 19:49:06.983315  447486 logs.go:282] 0 containers: []
	W1030 19:49:06.983330  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:06.983339  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:06.983406  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:07.016332  447486 cri.go:89] found id: ""
	I1030 19:49:07.016359  447486 logs.go:282] 0 containers: []
	W1030 19:49:07.016369  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:07.016375  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:07.016428  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:07.051425  447486 cri.go:89] found id: ""
	I1030 19:49:07.051459  447486 logs.go:282] 0 containers: []
	W1030 19:49:07.051471  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:07.051480  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:07.051550  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:07.083396  447486 cri.go:89] found id: ""
	I1030 19:49:07.083429  447486 logs.go:282] 0 containers: []
	W1030 19:49:07.083437  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:07.083444  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:07.083507  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:07.116616  447486 cri.go:89] found id: ""
	I1030 19:49:07.116646  447486 logs.go:282] 0 containers: []
	W1030 19:49:07.116654  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:07.116661  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:07.116728  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:07.149219  447486 cri.go:89] found id: ""
	I1030 19:49:07.149251  447486 logs.go:282] 0 containers: []
	W1030 19:49:07.149259  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:07.149265  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:07.149318  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:07.188404  447486 cri.go:89] found id: ""
	I1030 19:49:07.188435  447486 logs.go:282] 0 containers: []
	W1030 19:49:07.188444  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:07.188454  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:07.188468  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:07.247600  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:07.247640  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:07.262196  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:07.262231  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:07.332998  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:07.333031  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:07.333048  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:07.415322  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:07.415367  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:09.958278  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:09.972983  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:09.973068  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:10.016768  447486 cri.go:89] found id: ""
	I1030 19:49:10.016801  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.016810  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:10.016818  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:10.016885  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:10.052958  447486 cri.go:89] found id: ""
	I1030 19:49:10.052992  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.053002  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:10.053009  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:10.053063  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:10.089062  447486 cri.go:89] found id: ""
	I1030 19:49:10.089094  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.089105  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:10.089120  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:10.089196  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:10.126084  447486 cri.go:89] found id: ""
	I1030 19:49:10.126114  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.126123  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:10.126130  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:10.126182  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:10.171670  447486 cri.go:89] found id: ""
	I1030 19:49:10.171702  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.171712  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:10.171720  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:10.171785  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:10.210243  447486 cri.go:89] found id: ""
	I1030 19:49:10.210285  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.210293  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:10.210300  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:10.210366  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:10.253012  447486 cri.go:89] found id: ""
	I1030 19:49:10.253056  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.253069  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:10.253078  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:10.253155  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:10.287948  447486 cri.go:89] found id: ""
	I1030 19:49:10.287999  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.288009  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:10.288021  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:10.288036  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:10.341362  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:10.341405  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:10.355769  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:10.355798  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:10.429469  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:10.429500  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:10.429518  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:10.509812  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:10.509851  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
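The block above is one full pass of the diagnostic loop that repeats for the rest of this test: probe each expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) with crictl, find none, then fall back to gathering kubelet, dmesg, describe-nodes, CRI-O and container-status output. A rough sketch of the per-component probe, assuming crictl is available on the node (illustrative only, not the logs.go/cri.go implementation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns the IDs of containers whose name matches the
// given component, in any state, mirroring the
// "sudo crictl ps -a --quiet --name=<component>" calls in the log above.
func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		ids, err := listContainerIDs(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %v\n", c, ids)
	}
}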
	I1030 19:49:08.488059  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:10.981606  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:12.982128  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:10.438903  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:12.939592  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:10.334087  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:12.336238  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:14.833365  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
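The interleaved pod_ready lines come from three other test processes (446965, 446887 and 446736), each polling until its metrics-server pod reports the Ready condition as True. A minimal sketch of that readiness check using the Kubernetes core/v1 types (a hypothetical helper for illustration, not minikube's pod_ready.go; it requires the k8s.io/api module):

// Package podready sketches the Ready-condition check the log lines above
// are waiting on.
package podready

import (
	corev1 "k8s.io/api/core/v1"
)

// IsPodReady reports whether the pod's Ready condition is True. While it
// stays False, minikube keeps logging the 'has status "Ready":"False"'
// lines seen above.
func IsPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}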
	I1030 19:49:13.053064  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:13.069063  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:13.069136  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:13.108457  447486 cri.go:89] found id: ""
	I1030 19:49:13.108492  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.108505  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:13.108513  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:13.108582  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:13.146481  447486 cri.go:89] found id: ""
	I1030 19:49:13.146523  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.146534  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:13.146542  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:13.146595  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:13.187088  447486 cri.go:89] found id: ""
	I1030 19:49:13.187118  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.187129  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:13.187137  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:13.187200  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:13.226913  447486 cri.go:89] found id: ""
	I1030 19:49:13.226948  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.226960  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:13.226968  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:13.227038  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:13.262632  447486 cri.go:89] found id: ""
	I1030 19:49:13.262661  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.262669  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:13.262676  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:13.262726  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:13.296877  447486 cri.go:89] found id: ""
	I1030 19:49:13.296906  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.296915  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:13.296922  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:13.296983  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:13.334907  447486 cri.go:89] found id: ""
	I1030 19:49:13.334939  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.334949  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:13.334956  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:13.335021  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:13.369386  447486 cri.go:89] found id: ""
	I1030 19:49:13.369430  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.369443  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:13.369456  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:13.369472  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:13.423095  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:13.423130  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:13.437039  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:13.437067  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:13.512619  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:13.512648  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:13.512663  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:13.596982  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:13.597023  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:16.135623  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:16.150407  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:16.150502  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:16.188771  447486 cri.go:89] found id: ""
	I1030 19:49:16.188811  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.188823  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:16.188832  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:16.188907  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:16.221554  447486 cri.go:89] found id: ""
	I1030 19:49:16.221589  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.221598  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:16.221604  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:16.221655  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:16.255567  447486 cri.go:89] found id: ""
	I1030 19:49:16.255595  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.255609  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:16.255616  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:16.255667  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:16.289820  447486 cri.go:89] found id: ""
	I1030 19:49:16.289855  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.289866  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:16.289874  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:16.289935  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:16.324415  447486 cri.go:89] found id: ""
	I1030 19:49:16.324449  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.324464  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:16.324471  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:16.324533  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:16.360789  447486 cri.go:89] found id: ""
	I1030 19:49:16.360825  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.360848  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:16.360856  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:16.360922  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:16.395066  447486 cri.go:89] found id: ""
	I1030 19:49:16.395093  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.395101  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:16.395107  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:16.395158  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:16.429220  447486 cri.go:89] found id: ""
	I1030 19:49:16.429261  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.429273  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:16.429286  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:16.429305  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:16.481209  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:16.481250  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:16.495353  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:16.495383  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:16.563979  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:16.564006  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:16.564022  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:16.645166  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:16.645205  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:15.481438  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:17.482846  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:15.440389  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:17.938724  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:16.833433  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:19.335773  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:19.185478  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:19.199270  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:19.199337  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:19.242426  447486 cri.go:89] found id: ""
	I1030 19:49:19.242455  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.242464  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:19.242474  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:19.242556  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:19.284061  447486 cri.go:89] found id: ""
	I1030 19:49:19.284092  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.284102  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:19.284108  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:19.284178  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:19.317373  447486 cri.go:89] found id: ""
	I1030 19:49:19.317407  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.317420  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:19.317428  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:19.317491  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:19.354222  447486 cri.go:89] found id: ""
	I1030 19:49:19.354250  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.354259  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:19.354267  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:19.354329  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:19.392948  447486 cri.go:89] found id: ""
	I1030 19:49:19.392980  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.392989  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:19.392996  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:19.393053  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:19.438023  447486 cri.go:89] found id: ""
	I1030 19:49:19.438055  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.438066  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:19.438074  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:19.438144  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:19.472179  447486 cri.go:89] found id: ""
	I1030 19:49:19.472208  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.472218  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:19.472226  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:19.472283  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:19.507164  447486 cri.go:89] found id: ""
	I1030 19:49:19.507195  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.507203  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:19.507213  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:19.507226  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:19.520898  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:19.520935  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:19.592204  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:19.592234  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:19.592263  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:19.668994  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:19.669045  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:19.707208  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:19.707240  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:19.981085  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:21.981344  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:19.939994  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:22.439696  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:21.833592  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:24.333379  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:22.263035  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:22.276999  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:22.277089  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:22.310969  447486 cri.go:89] found id: ""
	I1030 19:49:22.311006  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.311017  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:22.311026  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:22.311097  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:22.346282  447486 cri.go:89] found id: ""
	I1030 19:49:22.346311  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.346324  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:22.346332  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:22.346401  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:22.384324  447486 cri.go:89] found id: ""
	I1030 19:49:22.384354  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.384372  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:22.384381  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:22.384441  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:22.419465  447486 cri.go:89] found id: ""
	I1030 19:49:22.419498  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.419509  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:22.419518  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:22.419582  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:22.456161  447486 cri.go:89] found id: ""
	I1030 19:49:22.456196  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.456204  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:22.456211  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:22.456280  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:22.489075  447486 cri.go:89] found id: ""
	I1030 19:49:22.489102  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.489110  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:22.489119  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:22.489181  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:22.521752  447486 cri.go:89] found id: ""
	I1030 19:49:22.521780  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.521789  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:22.521796  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:22.521847  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:22.554946  447486 cri.go:89] found id: ""
	I1030 19:49:22.554985  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.554997  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:22.555010  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:22.555025  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:22.567877  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:22.567909  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:22.640062  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:22.640094  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:22.640110  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:22.714946  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:22.714985  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:22.755560  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:22.755595  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:25.306379  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:25.320883  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:25.320963  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:25.356737  447486 cri.go:89] found id: ""
	I1030 19:49:25.356771  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.356782  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:25.356791  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:25.356856  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:25.393371  447486 cri.go:89] found id: ""
	I1030 19:49:25.393409  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.393420  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:25.393429  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:25.393500  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:25.428379  447486 cri.go:89] found id: ""
	I1030 19:49:25.428411  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.428425  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:25.428433  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:25.428505  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:25.473516  447486 cri.go:89] found id: ""
	I1030 19:49:25.473551  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.473562  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:25.473572  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:25.473649  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:25.512508  447486 cri.go:89] found id: ""
	I1030 19:49:25.512535  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.512544  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:25.512550  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:25.512611  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:25.547646  447486 cri.go:89] found id: ""
	I1030 19:49:25.547691  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.547705  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:25.547713  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:25.547782  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:25.582314  447486 cri.go:89] found id: ""
	I1030 19:49:25.582347  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.582356  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:25.582364  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:25.582415  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:25.617305  447486 cri.go:89] found id: ""
	I1030 19:49:25.617343  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.617354  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:25.617367  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:25.617383  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:25.658245  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:25.658283  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:25.710559  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:25.710598  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:25.724961  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:25.724995  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:25.796252  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:25.796283  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:25.796300  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:23.984899  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:25.985999  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:24.939599  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:27.440032  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:26.334407  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:28.334588  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:28.374633  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:28.389468  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:28.389549  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:28.425747  447486 cri.go:89] found id: ""
	I1030 19:49:28.425780  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.425792  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:28.425800  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:28.425956  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:28.465221  447486 cri.go:89] found id: ""
	I1030 19:49:28.465258  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.465291  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:28.465303  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:28.465371  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:28.504184  447486 cri.go:89] found id: ""
	I1030 19:49:28.504217  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.504230  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:28.504240  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:28.504295  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:28.536198  447486 cri.go:89] found id: ""
	I1030 19:49:28.536234  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.536247  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:28.536255  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:28.536340  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:28.572194  447486 cri.go:89] found id: ""
	I1030 19:49:28.572228  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.572240  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:28.572248  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:28.572312  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:28.608794  447486 cri.go:89] found id: ""
	I1030 19:49:28.608826  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.608838  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:28.608846  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:28.608914  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:28.641664  447486 cri.go:89] found id: ""
	I1030 19:49:28.641698  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.641706  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:28.641714  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:28.641768  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:28.675756  447486 cri.go:89] found id: ""
	I1030 19:49:28.675790  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.675800  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:28.675812  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:28.675829  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:28.690203  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:28.690237  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:28.755647  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:28.755674  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:28.755690  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:28.837116  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:28.837149  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:28.877195  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:28.877232  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:31.428091  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:31.442537  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:31.442619  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:31.479911  447486 cri.go:89] found id: ""
	I1030 19:49:31.479942  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.479953  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:31.479961  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:31.480029  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:31.517015  447486 cri.go:89] found id: ""
	I1030 19:49:31.517042  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.517050  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:31.517056  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:31.517107  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:31.549858  447486 cri.go:89] found id: ""
	I1030 19:49:31.549891  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.549900  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:31.549907  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:31.549971  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:31.583490  447486 cri.go:89] found id: ""
	I1030 19:49:31.583524  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.583536  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:31.583551  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:31.583618  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:31.618270  447486 cri.go:89] found id: ""
	I1030 19:49:31.618308  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.618320  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:31.618328  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:31.618397  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:31.655416  447486 cri.go:89] found id: ""
	I1030 19:49:31.655448  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.655460  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:31.655468  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:31.655530  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:31.689708  447486 cri.go:89] found id: ""
	I1030 19:49:31.689740  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.689751  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:31.689759  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:31.689823  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:31.724179  447486 cri.go:89] found id: ""
	I1030 19:49:31.724208  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.724219  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:31.724233  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:31.724249  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:31.774900  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:31.774939  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:31.788606  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:31.788635  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:49:28.481673  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:30.980999  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:32.982429  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:29.938506  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:31.940276  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:30.834322  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:33.333091  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	W1030 19:49:31.861360  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:31.861385  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:31.861398  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:31.935856  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:31.935896  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:34.477313  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:34.491530  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:34.491597  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:34.525105  447486 cri.go:89] found id: ""
	I1030 19:49:34.525136  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.525145  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:34.525153  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:34.525215  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:34.560449  447486 cri.go:89] found id: ""
	I1030 19:49:34.560483  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.560495  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:34.560503  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:34.560558  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:34.595278  447486 cri.go:89] found id: ""
	I1030 19:49:34.595325  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.595335  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:34.595342  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:34.595395  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:34.628486  447486 cri.go:89] found id: ""
	I1030 19:49:34.628521  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.628533  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:34.628542  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:34.628614  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:34.663410  447486 cri.go:89] found id: ""
	I1030 19:49:34.663438  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.663448  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:34.663456  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:34.663520  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:34.697053  447486 cri.go:89] found id: ""
	I1030 19:49:34.697086  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.697099  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:34.697107  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:34.697178  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:34.730910  447486 cri.go:89] found id: ""
	I1030 19:49:34.730943  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.730955  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:34.730963  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:34.731034  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:34.765725  447486 cri.go:89] found id: ""
	I1030 19:49:34.765762  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.765774  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:34.765786  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:34.765807  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:34.802750  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:34.802786  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:34.853576  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:34.853614  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:34.868102  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:34.868139  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:34.939985  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:34.940015  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:34.940027  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:35.480658  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:37.481068  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:34.442576  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:36.940088  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:35.333400  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:37.334425  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:39.833330  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:37.516479  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:37.529386  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:37.529453  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:37.565889  447486 cri.go:89] found id: ""
	I1030 19:49:37.565923  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.565936  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:37.565945  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:37.566007  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:37.598771  447486 cri.go:89] found id: ""
	I1030 19:49:37.598801  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.598811  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:37.598817  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:37.598869  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:37.632678  447486 cri.go:89] found id: ""
	I1030 19:49:37.632705  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.632714  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:37.632735  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:37.632795  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:37.666642  447486 cri.go:89] found id: ""
	I1030 19:49:37.666673  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.666682  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:37.666688  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:37.666748  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:37.701203  447486 cri.go:89] found id: ""
	I1030 19:49:37.701233  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.701242  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:37.701249  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:37.701324  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:37.735614  447486 cri.go:89] found id: ""
	I1030 19:49:37.735649  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.735661  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:37.735669  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:37.735738  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:37.771381  447486 cri.go:89] found id: ""
	I1030 19:49:37.771418  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.771430  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:37.771439  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:37.771501  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:37.807870  447486 cri.go:89] found id: ""
	I1030 19:49:37.807908  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.807922  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:37.807935  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:37.807952  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:37.860334  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:37.860367  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:37.874340  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:37.874371  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:37.952874  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:37.952903  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:37.952916  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:38.045318  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:38.045356  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:40.591278  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:40.604970  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:40.605050  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:40.639839  447486 cri.go:89] found id: ""
	I1030 19:49:40.639869  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.639880  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:40.639889  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:40.639952  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:40.674046  447486 cri.go:89] found id: ""
	I1030 19:49:40.674077  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.674087  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:40.674093  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:40.674164  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:40.710759  447486 cri.go:89] found id: ""
	I1030 19:49:40.710794  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.710806  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:40.710815  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:40.710880  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:40.752439  447486 cri.go:89] found id: ""
	I1030 19:49:40.752471  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.752484  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:40.752493  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:40.752548  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:40.787985  447486 cri.go:89] found id: ""
	I1030 19:49:40.788021  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.788034  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:40.788042  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:40.788102  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:40.829282  447486 cri.go:89] found id: ""
	I1030 19:49:40.829320  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.829333  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:40.829341  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:40.829409  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:40.863911  447486 cri.go:89] found id: ""
	I1030 19:49:40.863944  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.863953  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:40.863959  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:40.864026  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:40.901239  447486 cri.go:89] found id: ""
	I1030 19:49:40.901275  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.901287  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:40.901300  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:40.901321  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:40.955283  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:40.955323  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:40.968733  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:40.968766  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:41.040213  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:41.040242  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:41.040256  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:41.125992  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:41.126035  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:39.481593  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:41.483403  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:39.441009  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:41.939182  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:41.834082  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:44.332428  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:43.667949  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:43.681633  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:43.681705  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:43.725038  447486 cri.go:89] found id: ""
	I1030 19:49:43.725076  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.725085  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:43.725091  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:43.725149  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:43.761438  447486 cri.go:89] found id: ""
	I1030 19:49:43.761473  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.761486  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:43.761494  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:43.761566  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:43.795299  447486 cri.go:89] found id: ""
	I1030 19:49:43.795335  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.795347  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:43.795355  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:43.795431  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:43.830545  447486 cri.go:89] found id: ""
	I1030 19:49:43.830582  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.830594  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:43.830601  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:43.830670  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:43.867632  447486 cri.go:89] found id: ""
	I1030 19:49:43.867664  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.867676  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:43.867684  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:43.867753  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:43.901315  447486 cri.go:89] found id: ""
	I1030 19:49:43.901346  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.901355  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:43.901361  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:43.901412  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:43.934928  447486 cri.go:89] found id: ""
	I1030 19:49:43.934963  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.934975  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:43.934983  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:43.935048  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:43.975407  447486 cri.go:89] found id: ""
	I1030 19:49:43.975441  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.975451  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:43.975472  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:43.975497  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:44.019281  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:44.019310  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:44.072363  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:44.072402  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:44.085508  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:44.085538  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:44.159634  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:44.159666  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:44.159682  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:46.739662  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:46.753190  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:46.753252  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:46.790167  447486 cri.go:89] found id: ""
	I1030 19:49:46.790202  447486 logs.go:282] 0 containers: []
	W1030 19:49:46.790211  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:46.790217  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:46.790272  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:43.988689  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:46.481139  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:43.939246  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:46.438847  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:46.333066  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:48.335463  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:46.828187  447486 cri.go:89] found id: ""
	I1030 19:49:46.828221  447486 logs.go:282] 0 containers: []
	W1030 19:49:46.828230  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:46.828237  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:46.828305  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:46.865499  447486 cri.go:89] found id: ""
	I1030 19:49:46.865539  447486 logs.go:282] 0 containers: []
	W1030 19:49:46.865551  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:46.865559  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:46.865612  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:46.899591  447486 cri.go:89] found id: ""
	I1030 19:49:46.899616  447486 logs.go:282] 0 containers: []
	W1030 19:49:46.899625  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:46.899632  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:46.899681  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:46.934818  447486 cri.go:89] found id: ""
	I1030 19:49:46.934850  447486 logs.go:282] 0 containers: []
	W1030 19:49:46.934860  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:46.934868  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:46.934933  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:46.971298  447486 cri.go:89] found id: ""
	I1030 19:49:46.971328  447486 logs.go:282] 0 containers: []
	W1030 19:49:46.971340  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:46.971349  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:46.971418  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:47.010783  447486 cri.go:89] found id: ""
	I1030 19:49:47.010814  447486 logs.go:282] 0 containers: []
	W1030 19:49:47.010825  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:47.010832  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:47.010896  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:47.044343  447486 cri.go:89] found id: ""
	I1030 19:49:47.044380  447486 logs.go:282] 0 containers: []
	W1030 19:49:47.044392  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:47.044405  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:47.044421  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:47.094425  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:47.094459  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:47.110339  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:47.110368  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:47.183262  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:47.183290  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:47.183305  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:47.262611  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:47.262651  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:49.808195  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:49.821889  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:49.821963  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:49.857296  447486 cri.go:89] found id: ""
	I1030 19:49:49.857339  447486 logs.go:282] 0 containers: []
	W1030 19:49:49.857351  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:49.857359  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:49.857413  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:49.892614  447486 cri.go:89] found id: ""
	I1030 19:49:49.892648  447486 logs.go:282] 0 containers: []
	W1030 19:49:49.892660  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:49.892668  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:49.892732  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:49.929835  447486 cri.go:89] found id: ""
	I1030 19:49:49.929862  447486 logs.go:282] 0 containers: []
	W1030 19:49:49.929871  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:49.929878  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:49.929940  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:49.965341  447486 cri.go:89] found id: ""
	I1030 19:49:49.965371  447486 logs.go:282] 0 containers: []
	W1030 19:49:49.965379  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:49.965392  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:49.965449  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:50.000134  447486 cri.go:89] found id: ""
	I1030 19:49:50.000165  447486 logs.go:282] 0 containers: []
	W1030 19:49:50.000177  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:50.000188  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:50.000259  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:50.033848  447486 cri.go:89] found id: ""
	I1030 19:49:50.033876  447486 logs.go:282] 0 containers: []
	W1030 19:49:50.033885  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:50.033891  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:50.033943  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:50.073315  447486 cri.go:89] found id: ""
	I1030 19:49:50.073344  447486 logs.go:282] 0 containers: []
	W1030 19:49:50.073354  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:50.073360  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:50.073421  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:50.114232  447486 cri.go:89] found id: ""
	I1030 19:49:50.114266  447486 logs.go:282] 0 containers: []
	W1030 19:49:50.114277  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:50.114290  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:50.114311  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:50.185407  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:50.185434  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:50.185448  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:50.270447  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:50.270494  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:50.308825  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:50.308855  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:50.363376  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:50.363417  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:48.982027  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:51.482972  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:48.439801  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:50.939120  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:50.833062  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:52.833132  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:54.834352  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:52.878475  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:52.892013  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:52.892088  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:52.928085  447486 cri.go:89] found id: ""
	I1030 19:49:52.928117  447486 logs.go:282] 0 containers: []
	W1030 19:49:52.928126  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:52.928132  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:52.928185  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:52.963377  447486 cri.go:89] found id: ""
	I1030 19:49:52.963413  447486 logs.go:282] 0 containers: []
	W1030 19:49:52.963426  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:52.963434  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:52.963493  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:53.000799  447486 cri.go:89] found id: ""
	I1030 19:49:53.000825  447486 logs.go:282] 0 containers: []
	W1030 19:49:53.000834  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:53.000840  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:53.000912  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:53.037429  447486 cri.go:89] found id: ""
	I1030 19:49:53.037463  447486 logs.go:282] 0 containers: []
	W1030 19:49:53.037472  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:53.037478  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:53.037534  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:53.072392  447486 cri.go:89] found id: ""
	I1030 19:49:53.072425  447486 logs.go:282] 0 containers: []
	W1030 19:49:53.072433  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:53.072446  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:53.072520  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:53.108925  447486 cri.go:89] found id: ""
	I1030 19:49:53.108957  447486 logs.go:282] 0 containers: []
	W1030 19:49:53.108970  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:53.108978  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:53.109050  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:53.145409  447486 cri.go:89] found id: ""
	I1030 19:49:53.145445  447486 logs.go:282] 0 containers: []
	W1030 19:49:53.145457  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:53.145466  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:53.145536  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:53.180756  447486 cri.go:89] found id: ""
	I1030 19:49:53.180784  447486 logs.go:282] 0 containers: []
	W1030 19:49:53.180793  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:53.180803  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:53.180817  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:53.234960  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:53.235010  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:53.249224  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:53.249255  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:53.313223  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:53.313245  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:53.313264  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:53.399715  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:53.399758  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:55.944332  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:55.961546  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:55.961616  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:56.020603  447486 cri.go:89] found id: ""
	I1030 19:49:56.020634  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.020647  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:56.020654  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:56.020725  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:56.065134  447486 cri.go:89] found id: ""
	I1030 19:49:56.065162  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.065170  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:56.065176  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:56.065239  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:56.101358  447486 cri.go:89] found id: ""
	I1030 19:49:56.101386  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.101396  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:56.101405  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:56.101473  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:56.135762  447486 cri.go:89] found id: ""
	I1030 19:49:56.135795  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.135805  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:56.135811  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:56.135863  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:56.171336  447486 cri.go:89] found id: ""
	I1030 19:49:56.171371  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.171383  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:56.171391  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:56.171461  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:56.205643  447486 cri.go:89] found id: ""
	I1030 19:49:56.205674  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.205685  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:56.205693  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:56.205759  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:56.240853  447486 cri.go:89] found id: ""
	I1030 19:49:56.240885  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.240894  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:56.240901  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:56.240973  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:56.276577  447486 cri.go:89] found id: ""
	I1030 19:49:56.276612  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.276623  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:56.276636  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:56.276651  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:56.328180  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:56.328220  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:56.341895  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:56.341923  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:56.414492  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:56.414523  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:56.414540  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:56.498439  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:56.498498  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:53.980916  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:55.983077  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:53.439070  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:54.940107  446887 pod_ready.go:82] duration metric: took 4m0.007533629s for pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace to be "Ready" ...
	E1030 19:49:54.940137  446887 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1030 19:49:54.940149  446887 pod_ready.go:39] duration metric: took 4m6.552777198s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:49:54.940170  446887 api_server.go:52] waiting for apiserver process to appear ...
	I1030 19:49:54.940206  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:54.940264  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:54.992682  446887 cri.go:89] found id: "549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f"
	I1030 19:49:54.992715  446887 cri.go:89] found id: ""
	I1030 19:49:54.992727  446887 logs.go:282] 1 containers: [549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f]
	I1030 19:49:54.992790  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:54.997251  446887 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:54.997313  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:55.034504  446887 cri.go:89] found id: "a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5"
	I1030 19:49:55.034542  446887 cri.go:89] found id: ""
	I1030 19:49:55.034552  446887 logs.go:282] 1 containers: [a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5]
	I1030 19:49:55.034616  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:55.039551  446887 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:55.039624  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:55.083294  446887 cri.go:89] found id: "87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2"
	I1030 19:49:55.083326  446887 cri.go:89] found id: ""
	I1030 19:49:55.083336  446887 logs.go:282] 1 containers: [87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2]
	I1030 19:49:55.083407  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:55.087866  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:55.087932  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:55.125250  446887 cri.go:89] found id: "0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf"
	I1030 19:49:55.125353  446887 cri.go:89] found id: ""
	I1030 19:49:55.125372  446887 logs.go:282] 1 containers: [0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf]
	I1030 19:49:55.125446  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:55.130688  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:55.130747  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:55.168792  446887 cri.go:89] found id: "2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6"
	I1030 19:49:55.168814  446887 cri.go:89] found id: ""
	I1030 19:49:55.168822  446887 logs.go:282] 1 containers: [2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6]
	I1030 19:49:55.168877  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:55.173360  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:55.173424  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:55.209566  446887 cri.go:89] found id: "ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34"
	I1030 19:49:55.209590  446887 cri.go:89] found id: ""
	I1030 19:49:55.209599  446887 logs.go:282] 1 containers: [ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34]
	I1030 19:49:55.209659  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:55.214190  446887 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:55.214263  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:55.257056  446887 cri.go:89] found id: ""
	I1030 19:49:55.257091  446887 logs.go:282] 0 containers: []
	W1030 19:49:55.257103  446887 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:55.257111  446887 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1030 19:49:55.257165  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1030 19:49:55.300194  446887 cri.go:89] found id: "60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034"
	I1030 19:49:55.300224  446887 cri.go:89] found id: "8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6"
	I1030 19:49:55.300229  446887 cri.go:89] found id: ""
	I1030 19:49:55.300238  446887 logs.go:282] 2 containers: [60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034 8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6]
	I1030 19:49:55.300290  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:55.304750  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:55.309249  446887 logs.go:123] Gathering logs for etcd [a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5] ...
	I1030 19:49:55.309276  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5"
	I1030 19:49:55.363959  446887 logs.go:123] Gathering logs for coredns [87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2] ...
	I1030 19:49:55.363994  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2"
	I1030 19:49:55.412667  446887 logs.go:123] Gathering logs for kube-scheduler [0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf] ...
	I1030 19:49:55.412703  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf"
	I1030 19:49:55.455381  446887 logs.go:123] Gathering logs for kube-proxy [2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6] ...
	I1030 19:49:55.455420  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6"
	I1030 19:49:55.494657  446887 logs.go:123] Gathering logs for kube-controller-manager [ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34] ...
	I1030 19:49:55.494689  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34"
	I1030 19:49:55.552740  446887 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:55.552773  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:55.627724  446887 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:55.627765  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:55.642263  446887 logs.go:123] Gathering logs for kube-apiserver [549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f] ...
	I1030 19:49:55.642300  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f"
	I1030 19:49:55.691079  446887 logs.go:123] Gathering logs for storage-provisioner [60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034] ...
	I1030 19:49:55.691111  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034"
	I1030 19:49:55.730111  446887 logs.go:123] Gathering logs for container status ...
	I1030 19:49:55.730151  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:55.785155  446887 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:55.785189  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:49:55.924592  446887 logs.go:123] Gathering logs for storage-provisioner [8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6] ...
	I1030 19:49:55.924633  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6"
	I1030 19:49:55.970229  446887 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:55.970267  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:57.333378  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:59.334394  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:59.039071  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:59.053648  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:59.053722  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:59.097620  447486 cri.go:89] found id: ""
	I1030 19:49:59.097650  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.097661  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:59.097669  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:59.097738  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:59.139136  447486 cri.go:89] found id: ""
	I1030 19:49:59.139176  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.139188  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:59.139199  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:59.139270  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:59.180322  447486 cri.go:89] found id: ""
	I1030 19:49:59.180361  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.180371  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:59.180384  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:59.180453  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:59.217374  447486 cri.go:89] found id: ""
	I1030 19:49:59.217422  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.217434  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:59.217443  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:59.217498  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:59.257857  447486 cri.go:89] found id: ""
	I1030 19:49:59.257884  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.257894  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:59.257901  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:59.257968  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:59.297679  447486 cri.go:89] found id: ""
	I1030 19:49:59.297713  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.297724  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:59.297733  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:59.297795  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:59.341469  447486 cri.go:89] found id: ""
	I1030 19:49:59.341499  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.341509  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:59.341517  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:59.341587  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:59.381677  447486 cri.go:89] found id: ""
	I1030 19:49:59.381704  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.381713  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:59.381723  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:59.381735  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:59.441396  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:59.441428  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:59.457105  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:59.457139  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:59.532023  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:59.532051  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:59.532064  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:59.621685  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:59.621720  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:58.481425  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:00.481912  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:02.482130  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:59.010542  446887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:59.027463  446887 api_server.go:72] duration metric: took 4m17.923507495s to wait for apiserver process to appear ...
	I1030 19:49:59.027488  446887 api_server.go:88] waiting for apiserver healthz status ...
	I1030 19:49:59.027524  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:59.027571  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:59.066364  446887 cri.go:89] found id: "549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f"
	I1030 19:49:59.066391  446887 cri.go:89] found id: ""
	I1030 19:49:59.066401  446887 logs.go:282] 1 containers: [549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f]
	I1030 19:49:59.066463  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.072454  446887 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:59.072535  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:59.118043  446887 cri.go:89] found id: "a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5"
	I1030 19:49:59.118072  446887 cri.go:89] found id: ""
	I1030 19:49:59.118081  446887 logs.go:282] 1 containers: [a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5]
	I1030 19:49:59.118142  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.122806  446887 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:59.122883  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:59.167475  446887 cri.go:89] found id: "87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2"
	I1030 19:49:59.167500  446887 cri.go:89] found id: ""
	I1030 19:49:59.167511  446887 logs.go:282] 1 containers: [87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2]
	I1030 19:49:59.167577  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.172181  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:59.172255  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:59.210384  446887 cri.go:89] found id: "0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf"
	I1030 19:49:59.210411  446887 cri.go:89] found id: ""
	I1030 19:49:59.210419  446887 logs.go:282] 1 containers: [0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf]
	I1030 19:49:59.210473  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.216032  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:59.216114  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:59.269770  446887 cri.go:89] found id: "2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6"
	I1030 19:49:59.269791  446887 cri.go:89] found id: ""
	I1030 19:49:59.269799  446887 logs.go:282] 1 containers: [2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6]
	I1030 19:49:59.269851  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.274161  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:59.274239  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:59.313907  446887 cri.go:89] found id: "ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34"
	I1030 19:49:59.313936  446887 cri.go:89] found id: ""
	I1030 19:49:59.313946  446887 logs.go:282] 1 containers: [ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34]
	I1030 19:49:59.314019  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.320687  446887 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:59.320766  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:59.367710  446887 cri.go:89] found id: ""
	I1030 19:49:59.367740  446887 logs.go:282] 0 containers: []
	W1030 19:49:59.367752  446887 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:59.367759  446887 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1030 19:49:59.367826  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1030 19:49:59.422716  446887 cri.go:89] found id: "60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034"
	I1030 19:49:59.422744  446887 cri.go:89] found id: "8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6"
	I1030 19:49:59.422750  446887 cri.go:89] found id: ""
	I1030 19:49:59.422763  446887 logs.go:282] 2 containers: [60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034 8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6]
	I1030 19:49:59.422827  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.428399  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.432404  446887 logs.go:123] Gathering logs for storage-provisioner [8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6] ...
	I1030 19:49:59.432429  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6"
	I1030 19:49:59.475798  446887 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:59.475839  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:59.548960  446887 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:59.548998  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:59.566839  446887 logs.go:123] Gathering logs for kube-proxy [2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6] ...
	I1030 19:49:59.566870  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6"
	I1030 19:49:59.606181  446887 logs.go:123] Gathering logs for kube-controller-manager [ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34] ...
	I1030 19:49:59.606210  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34"
	I1030 19:49:59.670134  446887 logs.go:123] Gathering logs for storage-provisioner [60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034] ...
	I1030 19:49:59.670170  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034"
	I1030 19:49:59.709224  446887 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:59.709253  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:00.132147  446887 logs.go:123] Gathering logs for container status ...
	I1030 19:50:00.132194  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:50:00.181124  446887 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:00.181171  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:50:00.306545  446887 logs.go:123] Gathering logs for kube-apiserver [549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f] ...
	I1030 19:50:00.306585  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f"
	I1030 19:50:00.352129  446887 logs.go:123] Gathering logs for etcd [a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5] ...
	I1030 19:50:00.352169  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5"
	I1030 19:50:00.398083  446887 logs.go:123] Gathering logs for coredns [87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2] ...
	I1030 19:50:00.398119  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2"
	I1030 19:50:00.439813  446887 logs.go:123] Gathering logs for kube-scheduler [0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf] ...
	I1030 19:50:00.439851  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf"
	I1030 19:50:02.978477  446887 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8444/healthz ...
	I1030 19:50:02.983776  446887 api_server.go:279] https://192.168.39.92:8444/healthz returned 200:
	ok
	I1030 19:50:02.984791  446887 api_server.go:141] control plane version: v1.31.2
	I1030 19:50:02.984814  446887 api_server.go:131] duration metric: took 3.957319689s to wait for apiserver health ...
	I1030 19:50:02.984822  446887 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 19:50:02.984844  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:50:02.984902  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:50:03.024715  446887 cri.go:89] found id: "549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f"
	I1030 19:50:03.024745  446887 cri.go:89] found id: ""
	I1030 19:50:03.024754  446887 logs.go:282] 1 containers: [549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f]
	I1030 19:50:03.024820  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.029121  446887 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:50:03.029188  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:50:03.064462  446887 cri.go:89] found id: "a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5"
	I1030 19:50:03.064489  446887 cri.go:89] found id: ""
	I1030 19:50:03.064500  446887 logs.go:282] 1 containers: [a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5]
	I1030 19:50:03.064564  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.068587  446887 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:50:03.068665  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:50:03.106880  446887 cri.go:89] found id: "87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2"
	I1030 19:50:03.106902  446887 cri.go:89] found id: ""
	I1030 19:50:03.106910  446887 logs.go:282] 1 containers: [87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2]
	I1030 19:50:03.106978  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.111313  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:50:03.111388  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:50:03.155761  446887 cri.go:89] found id: "0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf"
	I1030 19:50:03.155791  446887 cri.go:89] found id: ""
	I1030 19:50:03.155801  446887 logs.go:282] 1 containers: [0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf]
	I1030 19:50:03.155864  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.160616  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:50:03.160686  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:50:03.199028  446887 cri.go:89] found id: "2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6"
	I1030 19:50:03.199063  446887 cri.go:89] found id: ""
	I1030 19:50:03.199074  446887 logs.go:282] 1 containers: [2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6]
	I1030 19:50:03.199149  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.203348  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:50:03.203414  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:50:03.257739  446887 cri.go:89] found id: "ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34"
	I1030 19:50:03.257769  446887 cri.go:89] found id: ""
	I1030 19:50:03.257780  446887 logs.go:282] 1 containers: [ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34]
	I1030 19:50:03.257845  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.263357  446887 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:50:03.263417  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:50:03.309752  446887 cri.go:89] found id: ""
	I1030 19:50:03.309779  446887 logs.go:282] 0 containers: []
	W1030 19:50:03.309787  446887 logs.go:284] No container was found matching "kindnet"
	I1030 19:50:03.309793  446887 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1030 19:50:03.309843  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1030 19:50:03.351570  446887 cri.go:89] found id: "60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034"
	I1030 19:50:03.351593  446887 cri.go:89] found id: "8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6"
	I1030 19:50:03.351597  446887 cri.go:89] found id: ""
	I1030 19:50:03.351605  446887 logs.go:282] 2 containers: [60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034 8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6]
	I1030 19:50:03.351656  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.364414  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.369070  446887 logs.go:123] Gathering logs for dmesg ...
	I1030 19:50:03.369097  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:50:03.385129  446887 logs.go:123] Gathering logs for kube-apiserver [549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f] ...
	I1030 19:50:03.385161  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f"
	I1030 19:50:01.833117  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:04.334645  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:02.170623  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:50:02.184885  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:50:02.184975  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:50:02.223811  447486 cri.go:89] found id: ""
	I1030 19:50:02.223841  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.223849  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:50:02.223856  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:50:02.223908  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:50:02.260454  447486 cri.go:89] found id: ""
	I1030 19:50:02.260481  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.260491  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:50:02.260497  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:50:02.260554  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:50:02.296542  447486 cri.go:89] found id: ""
	I1030 19:50:02.296569  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.296577  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:50:02.296583  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:50:02.296631  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:50:02.332168  447486 cri.go:89] found id: ""
	I1030 19:50:02.332199  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.332211  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:50:02.332219  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:50:02.332287  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:50:02.366539  447486 cri.go:89] found id: ""
	I1030 19:50:02.366575  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.366586  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:50:02.366595  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:50:02.366659  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:50:02.401859  447486 cri.go:89] found id: ""
	I1030 19:50:02.401894  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.401915  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:50:02.401923  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:50:02.401991  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:50:02.446061  447486 cri.go:89] found id: ""
	I1030 19:50:02.446097  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.446108  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:50:02.446116  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:50:02.446181  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:50:02.488233  447486 cri.go:89] found id: ""
	I1030 19:50:02.488257  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.488265  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:50:02.488274  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:50:02.488294  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:50:02.544517  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:50:02.544554  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:50:02.558143  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:02.558179  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:50:02.628679  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:50:02.628706  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:50:02.628723  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:02.710246  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:50:02.710293  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:50:05.254846  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:50:05.269536  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:50:05.269599  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:50:05.303724  447486 cri.go:89] found id: ""
	I1030 19:50:05.303753  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.303761  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:50:05.303767  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:50:05.303819  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:50:05.339268  447486 cri.go:89] found id: ""
	I1030 19:50:05.339301  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.339322  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:50:05.339330  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:50:05.339405  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:50:05.375892  447486 cri.go:89] found id: ""
	I1030 19:50:05.375923  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.375930  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:50:05.375936  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:50:05.375988  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:50:05.413197  447486 cri.go:89] found id: ""
	I1030 19:50:05.413232  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.413243  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:50:05.413252  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:50:05.413329  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:50:05.452095  447486 cri.go:89] found id: ""
	I1030 19:50:05.452122  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.452130  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:50:05.452137  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:50:05.452193  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:50:05.490694  447486 cri.go:89] found id: ""
	I1030 19:50:05.490731  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.490744  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:50:05.490753  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:50:05.490808  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:50:05.523961  447486 cri.go:89] found id: ""
	I1030 19:50:05.523992  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.524001  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:50:05.524008  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:50:05.524060  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:50:05.558631  447486 cri.go:89] found id: ""
	I1030 19:50:05.558664  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.558673  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:50:05.558684  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:50:05.558699  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:50:05.596929  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:50:05.596958  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:50:05.647294  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:50:05.647332  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:50:05.661349  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:05.661377  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:50:05.730268  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:50:05.730299  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:50:05.730323  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:03.434675  446887 logs.go:123] Gathering logs for coredns [87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2] ...
	I1030 19:50:03.434708  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2"
	I1030 19:50:03.474767  446887 logs.go:123] Gathering logs for storage-provisioner [8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6] ...
	I1030 19:50:03.474803  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6"
	I1030 19:50:03.510301  446887 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:50:03.510331  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:03.887871  446887 logs.go:123] Gathering logs for storage-provisioner [60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034] ...
	I1030 19:50:03.887912  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034"
	I1030 19:50:03.930529  446887 logs.go:123] Gathering logs for container status ...
	I1030 19:50:03.930563  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:50:03.971064  446887 logs.go:123] Gathering logs for kubelet ...
	I1030 19:50:03.971102  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:50:04.040593  446887 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:04.040632  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:50:04.157377  446887 logs.go:123] Gathering logs for etcd [a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5] ...
	I1030 19:50:04.157418  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5"
	I1030 19:50:04.205779  446887 logs.go:123] Gathering logs for kube-scheduler [0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf] ...
	I1030 19:50:04.205816  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf"
	I1030 19:50:04.251434  446887 logs.go:123] Gathering logs for kube-proxy [2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6] ...
	I1030 19:50:04.251470  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6"
	I1030 19:50:04.288713  446887 logs.go:123] Gathering logs for kube-controller-manager [ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34] ...
	I1030 19:50:04.288747  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34"
	I1030 19:50:06.849298  446887 system_pods.go:59] 8 kube-system pods found
	I1030 19:50:06.849329  446887 system_pods.go:61] "coredns-7c65d6cfc9-9w8m8" [d285a845-e17e-4b87-837b-167dbd29f090] Running
	I1030 19:50:06.849334  446887 system_pods.go:61] "etcd-default-k8s-diff-port-768989" [400eedbb-ea1d-47a7-b342-ad29c8851552] Running
	I1030 19:50:06.849340  446887 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-768989" [40389878-99e8-4ae1-899b-a61fc3b7100b] Running
	I1030 19:50:06.849352  446887 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-768989" [4d4ca5b4-d36d-4f69-9712-b7544c4c9a43] Running
	I1030 19:50:06.849358  446887 system_pods.go:61] "kube-proxy-tsr5q" [60ad5830-1638-4209-a2b3-ef78b1df3d34] Running
	I1030 19:50:06.849367  446887 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-768989" [af4c921c-9ea2-4bf3-84c2-8748c3c210bb] Running
	I1030 19:50:06.849373  446887 system_pods.go:61] "metrics-server-6867b74b74-t85rd" [8e162c99-2a94-4340-abe9-f1b312980444] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:50:06.849377  446887 system_pods.go:61] "storage-provisioner" [76805df9-1fbf-468d-a909-3b7c4f889e11] Running
	I1030 19:50:06.849384  446887 system_pods.go:74] duration metric: took 3.864557334s to wait for pod list to return data ...
	I1030 19:50:06.849394  446887 default_sa.go:34] waiting for default service account to be created ...
	I1030 19:50:06.852015  446887 default_sa.go:45] found service account: "default"
	I1030 19:50:06.852037  446887 default_sa.go:55] duration metric: took 2.63686ms for default service account to be created ...
	I1030 19:50:06.852046  446887 system_pods.go:116] waiting for k8s-apps to be running ...
	I1030 19:50:06.856920  446887 system_pods.go:86] 8 kube-system pods found
	I1030 19:50:06.856945  446887 system_pods.go:89] "coredns-7c65d6cfc9-9w8m8" [d285a845-e17e-4b87-837b-167dbd29f090] Running
	I1030 19:50:06.856953  446887 system_pods.go:89] "etcd-default-k8s-diff-port-768989" [400eedbb-ea1d-47a7-b342-ad29c8851552] Running
	I1030 19:50:06.856959  446887 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-768989" [40389878-99e8-4ae1-899b-a61fc3b7100b] Running
	I1030 19:50:06.856966  446887 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-768989" [4d4ca5b4-d36d-4f69-9712-b7544c4c9a43] Running
	I1030 19:50:06.856972  446887 system_pods.go:89] "kube-proxy-tsr5q" [60ad5830-1638-4209-a2b3-ef78b1df3d34] Running
	I1030 19:50:06.856979  446887 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-768989" [af4c921c-9ea2-4bf3-84c2-8748c3c210bb] Running
	I1030 19:50:06.856996  446887 system_pods.go:89] "metrics-server-6867b74b74-t85rd" [8e162c99-2a94-4340-abe9-f1b312980444] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:50:06.857005  446887 system_pods.go:89] "storage-provisioner" [76805df9-1fbf-468d-a909-3b7c4f889e11] Running
	I1030 19:50:06.857015  446887 system_pods.go:126] duration metric: took 4.962745ms to wait for k8s-apps to be running ...
	I1030 19:50:06.857025  446887 system_svc.go:44] waiting for kubelet service to be running ....
	I1030 19:50:06.857086  446887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 19:50:06.874176  446887 system_svc.go:56] duration metric: took 17.144628ms WaitForService to wait for kubelet
	I1030 19:50:06.874206  446887 kubeadm.go:582] duration metric: took 4m25.770253397s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 19:50:06.874230  446887 node_conditions.go:102] verifying NodePressure condition ...
	I1030 19:50:06.876962  446887 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 19:50:06.876987  446887 node_conditions.go:123] node cpu capacity is 2
	I1030 19:50:06.877004  446887 node_conditions.go:105] duration metric: took 2.768174ms to run NodePressure ...
	I1030 19:50:06.877025  446887 start.go:241] waiting for startup goroutines ...
	I1030 19:50:06.877034  446887 start.go:246] waiting for cluster config update ...
	I1030 19:50:06.877070  446887 start.go:255] writing updated cluster config ...
	I1030 19:50:06.877355  446887 ssh_runner.go:195] Run: rm -f paused
	I1030 19:50:06.927147  446887 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1030 19:50:06.929103  446887 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-768989" cluster and "default" namespace by default
	I1030 19:50:04.981923  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:06.982630  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:06.834029  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:08.834616  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:08.312167  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:50:08.327121  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:50:08.327206  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:50:08.364871  447486 cri.go:89] found id: ""
	I1030 19:50:08.364905  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.364916  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:50:08.364924  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:50:08.364982  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:50:08.399179  447486 cri.go:89] found id: ""
	I1030 19:50:08.399215  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.399225  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:50:08.399231  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:50:08.399286  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:50:08.434308  447486 cri.go:89] found id: ""
	I1030 19:50:08.434340  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.434350  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:50:08.434356  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:50:08.434409  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:50:08.477152  447486 cri.go:89] found id: ""
	I1030 19:50:08.477184  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.477193  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:50:08.477204  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:50:08.477274  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:50:08.513678  447486 cri.go:89] found id: ""
	I1030 19:50:08.513706  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.513716  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:50:08.513725  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:50:08.513789  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:50:08.551427  447486 cri.go:89] found id: ""
	I1030 19:50:08.551459  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.551478  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:50:08.551485  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:50:08.551550  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:50:08.584224  447486 cri.go:89] found id: ""
	I1030 19:50:08.584260  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.584272  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:50:08.584282  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:50:08.584351  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:50:08.617603  447486 cri.go:89] found id: ""
	I1030 19:50:08.617638  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.617649  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:50:08.617660  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:08.617674  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:50:08.694201  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:50:08.694229  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:50:08.694247  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:08.775457  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:50:08.775500  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:50:08.816452  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:50:08.816496  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:50:08.868077  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:50:08.868114  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:50:11.383130  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:50:11.397672  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:50:11.397758  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:50:11.431923  447486 cri.go:89] found id: ""
	I1030 19:50:11.431959  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.431971  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:50:11.431980  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:50:11.432050  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:50:11.466959  447486 cri.go:89] found id: ""
	I1030 19:50:11.466996  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.467009  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:50:11.467018  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:50:11.467093  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:50:11.506399  447486 cri.go:89] found id: ""
	I1030 19:50:11.506425  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.506437  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:50:11.506444  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:50:11.506529  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:50:11.538606  447486 cri.go:89] found id: ""
	I1030 19:50:11.538635  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.538643  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:50:11.538649  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:50:11.538700  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:50:11.573265  447486 cri.go:89] found id: ""
	I1030 19:50:11.573296  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.573304  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:50:11.573310  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:50:11.573364  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:50:11.608522  447486 cri.go:89] found id: ""
	I1030 19:50:11.608549  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.608558  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:50:11.608569  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:50:11.608629  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:50:11.639758  447486 cri.go:89] found id: ""
	I1030 19:50:11.639784  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.639792  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:50:11.639797  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:50:11.639846  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:50:11.673381  447486 cri.go:89] found id: ""
	I1030 19:50:11.673414  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.673426  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:50:11.673439  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:50:11.673454  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:50:11.727368  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:50:11.727414  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:50:11.741267  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:11.741301  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:50:09.481159  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:11.483339  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:11.334468  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:13.832615  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	W1030 19:50:11.808126  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:50:11.808158  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:50:11.808174  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:11.888676  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:50:11.888713  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:50:14.431637  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:50:14.445315  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:50:14.445392  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:50:14.482059  447486 cri.go:89] found id: ""
	I1030 19:50:14.482097  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.482110  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:50:14.482118  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:50:14.482186  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:50:14.520802  447486 cri.go:89] found id: ""
	I1030 19:50:14.520834  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.520843  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:50:14.520849  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:50:14.520900  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:50:14.559965  447486 cri.go:89] found id: ""
	I1030 19:50:14.559996  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.560006  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:50:14.560012  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:50:14.560062  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:50:14.601831  447486 cri.go:89] found id: ""
	I1030 19:50:14.601865  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.601875  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:50:14.601881  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:50:14.601932  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:50:14.635307  447486 cri.go:89] found id: ""
	I1030 19:50:14.635339  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.635348  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:50:14.635355  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:50:14.635418  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:50:14.668618  447486 cri.go:89] found id: ""
	I1030 19:50:14.668648  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.668657  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:50:14.668664  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:50:14.668726  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:50:14.702597  447486 cri.go:89] found id: ""
	I1030 19:50:14.702633  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.702644  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:50:14.702653  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:50:14.702715  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:50:14.736860  447486 cri.go:89] found id: ""
	I1030 19:50:14.736899  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.736911  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:50:14.736925  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:50:14.736942  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:14.822015  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:50:14.822060  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:50:14.860153  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:50:14.860195  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:50:14.912230  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:50:14.912269  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:50:14.927032  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:14.927067  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:50:14.994401  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:50:13.975124  446965 pod_ready.go:82] duration metric: took 4m0.000158179s for pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace to be "Ready" ...
	E1030 19:50:13.975173  446965 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace to be "Ready" (will not retry!)
	I1030 19:50:13.975201  446965 pod_ready.go:39] duration metric: took 4m14.686087419s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:50:13.975238  446965 kubeadm.go:597] duration metric: took 4m22.157012059s to restartPrimaryControlPlane
	W1030 19:50:13.975313  446965 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1030 19:50:13.975366  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1030 19:50:15.833986  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:17.835468  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:17.494865  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:50:17.509934  447486 kubeadm.go:597] duration metric: took 4m3.074434895s to restartPrimaryControlPlane
	W1030 19:50:17.510016  447486 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1030 19:50:17.510051  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1030 19:50:18.496415  447486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 19:50:18.512328  447486 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 19:50:18.522293  447486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:50:18.532752  447486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:50:18.532772  447486 kubeadm.go:157] found existing configuration files:
	
	I1030 19:50:18.532823  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 19:50:18.542501  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:50:18.542560  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:50:18.552660  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 19:50:18.562585  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:50:18.562649  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:50:18.572321  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 19:50:18.581633  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:50:18.581689  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:50:18.592770  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 19:50:18.602414  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:50:18.602477  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1030 19:50:18.612334  447486 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1030 19:50:18.844753  447486 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1030 19:50:20.333715  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:22.832817  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:24.833349  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:27.332723  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:29.335009  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:31.832584  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:33.834506  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:36.333902  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:38.833159  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:40.157555  446965 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.182163055s)
	I1030 19:50:40.157637  446965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 19:50:40.174413  446965 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 19:50:40.184817  446965 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:50:40.195446  446965 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:50:40.195475  446965 kubeadm.go:157] found existing configuration files:
	
	I1030 19:50:40.195527  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 19:50:40.205509  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:50:40.205575  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:50:40.217343  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 19:50:40.227666  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:50:40.227729  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:50:40.237594  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 19:50:40.247151  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:50:40.247209  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:50:40.256854  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 19:50:40.266306  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:50:40.266379  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1030 19:50:40.276409  446965 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1030 19:50:40.322080  446965 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1030 19:50:40.322174  446965 kubeadm.go:310] [preflight] Running pre-flight checks
	I1030 19:50:40.433056  446965 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1030 19:50:40.433251  446965 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1030 19:50:40.433390  446965 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1030 19:50:40.445085  446965 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1030 19:50:40.447192  446965 out.go:235]   - Generating certificates and keys ...
	I1030 19:50:40.447301  446965 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1030 19:50:40.447395  446965 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1030 19:50:40.447512  446965 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1030 19:50:40.447600  446965 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1030 19:50:40.447735  446965 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1030 19:50:40.447825  446965 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1030 19:50:40.447912  446965 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1030 19:50:40.447999  446965 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1030 19:50:40.448108  446965 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1030 19:50:40.448208  446965 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1030 19:50:40.448266  446965 kubeadm.go:310] [certs] Using the existing "sa" key
	I1030 19:50:40.448345  446965 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1030 19:50:40.590735  446965 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1030 19:50:40.714139  446965 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1030 19:50:40.808334  446965 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1030 19:50:40.940687  446965 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1030 19:50:41.085266  446965 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1030 19:50:41.085840  446965 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1030 19:50:41.088415  446965 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1030 19:50:41.090229  446965 out.go:235]   - Booting up control plane ...
	I1030 19:50:41.090349  446965 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1030 19:50:41.090466  446965 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1030 19:50:41.090573  446965 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1030 19:50:41.112262  446965 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1030 19:50:41.118809  446965 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1030 19:50:41.118919  446965 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1030 19:50:41.243915  446965 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1030 19:50:41.244093  446965 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1030 19:50:41.745362  446965 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.630697ms
	I1030 19:50:41.745513  446965 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1030 19:50:40.834005  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:42.834286  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:46.748431  446965 kubeadm.go:310] [api-check] The API server is healthy after 5.001587935s
	I1030 19:50:46.762271  446965 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1030 19:50:46.781785  446965 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1030 19:50:46.806338  446965 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1030 19:50:46.806613  446965 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-042402 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1030 19:50:46.819762  446965 kubeadm.go:310] [bootstrap-token] Using token: k711fn.1we2gia9o31jm3ip
	I1030 19:50:46.821026  446965 out.go:235]   - Configuring RBAC rules ...
	I1030 19:50:46.821137  446965 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1030 19:50:46.827537  446965 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1030 19:50:46.836653  446965 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1030 19:50:46.844891  446965 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1030 19:50:46.848423  446965 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1030 19:50:46.851674  446965 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1030 19:50:47.157946  446965 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1030 19:50:47.615774  446965 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1030 19:50:48.154429  446965 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1030 19:50:48.159547  446965 kubeadm.go:310] 
	I1030 19:50:48.159636  446965 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1030 19:50:48.159648  446965 kubeadm.go:310] 
	I1030 19:50:48.159762  446965 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1030 19:50:48.159776  446965 kubeadm.go:310] 
	I1030 19:50:48.159806  446965 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1030 19:50:48.159880  446965 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1030 19:50:48.159934  446965 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1030 19:50:48.159944  446965 kubeadm.go:310] 
	I1030 19:50:48.160029  446965 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1030 19:50:48.160040  446965 kubeadm.go:310] 
	I1030 19:50:48.160123  446965 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1030 19:50:48.160154  446965 kubeadm.go:310] 
	I1030 19:50:48.160242  446965 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1030 19:50:48.160351  446965 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1030 19:50:48.160440  446965 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1030 19:50:48.160450  446965 kubeadm.go:310] 
	I1030 19:50:48.160570  446965 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1030 19:50:48.160652  446965 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1030 19:50:48.160660  446965 kubeadm.go:310] 
	I1030 19:50:48.160729  446965 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token k711fn.1we2gia9o31jm3ip \
	I1030 19:50:48.160818  446965 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b2143b9018fc79be6f35b8981f64b262448bd09f7227405c8dafa29481e81491 \
	I1030 19:50:48.160838  446965 kubeadm.go:310] 	--control-plane 
	I1030 19:50:48.160846  446965 kubeadm.go:310] 
	I1030 19:50:48.160943  446965 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1030 19:50:48.160955  446965 kubeadm.go:310] 
	I1030 19:50:48.161065  446965 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token k711fn.1we2gia9o31jm3ip \
	I1030 19:50:48.161205  446965 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b2143b9018fc79be6f35b8981f64b262448bd09f7227405c8dafa29481e81491 
	I1030 19:50:48.162302  446965 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1030 19:50:48.162390  446965 cni.go:84] Creating CNI manager for ""
	I1030 19:50:48.162408  446965 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:50:48.164041  446965 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1030 19:50:45.333255  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:47.334686  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:49.832993  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:48.165318  446965 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1030 19:50:48.176702  446965 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
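The two ssh_runner lines above are where minikube materializes the bridge CNI it just recommended: it creates /etc/cni/net.d and copies a 496-byte conflist into it. The exact file is not reproduced in this log, so the Go sketch below only illustrates the general shape of a bridge-plus-portmap conflist written to that path; the cniVersion, subnet, and plugin names are assumptions, not the bytes minikube actually ships.

    // write_bridge_conflist.go - hedged sketch only; not the real 1-k8s.conflist from this log.
    package main

    import (
    	"fmt"
    	"os"
    )

    // Assumed example content: a bridge plugin with host-local IPAM plus portmap.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
    	// Mirrors "sudo mkdir -p /etc/cni/net.d" from the log above.
    	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	// Mirrors the scp of the conflist into place.
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("wrote /etc/cni/net.d/1-k8s.conflist")
    }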
	I1030 19:50:48.199681  446965 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1030 19:50:48.199776  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:48.199840  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-042402 minikube.k8s.io/updated_at=2024_10_30T19_50_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0 minikube.k8s.io/name=embed-certs-042402 minikube.k8s.io/primary=true
	I1030 19:50:48.226617  446965 ops.go:34] apiserver oom_adj: -16
	I1030 19:50:48.404620  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:48.905366  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:49.405663  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:49.904925  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:50.405082  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:50.905099  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:51.404860  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:51.905534  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:52.405432  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:52.905289  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:53.010770  446965 kubeadm.go:1113] duration metric: took 4.811061462s to wait for elevateKubeSystemPrivileges
	I1030 19:50:53.010818  446965 kubeadm.go:394] duration metric: took 5m1.251362756s to StartCluster
	I1030 19:50:53.010849  446965 settings.go:142] acquiring lock: {Name:mk3778f95b8c1775512fd8244c173f81fde2f8a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:50:53.010948  446965 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 19:50:53.012997  446965 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/kubeconfig: {Name:mkad9ac9beb9742fc1a420e6d4412db8b65962e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:50:53.013284  446965 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.235 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 19:50:53.013411  446965 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1030 19:50:53.013518  446965 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-042402"
	I1030 19:50:53.013539  446965 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-042402"
	I1030 19:50:53.013539  446965 config.go:182] Loaded profile config "embed-certs-042402": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	W1030 19:50:53.013550  446965 addons.go:243] addon storage-provisioner should already be in state true
	I1030 19:50:53.013600  446965 host.go:66] Checking if "embed-certs-042402" exists ...
	I1030 19:50:53.013546  446965 addons.go:69] Setting default-storageclass=true in profile "embed-certs-042402"
	I1030 19:50:53.013605  446965 addons.go:69] Setting metrics-server=true in profile "embed-certs-042402"
	I1030 19:50:53.013635  446965 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-042402"
	I1030 19:50:53.013642  446965 addons.go:234] Setting addon metrics-server=true in "embed-certs-042402"
	W1030 19:50:53.013650  446965 addons.go:243] addon metrics-server should already be in state true
	I1030 19:50:53.013675  446965 host.go:66] Checking if "embed-certs-042402" exists ...
	I1030 19:50:53.013947  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:50:53.014005  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:50:53.014010  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:50:53.014022  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:50:53.014058  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:50:53.014112  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:50:53.015033  446965 out.go:177] * Verifying Kubernetes components...
	I1030 19:50:53.016527  446965 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:50:53.030033  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33163
	I1030 19:50:53.030290  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40505
	I1030 19:50:53.030618  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:50:53.030733  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:50:53.031192  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:50:53.031209  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:50:53.031342  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:50:53.031356  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:50:53.031577  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:50:53.031773  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetState
	I1030 19:50:53.031801  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:50:53.032289  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43131
	I1030 19:50:53.032910  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:50:53.032953  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:50:53.033170  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:50:53.033684  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:50:53.033699  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:50:53.035082  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:50:53.035104  446965 addons.go:234] Setting addon default-storageclass=true in "embed-certs-042402"
	W1030 19:50:53.035124  446965 addons.go:243] addon default-storageclass should already be in state true
	I1030 19:50:53.035158  446965 host.go:66] Checking if "embed-certs-042402" exists ...
	I1030 19:50:53.035461  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:50:53.035492  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:50:53.036666  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:50:53.036697  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:50:53.054685  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41111
	I1030 19:50:53.055271  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:50:53.055621  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33133
	I1030 19:50:53.055762  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:50:53.055779  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:50:53.056073  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:50:53.056192  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:50:53.056410  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetState
	I1030 19:50:53.056665  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:50:53.056688  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:50:53.057099  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:50:53.057693  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:50:53.057741  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:50:53.058427  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:50:53.058756  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38161
	I1030 19:50:53.059684  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:50:53.060230  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:50:53.060253  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:50:53.060597  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:50:53.060806  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetState
	I1030 19:50:53.060880  446965 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:50:53.062367  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:50:53.062469  446965 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 19:50:53.062506  446965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1030 19:50:53.062526  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:50:53.063955  446965 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1030 19:50:53.065131  446965 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1030 19:50:53.065153  446965 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1030 19:50:53.065173  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:50:53.065987  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:50:53.066607  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:50:53.066640  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:50:53.066723  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:50:53.066956  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:50:53.067102  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:50:53.067254  446965 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa Username:docker}
	I1030 19:50:53.068475  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:50:53.068916  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:50:53.068939  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:50:53.069098  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:50:53.069288  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:50:53.069457  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:50:53.069625  446965 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa Username:docker}
	I1030 19:50:53.075920  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33291
	I1030 19:50:53.076341  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:50:53.076758  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:50:53.076779  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:50:53.077042  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:50:53.077238  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetState
	I1030 19:50:53.078809  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:50:53.079065  446965 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1030 19:50:53.079088  446965 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1030 19:50:53.079105  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:50:53.081873  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:50:53.082309  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:50:53.082339  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:50:53.082515  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:50:53.082705  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:50:53.082863  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:50:53.083061  446965 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa Username:docker}
	I1030 19:50:53.274313  446965 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:50:53.305281  446965 node_ready.go:35] waiting up to 6m0s for node "embed-certs-042402" to be "Ready" ...
	I1030 19:50:53.313184  446965 node_ready.go:49] node "embed-certs-042402" has status "Ready":"True"
	I1030 19:50:53.313217  446965 node_ready.go:38] duration metric: took 7.892097ms for node "embed-certs-042402" to be "Ready" ...
	I1030 19:50:53.313230  446965 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:50:53.321668  446965 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hvg4g" in "kube-system" namespace to be "Ready" ...
	I1030 19:50:53.406960  446965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 19:50:53.427287  446965 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1030 19:50:53.427324  446965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1030 19:50:53.475089  446965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1030 19:50:53.485983  446965 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1030 19:50:53.486013  446965 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1030 19:50:53.570871  446965 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1030 19:50:53.570904  446965 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1030 19:50:53.670898  446965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1030 19:50:54.545328  446965 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.138329529s)
	I1030 19:50:54.545384  446965 main.go:141] libmachine: Making call to close driver server
	I1030 19:50:54.545383  446965 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.070259573s)
	I1030 19:50:54.545399  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Close
	I1030 19:50:54.545426  446965 main.go:141] libmachine: Making call to close driver server
	I1030 19:50:54.545445  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Close
	I1030 19:50:54.545720  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Closing plugin on server side
	I1030 19:50:54.545732  446965 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:50:54.545748  446965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:50:54.545757  446965 main.go:141] libmachine: Making call to close driver server
	I1030 19:50:54.545761  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Closing plugin on server side
	I1030 19:50:54.545765  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Close
	I1030 19:50:54.545787  446965 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:50:54.545794  446965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:50:54.545802  446965 main.go:141] libmachine: Making call to close driver server
	I1030 19:50:54.545808  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Close
	I1030 19:50:54.546139  446965 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:50:54.546162  446965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:50:54.546465  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Closing plugin on server side
	I1030 19:50:54.546468  446965 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:50:54.546507  446965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:50:54.576380  446965 main.go:141] libmachine: Making call to close driver server
	I1030 19:50:54.576408  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Close
	I1030 19:50:54.576738  446965 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:50:54.576787  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Closing plugin on server side
	I1030 19:50:54.576804  446965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:50:54.703670  446965 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.032714873s)
	I1030 19:50:54.703724  446965 main.go:141] libmachine: Making call to close driver server
	I1030 19:50:54.703736  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Close
	I1030 19:50:54.704025  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Closing plugin on server side
	I1030 19:50:54.704059  446965 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:50:54.704076  446965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:50:54.704085  446965 main.go:141] libmachine: Making call to close driver server
	I1030 19:50:54.704104  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Close
	I1030 19:50:54.704350  446965 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:50:54.704362  446965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:50:54.704374  446965 addons.go:475] Verifying addon metrics-server=true in "embed-certs-042402"
	I1030 19:50:54.706330  446965 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1030 19:50:51.833654  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:54.333879  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:54.707723  446965 addons.go:510] duration metric: took 1.694322523s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1030 19:50:55.328470  446965 pod_ready.go:103] pod "coredns-7c65d6cfc9-hvg4g" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:57.828224  446965 pod_ready.go:103] pod "coredns-7c65d6cfc9-hvg4g" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:56.832967  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:58.833284  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:59.828636  446965 pod_ready.go:103] pod "coredns-7c65d6cfc9-hvg4g" in "kube-system" namespace has status "Ready":"False"
	I1030 19:51:01.828151  446965 pod_ready.go:93] pod "coredns-7c65d6cfc9-hvg4g" in "kube-system" namespace has status "Ready":"True"
	I1030 19:51:01.828178  446965 pod_ready.go:82] duration metric: took 8.506481998s for pod "coredns-7c65d6cfc9-hvg4g" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:01.828187  446965 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-pzbpd" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:01.833094  446965 pod_ready.go:93] pod "coredns-7c65d6cfc9-pzbpd" in "kube-system" namespace has status "Ready":"True"
	I1030 19:51:01.833121  446965 pod_ready.go:82] duration metric: took 4.926401ms for pod "coredns-7c65d6cfc9-pzbpd" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:01.833133  446965 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:01.837391  446965 pod_ready.go:93] pod "etcd-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:51:01.837410  446965 pod_ready.go:82] duration metric: took 4.27047ms for pod "etcd-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:01.837419  446965 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:02.344200  446965 pod_ready.go:93] pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:51:02.344224  446965 pod_ready.go:82] duration metric: took 506.798667ms for pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:02.344233  446965 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:02.349020  446965 pod_ready.go:93] pod "kube-controller-manager-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:51:02.349042  446965 pod_ready.go:82] duration metric: took 4.801739ms for pod "kube-controller-manager-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:02.349055  446965 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-m9zwz" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:02.626109  446965 pod_ready.go:93] pod "kube-proxy-m9zwz" in "kube-system" namespace has status "Ready":"True"
	I1030 19:51:02.626137  446965 pod_ready.go:82] duration metric: took 277.074567ms for pod "kube-proxy-m9zwz" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:02.626146  446965 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:03.027456  446965 pod_ready.go:93] pod "kube-scheduler-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:51:03.027482  446965 pod_ready.go:82] duration metric: took 401.329277ms for pod "kube-scheduler-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:03.027493  446965 pod_ready.go:39] duration metric: took 9.714247169s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
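The pod_ready.go lines above boil down to a poll-until-Ready loop over the listed kube-system pods. As a hedged illustration only (not minikube's own pod_ready implementation), the client-go sketch below polls one of the pods named in this log for the PodReady condition; the kubeconfig path and the 2-second poll interval are assumptions.

    // wait_pod_ready.go - illustrative client-go poll for a pod's Ready condition.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady reports whether the PodReady condition is True.
    func podIsReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	// Kubeconfig path is an assumption for the sketch.
    	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	// Mirrors the "waiting up to 6m0s" budget seen in the log.
    	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
    	defer cancel()
    	for {
    		pod, err := clientset.CoreV1().Pods("kube-system").Get(ctx, "coredns-7c65d6cfc9-hvg4g", metav1.GetOptions{})
    		if err == nil && podIsReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		select {
    		case <-ctx.Done():
    			fmt.Println("timed out waiting for pod to be Ready")
    			return
    		case <-time.After(2 * time.Second):
    		}
    	}
    }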
	I1030 19:51:03.027513  446965 api_server.go:52] waiting for apiserver process to appear ...
	I1030 19:51:03.027579  446965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:51:03.043403  446965 api_server.go:72] duration metric: took 10.030078869s to wait for apiserver process to appear ...
	I1030 19:51:03.043431  446965 api_server.go:88] waiting for apiserver healthz status ...
	I1030 19:51:03.043456  446965 api_server.go:253] Checking apiserver healthz at https://192.168.61.235:8443/healthz ...
	I1030 19:51:03.048722  446965 api_server.go:279] https://192.168.61.235:8443/healthz returned 200:
	ok
	I1030 19:51:03.049572  446965 api_server.go:141] control plane version: v1.31.2
	I1030 19:51:03.049595  446965 api_server.go:131] duration metric: took 6.156928ms to wait for apiserver health ...
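The healthz lines above follow the usual pattern for confirming the apiserver is reachable: GET https://<node-ip>:8443/healthz and treat a 200 "ok" body as healthy. The Go sketch below is an illustrative stand-in, not minikube's client: it skips TLS verification for brevity, whereas the real check authenticates with the cluster's certificates, and the endpoint and retry interval are assumptions.

    // healthz_probe.go - hedged sketch of an apiserver /healthz probe.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Assumption: certificate verification skipped for brevity only.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	// Mirrors the "can take up to 4m0s" wait mentioned earlier in the log.
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.61.235:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    				fmt.Println("apiserver is healthy")
    				return
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("apiserver did not become healthy before the deadline")
    }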
	I1030 19:51:03.049603  446965 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 19:51:03.233170  446965 system_pods.go:59] 9 kube-system pods found
	I1030 19:51:03.233205  446965 system_pods.go:61] "coredns-7c65d6cfc9-hvg4g" [f9e7e143-3e12-4c1a-9fb0-6f58a37f8a55] Running
	I1030 19:51:03.233212  446965 system_pods.go:61] "coredns-7c65d6cfc9-pzbpd" [8f486ff4-c665-42ec-9b98-15ea94e0ded8] Running
	I1030 19:51:03.233217  446965 system_pods.go:61] "etcd-embed-certs-042402" [5149c262-698b-46be-9b15-c7b5e93fd3cd] Running
	I1030 19:51:03.233222  446965 system_pods.go:61] "kube-apiserver-embed-certs-042402" [9a6c3f9a-4b89-4e8d-abb2-445399eea3eb] Running
	I1030 19:51:03.233227  446965 system_pods.go:61] "kube-controller-manager-embed-certs-042402" [cce89309-3c5f-4d17-8426-fdb8a02acdb0] Running
	I1030 19:51:03.233231  446965 system_pods.go:61] "kube-proxy-m9zwz" [e7b6fb8b-2287-47c0-b9c8-a3b1c3020894] Running
	I1030 19:51:03.233236  446965 system_pods.go:61] "kube-scheduler-embed-certs-042402" [e408e85c-ac6c-4afb-8391-935b7c579b4f] Running
	I1030 19:51:03.233247  446965 system_pods.go:61] "metrics-server-6867b74b74-6hrq4" [a5bb1778-0a28-4649-a2ac-a5f0e1b810de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:51:03.233255  446965 system_pods.go:61] "storage-provisioner" [729733b2-e703-4e9b-9d05-a2f0fb632149] Running
	I1030 19:51:03.233272  446965 system_pods.go:74] duration metric: took 183.660307ms to wait for pod list to return data ...
	I1030 19:51:03.233287  446965 default_sa.go:34] waiting for default service account to be created ...
	I1030 19:51:03.427520  446965 default_sa.go:45] found service account: "default"
	I1030 19:51:03.427550  446965 default_sa.go:55] duration metric: took 194.254547ms for default service account to be created ...
	I1030 19:51:03.427562  446965 system_pods.go:116] waiting for k8s-apps to be running ...
	I1030 19:51:03.629316  446965 system_pods.go:86] 9 kube-system pods found
	I1030 19:51:03.629351  446965 system_pods.go:89] "coredns-7c65d6cfc9-hvg4g" [f9e7e143-3e12-4c1a-9fb0-6f58a37f8a55] Running
	I1030 19:51:03.629364  446965 system_pods.go:89] "coredns-7c65d6cfc9-pzbpd" [8f486ff4-c665-42ec-9b98-15ea94e0ded8] Running
	I1030 19:51:03.629370  446965 system_pods.go:89] "etcd-embed-certs-042402" [5149c262-698b-46be-9b15-c7b5e93fd3cd] Running
	I1030 19:51:03.629377  446965 system_pods.go:89] "kube-apiserver-embed-certs-042402" [9a6c3f9a-4b89-4e8d-abb2-445399eea3eb] Running
	I1030 19:51:03.629381  446965 system_pods.go:89] "kube-controller-manager-embed-certs-042402" [cce89309-3c5f-4d17-8426-fdb8a02acdb0] Running
	I1030 19:51:03.629386  446965 system_pods.go:89] "kube-proxy-m9zwz" [e7b6fb8b-2287-47c0-b9c8-a3b1c3020894] Running
	I1030 19:51:03.629391  446965 system_pods.go:89] "kube-scheduler-embed-certs-042402" [e408e85c-ac6c-4afb-8391-935b7c579b4f] Running
	I1030 19:51:03.629399  446965 system_pods.go:89] "metrics-server-6867b74b74-6hrq4" [a5bb1778-0a28-4649-a2ac-a5f0e1b810de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:51:03.629405  446965 system_pods.go:89] "storage-provisioner" [729733b2-e703-4e9b-9d05-a2f0fb632149] Running
	I1030 19:51:03.629418  446965 system_pods.go:126] duration metric: took 201.847233ms to wait for k8s-apps to be running ...
	I1030 19:51:03.629432  446965 system_svc.go:44] waiting for kubelet service to be running ....
	I1030 19:51:03.629486  446965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 19:51:03.649120  446965 system_svc.go:56] duration metric: took 19.675022ms WaitForService to wait for kubelet
	I1030 19:51:03.649166  446965 kubeadm.go:582] duration metric: took 10.635844977s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 19:51:03.649192  446965 node_conditions.go:102] verifying NodePressure condition ...
	I1030 19:51:03.826763  446965 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 19:51:03.826790  446965 node_conditions.go:123] node cpu capacity is 2
	I1030 19:51:03.826803  446965 node_conditions.go:105] duration metric: took 177.604616ms to run NodePressure ...
	I1030 19:51:03.826819  446965 start.go:241] waiting for startup goroutines ...
	I1030 19:51:03.826827  446965 start.go:246] waiting for cluster config update ...
	I1030 19:51:03.826841  446965 start.go:255] writing updated cluster config ...
	I1030 19:51:03.827126  446965 ssh_runner.go:195] Run: rm -f paused
	I1030 19:51:03.877974  446965 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1030 19:51:03.880121  446965 out.go:177] * Done! kubectl is now configured to use "embed-certs-042402" cluster and "default" namespace by default
	I1030 19:51:00.833673  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:51:03.333042  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:51:05.333431  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:51:07.833229  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:51:09.833772  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:51:10.833131  446736 pod_ready.go:82] duration metric: took 4m0.006526983s for pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace to be "Ready" ...
	E1030 19:51:10.833166  446736 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1030 19:51:10.833178  446736 pod_ready.go:39] duration metric: took 4m7.416690025s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:51:10.833200  446736 api_server.go:52] waiting for apiserver process to appear ...
	I1030 19:51:10.833239  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:51:10.833300  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:51:10.884016  446736 cri.go:89] found id: "990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4"
	I1030 19:51:10.884046  446736 cri.go:89] found id: ""
	I1030 19:51:10.884055  446736 logs.go:282] 1 containers: [990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4]
	I1030 19:51:10.884108  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:10.888789  446736 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:51:10.888857  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:51:10.931994  446736 cri.go:89] found id: "ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67"
	I1030 19:51:10.932037  446736 cri.go:89] found id: ""
	I1030 19:51:10.932047  446736 logs.go:282] 1 containers: [ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67]
	I1030 19:51:10.932097  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:10.937113  446736 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:51:10.937181  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:51:10.977951  446736 cri.go:89] found id: "1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd"
	I1030 19:51:10.977982  446736 cri.go:89] found id: ""
	I1030 19:51:10.977993  446736 logs.go:282] 1 containers: [1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd]
	I1030 19:51:10.978050  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:10.982791  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:51:10.982863  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:51:11.021741  446736 cri.go:89] found id: "2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889"
	I1030 19:51:11.021770  446736 cri.go:89] found id: ""
	I1030 19:51:11.021780  446736 logs.go:282] 1 containers: [2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889]
	I1030 19:51:11.021837  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:11.026590  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:51:11.026653  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:51:11.068839  446736 cri.go:89] found id: "0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9"
	I1030 19:51:11.068873  446736 cri.go:89] found id: ""
	I1030 19:51:11.068885  446736 logs.go:282] 1 containers: [0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9]
	I1030 19:51:11.068946  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:11.073103  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:51:11.073171  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:51:11.108404  446736 cri.go:89] found id: "cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c"
	I1030 19:51:11.108432  446736 cri.go:89] found id: ""
	I1030 19:51:11.108443  446736 logs.go:282] 1 containers: [cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c]
	I1030 19:51:11.108506  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:11.112903  446736 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:51:11.112974  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:51:11.153767  446736 cri.go:89] found id: ""
	I1030 19:51:11.153800  446736 logs.go:282] 0 containers: []
	W1030 19:51:11.153812  446736 logs.go:284] No container was found matching "kindnet"
	I1030 19:51:11.153821  446736 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1030 19:51:11.153892  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1030 19:51:11.194649  446736 cri.go:89] found id: "822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4"
	I1030 19:51:11.194681  446736 cri.go:89] found id: "de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef"
	I1030 19:51:11.194687  446736 cri.go:89] found id: ""
	I1030 19:51:11.194697  446736 logs.go:282] 2 containers: [822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4 de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef]
	I1030 19:51:11.194770  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:11.199037  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:11.202957  446736 logs.go:123] Gathering logs for etcd [ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67] ...
	I1030 19:51:11.202984  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67"
	I1030 19:51:11.246187  446736 logs.go:123] Gathering logs for kube-proxy [0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9] ...
	I1030 19:51:11.246220  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9"
	I1030 19:51:11.286608  446736 logs.go:123] Gathering logs for kube-controller-manager [cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c] ...
	I1030 19:51:11.286643  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c"
	I1030 19:51:11.339119  446736 logs.go:123] Gathering logs for storage-provisioner [822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4] ...
	I1030 19:51:11.339157  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4"
	I1030 19:51:11.376624  446736 logs.go:123] Gathering logs for storage-provisioner [de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef] ...
	I1030 19:51:11.376653  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef"
	I1030 19:51:11.411401  446736 logs.go:123] Gathering logs for kubelet ...
	I1030 19:51:11.411431  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:51:11.481668  446736 logs.go:123] Gathering logs for dmesg ...
	I1030 19:51:11.481710  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:51:11.497767  446736 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:51:11.497799  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:51:11.612001  446736 logs.go:123] Gathering logs for kube-apiserver [990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4] ...
	I1030 19:51:11.612034  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4"
	I1030 19:51:11.656553  446736 logs.go:123] Gathering logs for coredns [1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd] ...
	I1030 19:51:11.656589  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd"
	I1030 19:51:11.695387  446736 logs.go:123] Gathering logs for kube-scheduler [2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889] ...
	I1030 19:51:11.695428  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889"
	I1030 19:51:11.732386  446736 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:51:11.732419  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:51:12.217007  446736 logs.go:123] Gathering logs for container status ...
	I1030 19:51:12.217056  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:51:14.769155  446736 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:51:14.787096  446736 api_server.go:72] duration metric: took 4m17.097569041s to wait for apiserver process to appear ...
	I1030 19:51:14.787128  446736 api_server.go:88] waiting for apiserver healthz status ...
	I1030 19:51:14.787176  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:51:14.787235  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:51:14.823506  446736 cri.go:89] found id: "990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4"
	I1030 19:51:14.823533  446736 cri.go:89] found id: ""
	I1030 19:51:14.823541  446736 logs.go:282] 1 containers: [990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4]
	I1030 19:51:14.823595  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:14.828125  446736 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:51:14.828214  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:51:14.867890  446736 cri.go:89] found id: "ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67"
	I1030 19:51:14.867914  446736 cri.go:89] found id: ""
	I1030 19:51:14.867922  446736 logs.go:282] 1 containers: [ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67]
	I1030 19:51:14.867970  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:14.873213  446736 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:51:14.873283  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:51:14.913068  446736 cri.go:89] found id: "1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd"
	I1030 19:51:14.913103  446736 cri.go:89] found id: ""
	I1030 19:51:14.913114  446736 logs.go:282] 1 containers: [1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd]
	I1030 19:51:14.913179  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:14.918380  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:51:14.918459  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:51:14.956150  446736 cri.go:89] found id: "2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889"
	I1030 19:51:14.956177  446736 cri.go:89] found id: ""
	I1030 19:51:14.956187  446736 logs.go:282] 1 containers: [2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889]
	I1030 19:51:14.956294  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:14.960781  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:51:14.960836  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:51:15.001804  446736 cri.go:89] found id: "0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9"
	I1030 19:51:15.001833  446736 cri.go:89] found id: ""
	I1030 19:51:15.001844  446736 logs.go:282] 1 containers: [0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9]
	I1030 19:51:15.001893  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:15.006341  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:51:15.006401  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:51:15.045202  446736 cri.go:89] found id: "cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c"
	I1030 19:51:15.045236  446736 cri.go:89] found id: ""
	I1030 19:51:15.045247  446736 logs.go:282] 1 containers: [cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c]
	I1030 19:51:15.045326  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:15.051967  446736 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:51:15.052031  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:51:15.091569  446736 cri.go:89] found id: ""
	I1030 19:51:15.091596  446736 logs.go:282] 0 containers: []
	W1030 19:51:15.091604  446736 logs.go:284] No container was found matching "kindnet"
	I1030 19:51:15.091611  446736 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1030 19:51:15.091668  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1030 19:51:15.135521  446736 cri.go:89] found id: "822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4"
	I1030 19:51:15.135551  446736 cri.go:89] found id: "de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef"
	I1030 19:51:15.135557  446736 cri.go:89] found id: ""
	I1030 19:51:15.135567  446736 logs.go:282] 2 containers: [822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4 de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef]
	I1030 19:51:15.135633  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:15.140215  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:15.145490  446736 logs.go:123] Gathering logs for etcd [ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67] ...
	I1030 19:51:15.145514  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67"
	I1030 19:51:15.205939  446736 logs.go:123] Gathering logs for coredns [1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd] ...
	I1030 19:51:15.205972  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd"
	I1030 19:51:15.240157  446736 logs.go:123] Gathering logs for kube-proxy [0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9] ...
	I1030 19:51:15.240194  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9"
	I1030 19:51:15.277168  446736 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:51:15.277200  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:51:15.708451  446736 logs.go:123] Gathering logs for container status ...
	I1030 19:51:15.708499  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:51:15.750544  446736 logs.go:123] Gathering logs for kubelet ...
	I1030 19:51:15.750577  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:51:15.820071  446736 logs.go:123] Gathering logs for kube-apiserver [990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4] ...
	I1030 19:51:15.820113  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4"
	I1030 19:51:15.870259  446736 logs.go:123] Gathering logs for kube-scheduler [2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889] ...
	I1030 19:51:15.870293  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889"
	I1030 19:51:15.919968  446736 logs.go:123] Gathering logs for kube-controller-manager [cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c] ...
	I1030 19:51:15.919998  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c"
	I1030 19:51:15.976948  446736 logs.go:123] Gathering logs for storage-provisioner [822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4] ...
	I1030 19:51:15.976992  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4"
	I1030 19:51:16.014451  446736 logs.go:123] Gathering logs for storage-provisioner [de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef] ...
	I1030 19:51:16.014498  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef"
	I1030 19:51:16.047766  446736 logs.go:123] Gathering logs for dmesg ...
	I1030 19:51:16.047806  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:51:16.070539  446736 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:51:16.070567  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:51:18.677834  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:51:18.682862  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 200:
	ok
	I1030 19:51:18.684023  446736 api_server.go:141] control plane version: v1.31.2
	I1030 19:51:18.684046  446736 api_server.go:131] duration metric: took 3.896911154s to wait for apiserver health ...
	I1030 19:51:18.684055  446736 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 19:51:18.684083  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:51:18.684130  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:51:18.724815  446736 cri.go:89] found id: "990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4"
	I1030 19:51:18.724848  446736 cri.go:89] found id: ""
	I1030 19:51:18.724860  446736 logs.go:282] 1 containers: [990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4]
	I1030 19:51:18.724928  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:18.729332  446736 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:51:18.729391  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:51:18.767614  446736 cri.go:89] found id: "ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67"
	I1030 19:51:18.767642  446736 cri.go:89] found id: ""
	I1030 19:51:18.767651  446736 logs.go:282] 1 containers: [ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67]
	I1030 19:51:18.767705  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:18.772420  446736 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:51:18.772525  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:51:18.811459  446736 cri.go:89] found id: "1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd"
	I1030 19:51:18.811489  446736 cri.go:89] found id: ""
	I1030 19:51:18.811501  446736 logs.go:282] 1 containers: [1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd]
	I1030 19:51:18.811563  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:18.816844  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:51:18.816906  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:51:18.853273  446736 cri.go:89] found id: "2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889"
	I1030 19:51:18.853299  446736 cri.go:89] found id: ""
	I1030 19:51:18.853308  446736 logs.go:282] 1 containers: [2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889]
	I1030 19:51:18.853362  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:18.857867  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:51:18.857946  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:51:18.907021  446736 cri.go:89] found id: "0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9"
	I1030 19:51:18.907052  446736 cri.go:89] found id: ""
	I1030 19:51:18.907063  446736 logs.go:282] 1 containers: [0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9]
	I1030 19:51:18.907126  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:18.913432  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:51:18.913506  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:51:18.978047  446736 cri.go:89] found id: "cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c"
	I1030 19:51:18.978072  446736 cri.go:89] found id: ""
	I1030 19:51:18.978083  446736 logs.go:282] 1 containers: [cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c]
	I1030 19:51:18.978150  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:18.983158  446736 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:51:18.983241  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:51:19.018992  446736 cri.go:89] found id: ""
	I1030 19:51:19.019018  446736 logs.go:282] 0 containers: []
	W1030 19:51:19.019026  446736 logs.go:284] No container was found matching "kindnet"
	I1030 19:51:19.019035  446736 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1030 19:51:19.019094  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1030 19:51:19.053821  446736 cri.go:89] found id: "822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4"
	I1030 19:51:19.053850  446736 cri.go:89] found id: "de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef"
	I1030 19:51:19.053855  446736 cri.go:89] found id: ""
	I1030 19:51:19.053862  446736 logs.go:282] 2 containers: [822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4 de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef]
	I1030 19:51:19.053922  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:19.063575  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:19.069254  446736 logs.go:123] Gathering logs for kubelet ...
	I1030 19:51:19.069283  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:51:19.139641  446736 logs.go:123] Gathering logs for kube-apiserver [990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4] ...
	I1030 19:51:19.139700  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4"
	I1030 19:51:19.198020  446736 logs.go:123] Gathering logs for kube-scheduler [2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889] ...
	I1030 19:51:19.198059  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889"
	I1030 19:51:19.239685  446736 logs.go:123] Gathering logs for kube-proxy [0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9] ...
	I1030 19:51:19.239727  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9"
	I1030 19:51:19.281510  446736 logs.go:123] Gathering logs for storage-provisioner [de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef] ...
	I1030 19:51:19.281545  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef"
	I1030 19:51:19.317842  446736 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:51:19.317872  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:51:19.659645  446736 logs.go:123] Gathering logs for dmesg ...
	I1030 19:51:19.659697  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:51:19.678087  446736 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:51:19.678121  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:51:19.778504  446736 logs.go:123] Gathering logs for etcd [ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67] ...
	I1030 19:51:19.778540  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67"
	I1030 19:51:19.826520  446736 logs.go:123] Gathering logs for coredns [1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd] ...
	I1030 19:51:19.826552  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd"
	I1030 19:51:19.863959  446736 logs.go:123] Gathering logs for kube-controller-manager [cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c] ...
	I1030 19:51:19.864011  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c"
	I1030 19:51:19.915777  446736 logs.go:123] Gathering logs for storage-provisioner [822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4] ...
	I1030 19:51:19.915814  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4"
	I1030 19:51:19.953036  446736 logs.go:123] Gathering logs for container status ...
	I1030 19:51:19.953069  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:51:22.502129  446736 system_pods.go:59] 8 kube-system pods found
	I1030 19:51:22.502162  446736 system_pods.go:61] "coredns-7c65d6cfc9-6cdl4" [95a0a01a-b0ea-4e0c-ac44-b7f7a486e4e0] Running
	I1030 19:51:22.502167  446736 system_pods.go:61] "etcd-no-preload-960512" [481cc4bc-fe2e-48fd-a486-574daf879096] Running
	I1030 19:51:22.502172  446736 system_pods.go:61] "kube-apiserver-no-preload-960512" [3662715f-dd2b-4522-aed3-3857e1104b8c] Running
	I1030 19:51:22.502175  446736 system_pods.go:61] "kube-controller-manager-no-preload-960512" [cb6045a8-1703-4693-90bf-81c7c990cb38] Running
	I1030 19:51:22.502179  446736 system_pods.go:61] "kube-proxy-fxqqc" [58db3fab-21e3-41b7-99f9-46ba3081db97] Running
	I1030 19:51:22.502182  446736 system_pods.go:61] "kube-scheduler-no-preload-960512" [6e293c25-ba96-4213-b107-236a1b828918] Running
	I1030 19:51:22.502188  446736 system_pods.go:61] "metrics-server-6867b74b74-72bb5" [7734d879-b974-42fd-9610-7e81ee6cbc13] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:51:22.502193  446736 system_pods.go:61] "storage-provisioner" [d4637a77-26ab-4013-a705-08317c00dd3b] Running
	I1030 19:51:22.502201  446736 system_pods.go:74] duration metric: took 3.818141259s to wait for pod list to return data ...
	I1030 19:51:22.502209  446736 default_sa.go:34] waiting for default service account to be created ...
	I1030 19:51:22.504541  446736 default_sa.go:45] found service account: "default"
	I1030 19:51:22.504562  446736 default_sa.go:55] duration metric: took 2.346763ms for default service account to be created ...
	I1030 19:51:22.504570  446736 system_pods.go:116] waiting for k8s-apps to be running ...
	I1030 19:51:22.509016  446736 system_pods.go:86] 8 kube-system pods found
	I1030 19:51:22.509039  446736 system_pods.go:89] "coredns-7c65d6cfc9-6cdl4" [95a0a01a-b0ea-4e0c-ac44-b7f7a486e4e0] Running
	I1030 19:51:22.509044  446736 system_pods.go:89] "etcd-no-preload-960512" [481cc4bc-fe2e-48fd-a486-574daf879096] Running
	I1030 19:51:22.509048  446736 system_pods.go:89] "kube-apiserver-no-preload-960512" [3662715f-dd2b-4522-aed3-3857e1104b8c] Running
	I1030 19:51:22.509052  446736 system_pods.go:89] "kube-controller-manager-no-preload-960512" [cb6045a8-1703-4693-90bf-81c7c990cb38] Running
	I1030 19:51:22.509055  446736 system_pods.go:89] "kube-proxy-fxqqc" [58db3fab-21e3-41b7-99f9-46ba3081db97] Running
	I1030 19:51:22.509058  446736 system_pods.go:89] "kube-scheduler-no-preload-960512" [6e293c25-ba96-4213-b107-236a1b828918] Running
	I1030 19:51:22.509101  446736 system_pods.go:89] "metrics-server-6867b74b74-72bb5" [7734d879-b974-42fd-9610-7e81ee6cbc13] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:51:22.509112  446736 system_pods.go:89] "storage-provisioner" [d4637a77-26ab-4013-a705-08317c00dd3b] Running
	I1030 19:51:22.509119  446736 system_pods.go:126] duration metric: took 4.544102ms to wait for k8s-apps to be running ...
	I1030 19:51:22.509125  446736 system_svc.go:44] waiting for kubelet service to be running ....
	I1030 19:51:22.509172  446736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 19:51:22.524883  446736 system_svc.go:56] duration metric: took 15.747977ms WaitForService to wait for kubelet
	I1030 19:51:22.524906  446736 kubeadm.go:582] duration metric: took 4m24.835384605s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 19:51:22.524929  446736 node_conditions.go:102] verifying NodePressure condition ...
	I1030 19:51:22.528315  446736 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 19:51:22.528334  446736 node_conditions.go:123] node cpu capacity is 2
	I1030 19:51:22.528345  446736 node_conditions.go:105] duration metric: took 3.411421ms to run NodePressure ...
	I1030 19:51:22.528357  446736 start.go:241] waiting for startup goroutines ...
	I1030 19:51:22.528364  446736 start.go:246] waiting for cluster config update ...
	I1030 19:51:22.528374  446736 start.go:255] writing updated cluster config ...
	I1030 19:51:22.528621  446736 ssh_runner.go:195] Run: rm -f paused
	I1030 19:51:22.577143  446736 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1030 19:51:22.580061  446736 out.go:177] * Done! kubectl is now configured to use "no-preload-960512" cluster and "default" namespace by default
	I1030 19:52:15.582907  447486 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1030 19:52:15.583009  447486 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1030 19:52:15.584345  447486 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1030 19:52:15.584419  447486 kubeadm.go:310] [preflight] Running pre-flight checks
	I1030 19:52:15.584522  447486 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1030 19:52:15.584659  447486 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1030 19:52:15.584763  447486 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1030 19:52:15.584827  447486 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1030 19:52:15.586931  447486 out.go:235]   - Generating certificates and keys ...
	I1030 19:52:15.587016  447486 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1030 19:52:15.587074  447486 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1030 19:52:15.587145  447486 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1030 19:52:15.587198  447486 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1030 19:52:15.587271  447486 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1030 19:52:15.587339  447486 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1030 19:52:15.587402  447486 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1030 19:52:15.587455  447486 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1030 19:52:15.587517  447486 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1030 19:52:15.587577  447486 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1030 19:52:15.587608  447486 kubeadm.go:310] [certs] Using the existing "sa" key
	I1030 19:52:15.587682  447486 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1030 19:52:15.587759  447486 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1030 19:52:15.587846  447486 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1030 19:52:15.587924  447486 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1030 19:52:15.587988  447486 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1030 19:52:15.588076  447486 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1030 19:52:15.588148  447486 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1030 19:52:15.588180  447486 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1030 19:52:15.588267  447486 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1030 19:52:15.589722  447486 out.go:235]   - Booting up control plane ...
	I1030 19:52:15.589834  447486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1030 19:52:15.589932  447486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1030 19:52:15.590014  447486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1030 19:52:15.590128  447486 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1030 19:52:15.590285  447486 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1030 19:52:15.590336  447486 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1030 19:52:15.590388  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:52:15.590560  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:52:15.590642  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:52:15.590842  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:52:15.590946  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:52:15.591155  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:52:15.591253  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:52:15.591513  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:52:15.591609  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:52:15.591841  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:52:15.591855  447486 kubeadm.go:310] 
	I1030 19:52:15.591900  447486 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1030 19:52:15.591956  447486 kubeadm.go:310] 		timed out waiting for the condition
	I1030 19:52:15.591966  447486 kubeadm.go:310] 
	I1030 19:52:15.592008  447486 kubeadm.go:310] 	This error is likely caused by:
	I1030 19:52:15.592051  447486 kubeadm.go:310] 		- The kubelet is not running
	I1030 19:52:15.592192  447486 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1030 19:52:15.592204  447486 kubeadm.go:310] 
	I1030 19:52:15.592318  447486 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1030 19:52:15.592360  447486 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1030 19:52:15.592391  447486 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1030 19:52:15.592397  447486 kubeadm.go:310] 
	I1030 19:52:15.592511  447486 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1030 19:52:15.592592  447486 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1030 19:52:15.592600  447486 kubeadm.go:310] 
	I1030 19:52:15.592733  447486 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1030 19:52:15.592850  447486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1030 19:52:15.592959  447486 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1030 19:52:15.593059  447486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1030 19:52:15.593138  447486 kubeadm.go:310] 
	W1030 19:52:15.593236  447486 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1030 19:52:15.593289  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1030 19:52:16.049810  447486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 19:52:16.065820  447486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:52:16.076166  447486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:52:16.076192  447486 kubeadm.go:157] found existing configuration files:
	
	I1030 19:52:16.076241  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 19:52:16.085309  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:52:16.085380  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:52:16.094868  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 19:52:16.104343  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:52:16.104395  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:52:16.113939  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 19:52:16.122836  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:52:16.122885  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:52:16.132083  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 19:52:16.141441  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:52:16.141487  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1030 19:52:16.150710  447486 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1030 19:52:16.222070  447486 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1030 19:52:16.222183  447486 kubeadm.go:310] [preflight] Running pre-flight checks
	I1030 19:52:16.366061  447486 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1030 19:52:16.366194  447486 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1030 19:52:16.366352  447486 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1030 19:52:16.541086  447486 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1030 19:52:16.543200  447486 out.go:235]   - Generating certificates and keys ...
	I1030 19:52:16.543303  447486 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1030 19:52:16.543398  447486 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1030 19:52:16.543523  447486 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1030 19:52:16.543625  447486 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1030 19:52:16.543749  447486 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1030 19:52:16.543848  447486 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1030 19:52:16.543942  447486 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1030 19:52:16.544020  447486 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1030 19:52:16.544096  447486 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1030 19:52:16.544193  447486 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1030 19:52:16.544252  447486 kubeadm.go:310] [certs] Using the existing "sa" key
	I1030 19:52:16.544343  447486 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1030 19:52:16.637454  447486 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1030 19:52:16.829430  447486 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1030 19:52:16.985259  447486 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1030 19:52:17.072312  447486 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1030 19:52:17.092511  447486 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1030 19:52:17.093595  447486 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1030 19:52:17.093654  447486 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1030 19:52:17.228039  447486 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1030 19:52:17.229647  447486 out.go:235]   - Booting up control plane ...
	I1030 19:52:17.229766  447486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1030 19:52:17.237333  447486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1030 19:52:17.239644  447486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1030 19:52:17.239774  447486 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1030 19:52:17.241037  447486 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1030 19:52:57.243167  447486 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1030 19:52:57.243769  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:52:57.244072  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:53:02.244240  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:53:02.244563  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:53:12.244991  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:53:12.245293  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:53:32.246428  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:53:32.246697  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:54:12.247834  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:54:12.248150  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:54:12.248173  447486 kubeadm.go:310] 
	I1030 19:54:12.248226  447486 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1030 19:54:12.248308  447486 kubeadm.go:310] 		timed out waiting for the condition
	I1030 19:54:12.248336  447486 kubeadm.go:310] 
	I1030 19:54:12.248386  447486 kubeadm.go:310] 	This error is likely caused by:
	I1030 19:54:12.248449  447486 kubeadm.go:310] 		- The kubelet is not running
	I1030 19:54:12.248598  447486 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1030 19:54:12.248609  447486 kubeadm.go:310] 
	I1030 19:54:12.248747  447486 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1030 19:54:12.248811  447486 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1030 19:54:12.248867  447486 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1030 19:54:12.248876  447486 kubeadm.go:310] 
	I1030 19:54:12.249013  447486 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1030 19:54:12.249111  447486 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1030 19:54:12.249129  447486 kubeadm.go:310] 
	I1030 19:54:12.249280  447486 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1030 19:54:12.249447  447486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1030 19:54:12.249564  447486 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1030 19:54:12.249662  447486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1030 19:54:12.249708  447486 kubeadm.go:310] 
	I1030 19:54:12.249878  447486 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1030 19:54:12.250015  447486 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1030 19:54:12.250208  447486 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1030 19:54:12.250221  447486 kubeadm.go:394] duration metric: took 7m57.874179721s to StartCluster
	I1030 19:54:12.250311  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:54:12.250399  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:54:12.292692  447486 cri.go:89] found id: ""
	I1030 19:54:12.292749  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.292760  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:54:12.292770  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:54:12.292840  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:54:12.329792  447486 cri.go:89] found id: ""
	I1030 19:54:12.329825  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.329835  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:54:12.329843  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:54:12.329905  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:54:12.364661  447486 cri.go:89] found id: ""
	I1030 19:54:12.364693  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.364702  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:54:12.364709  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:54:12.364764  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:54:12.400842  447486 cri.go:89] found id: ""
	I1030 19:54:12.400870  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.400878  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:54:12.400885  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:54:12.400943  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:54:12.440135  447486 cri.go:89] found id: ""
	I1030 19:54:12.440164  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.440172  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:54:12.440178  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:54:12.440228  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:54:12.476365  447486 cri.go:89] found id: ""
	I1030 19:54:12.476403  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.476416  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:54:12.476425  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:54:12.476503  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:54:12.519669  447486 cri.go:89] found id: ""
	I1030 19:54:12.519702  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.519715  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:54:12.519724  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:54:12.519791  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:54:12.554180  447486 cri.go:89] found id: ""
	I1030 19:54:12.554218  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.554230  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:54:12.554244  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:54:12.554261  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:54:12.669617  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:54:12.669660  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:54:12.708361  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:54:12.708392  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:54:12.763103  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:54:12.763145  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:54:12.778676  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:54:12.778712  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:54:12.865694  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1030 19:54:12.865732  447486 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1030 19:54:12.865797  447486 out.go:270] * 
	W1030 19:54:12.865908  447486 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1030 19:54:12.865929  447486 out.go:270] * 
	W1030 19:54:12.867124  447486 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 19:54:12.871111  447486 out.go:201] 
	W1030 19:54:12.872534  447486 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1030 19:54:12.872591  447486 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1030 19:54:12.872616  447486 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1030 19:54:12.874145  447486 out.go:201] 
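	A minimal sketch of how the suggestion above might be applied, assuming the kvm2 driver and cri-o runtime used by this job and the no-preload-960512 profile seen in the logs below; the troubleshooting commands are the ones quoted in the kubeadm output, everything else is an assumption for illustration only:
	
		# Inspect the kubelet before retrying (commands quoted from the kubeadm output above)
		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	
		# Retry with the cgroup driver hinted at in the suggestion; the profile name,
		# --driver and --container-runtime values are assumptions taken from this job's setup
		minikube start -p no-preload-960512 --driver=kvm2 --container-runtime=crio \
		  --extra-config=kubelet.cgroup-driver=systemd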
	
	
	==> CRI-O <==
	Oct 30 20:00:24 no-preload-960512 crio[721]: time="2024-10-30 20:00:24.641475184Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318424641452199,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bc3c1f64-7b3f-48be-a347-d48b898c1f08 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:00:24 no-preload-960512 crio[721]: time="2024-10-30 20:00:24.642078288Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bb321376-adea-4404-a285-bba4956e148d name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:00:24 no-preload-960512 crio[721]: time="2024-10-30 20:00:24.642152789Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bb321376-adea-4404-a285-bba4956e148d name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:00:24 no-preload-960512 crio[721]: time="2024-10-30 20:00:24.642520295Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4,PodSandboxId:240ad66de29e4da82a84da6ceac85db98899cf65d7bfcb0cac535b180954104e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730317644209067552,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4637a77-26ab-4013-a705-08317c00dd3b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd,PodSandboxId:905e2e4bccb1efef5ab5b4a2e815bcb836bfb252394e291d861a65b3c9e1ebb9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730317629088186270,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6cdl4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95a0a01a-b0ea-4e0c-ac44-b7f7a486e4e0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a35c00abc76a906329e047101fdcfc6322255f2522936436a04a61c5a437350,PodSandboxId:bc2686b87bfb034ff55f75849a047c20ecc67ca72fa6a9036d7aca8a9531b108,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730317626731561870,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernet
es.pod.uid: b4c64bf2-4452-4ab5-b98b-1dd7d09f7593,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9,PodSandboxId:11e7842c569eb6ff13ab2aca09a0dade99b55777b8d28f7dcfbb39a287cfce51,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730317613382715861,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fxqqc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58db3fab-21e3-41b7-99
f9-46ba3081db97,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef,PodSandboxId:240ad66de29e4da82a84da6ceac85db98899cf65d7bfcb0cac535b180954104e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730317613373109772,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4637a77-26ab-4013-a705-08317c00dd
3b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889,PodSandboxId:7b5242e0110ef727856ada68f90533bcf0d36efb3c69afdb31057081dbdea6c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730317609751228269,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-960512,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b115e77c417744c5175c6c827124174,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c,PodSandboxId:407dfa6f98c77d3b75e1dcb534d2096dd76c536a19b35603677f12723621d91b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730317609733080230,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-960512,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1867e4cc33229323cfa5fd13d0f2a
3ac,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67,PodSandboxId:e8f26a4bb41da19d71bd88ca95b9a4400748dfd5aef733683df233b2a8ba77c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730317609720004398,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-960512,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26b3e1e18c6591f91c36d057e7108ea3,},Annotations:map[string]string{io.kube
rnetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4,PodSandboxId:7e4eee7aa27bc1d777ffc0cf5b853ef7c30c9bf1554f1a8b7cc75ed1bfa21bf8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730317609628548238,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-960512,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22d62ca0f1f2e6061b066d46b0ce266c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bb321376-adea-4404-a285-bba4956e148d name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:00:24 no-preload-960512 crio[721]: time="2024-10-30 20:00:24.685943679Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=00fd7b15-af40-410a-b276-04b8b56b7a03 name=/runtime.v1.RuntimeService/Version
	Oct 30 20:00:24 no-preload-960512 crio[721]: time="2024-10-30 20:00:24.686033726Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=00fd7b15-af40-410a-b276-04b8b56b7a03 name=/runtime.v1.RuntimeService/Version
	Oct 30 20:00:24 no-preload-960512 crio[721]: time="2024-10-30 20:00:24.687841200Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=96137b33-241d-4899-ba9c-25c1574ebecb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:00:24 no-preload-960512 crio[721]: time="2024-10-30 20:00:24.688184535Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318424688151119,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=96137b33-241d-4899-ba9c-25c1574ebecb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:00:24 no-preload-960512 crio[721]: time="2024-10-30 20:00:24.688881758Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d48ab718-68f9-4781-9e78-4a26a5570778 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:00:24 no-preload-960512 crio[721]: time="2024-10-30 20:00:24.688943225Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d48ab718-68f9-4781-9e78-4a26a5570778 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:00:24 no-preload-960512 crio[721]: time="2024-10-30 20:00:24.689590413Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4,PodSandboxId:240ad66de29e4da82a84da6ceac85db98899cf65d7bfcb0cac535b180954104e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730317644209067552,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4637a77-26ab-4013-a705-08317c00dd3b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd,PodSandboxId:905e2e4bccb1efef5ab5b4a2e815bcb836bfb252394e291d861a65b3c9e1ebb9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730317629088186270,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6cdl4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95a0a01a-b0ea-4e0c-ac44-b7f7a486e4e0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a35c00abc76a906329e047101fdcfc6322255f2522936436a04a61c5a437350,PodSandboxId:bc2686b87bfb034ff55f75849a047c20ecc67ca72fa6a9036d7aca8a9531b108,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730317626731561870,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernet
es.pod.uid: b4c64bf2-4452-4ab5-b98b-1dd7d09f7593,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9,PodSandboxId:11e7842c569eb6ff13ab2aca09a0dade99b55777b8d28f7dcfbb39a287cfce51,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730317613382715861,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fxqqc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58db3fab-21e3-41b7-99
f9-46ba3081db97,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef,PodSandboxId:240ad66de29e4da82a84da6ceac85db98899cf65d7bfcb0cac535b180954104e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730317613373109772,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4637a77-26ab-4013-a705-08317c00dd
3b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889,PodSandboxId:7b5242e0110ef727856ada68f90533bcf0d36efb3c69afdb31057081dbdea6c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730317609751228269,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-960512,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b115e77c417744c5175c6c827124174,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c,PodSandboxId:407dfa6f98c77d3b75e1dcb534d2096dd76c536a19b35603677f12723621d91b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730317609733080230,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-960512,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1867e4cc33229323cfa5fd13d0f2a
3ac,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67,PodSandboxId:e8f26a4bb41da19d71bd88ca95b9a4400748dfd5aef733683df233b2a8ba77c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730317609720004398,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-960512,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26b3e1e18c6591f91c36d057e7108ea3,},Annotations:map[string]string{io.kube
rnetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4,PodSandboxId:7e4eee7aa27bc1d777ffc0cf5b853ef7c30c9bf1554f1a8b7cc75ed1bfa21bf8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730317609628548238,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-960512,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22d62ca0f1f2e6061b066d46b0ce266c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d48ab718-68f9-4781-9e78-4a26a5570778 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:00:24 no-preload-960512 crio[721]: time="2024-10-30 20:00:24.729465593Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8b2cbf6b-9398-4923-94aa-7dfdc60df12b name=/runtime.v1.RuntimeService/Version
	Oct 30 20:00:24 no-preload-960512 crio[721]: time="2024-10-30 20:00:24.729534314Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8b2cbf6b-9398-4923-94aa-7dfdc60df12b name=/runtime.v1.RuntimeService/Version
	Oct 30 20:00:24 no-preload-960512 crio[721]: time="2024-10-30 20:00:24.730691535Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=aa225831-4a5d-4a9e-9600-33ad01a3c913 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:00:24 no-preload-960512 crio[721]: time="2024-10-30 20:00:24.731034252Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318424731012205,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aa225831-4a5d-4a9e-9600-33ad01a3c913 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:00:24 no-preload-960512 crio[721]: time="2024-10-30 20:00:24.731558230Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7dd6a982-257f-4211-9938-eff98bcf10bc name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:00:24 no-preload-960512 crio[721]: time="2024-10-30 20:00:24.731612316Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7dd6a982-257f-4211-9938-eff98bcf10bc name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:00:24 no-preload-960512 crio[721]: time="2024-10-30 20:00:24.731818942Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4,PodSandboxId:240ad66de29e4da82a84da6ceac85db98899cf65d7bfcb0cac535b180954104e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730317644209067552,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4637a77-26ab-4013-a705-08317c00dd3b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd,PodSandboxId:905e2e4bccb1efef5ab5b4a2e815bcb836bfb252394e291d861a65b3c9e1ebb9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730317629088186270,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6cdl4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95a0a01a-b0ea-4e0c-ac44-b7f7a486e4e0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a35c00abc76a906329e047101fdcfc6322255f2522936436a04a61c5a437350,PodSandboxId:bc2686b87bfb034ff55f75849a047c20ecc67ca72fa6a9036d7aca8a9531b108,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730317626731561870,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernet
es.pod.uid: b4c64bf2-4452-4ab5-b98b-1dd7d09f7593,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9,PodSandboxId:11e7842c569eb6ff13ab2aca09a0dade99b55777b8d28f7dcfbb39a287cfce51,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730317613382715861,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fxqqc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58db3fab-21e3-41b7-99
f9-46ba3081db97,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef,PodSandboxId:240ad66de29e4da82a84da6ceac85db98899cf65d7bfcb0cac535b180954104e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730317613373109772,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4637a77-26ab-4013-a705-08317c00dd
3b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889,PodSandboxId:7b5242e0110ef727856ada68f90533bcf0d36efb3c69afdb31057081dbdea6c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730317609751228269,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-960512,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b115e77c417744c5175c6c827124174,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c,PodSandboxId:407dfa6f98c77d3b75e1dcb534d2096dd76c536a19b35603677f12723621d91b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730317609733080230,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-960512,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1867e4cc33229323cfa5fd13d0f2a
3ac,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67,PodSandboxId:e8f26a4bb41da19d71bd88ca95b9a4400748dfd5aef733683df233b2a8ba77c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730317609720004398,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-960512,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26b3e1e18c6591f91c36d057e7108ea3,},Annotations:map[string]string{io.kube
rnetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4,PodSandboxId:7e4eee7aa27bc1d777ffc0cf5b853ef7c30c9bf1554f1a8b7cc75ed1bfa21bf8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730317609628548238,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-960512,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22d62ca0f1f2e6061b066d46b0ce266c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7dd6a982-257f-4211-9938-eff98bcf10bc name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:00:24 no-preload-960512 crio[721]: time="2024-10-30 20:00:24.764534023Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dcfb6b00-e686-4a0c-a29f-2ce32075eea8 name=/runtime.v1.RuntimeService/Version
	Oct 30 20:00:24 no-preload-960512 crio[721]: time="2024-10-30 20:00:24.764605384Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dcfb6b00-e686-4a0c-a29f-2ce32075eea8 name=/runtime.v1.RuntimeService/Version
	Oct 30 20:00:24 no-preload-960512 crio[721]: time="2024-10-30 20:00:24.765924428Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=df7a2e20-60ff-440b-867c-d93726e9ca68 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:00:24 no-preload-960512 crio[721]: time="2024-10-30 20:00:24.766592533Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318424766556420,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=df7a2e20-60ff-440b-867c-d93726e9ca68 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:00:24 no-preload-960512 crio[721]: time="2024-10-30 20:00:24.767230576Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=abd871ea-537f-47cf-8c87-83ef04fcd753 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:00:24 no-preload-960512 crio[721]: time="2024-10-30 20:00:24.767347046Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=abd871ea-537f-47cf-8c87-83ef04fcd753 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:00:24 no-preload-960512 crio[721]: time="2024-10-30 20:00:24.767552369Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4,PodSandboxId:240ad66de29e4da82a84da6ceac85db98899cf65d7bfcb0cac535b180954104e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730317644209067552,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4637a77-26ab-4013-a705-08317c00dd3b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd,PodSandboxId:905e2e4bccb1efef5ab5b4a2e815bcb836bfb252394e291d861a65b3c9e1ebb9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730317629088186270,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6cdl4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95a0a01a-b0ea-4e0c-ac44-b7f7a486e4e0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a35c00abc76a906329e047101fdcfc6322255f2522936436a04a61c5a437350,PodSandboxId:bc2686b87bfb034ff55f75849a047c20ecc67ca72fa6a9036d7aca8a9531b108,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730317626731561870,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernet
es.pod.uid: b4c64bf2-4452-4ab5-b98b-1dd7d09f7593,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9,PodSandboxId:11e7842c569eb6ff13ab2aca09a0dade99b55777b8d28f7dcfbb39a287cfce51,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730317613382715861,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fxqqc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58db3fab-21e3-41b7-99
f9-46ba3081db97,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef,PodSandboxId:240ad66de29e4da82a84da6ceac85db98899cf65d7bfcb0cac535b180954104e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730317613373109772,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4637a77-26ab-4013-a705-08317c00dd
3b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889,PodSandboxId:7b5242e0110ef727856ada68f90533bcf0d36efb3c69afdb31057081dbdea6c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730317609751228269,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-960512,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b115e77c417744c5175c6c827124174,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c,PodSandboxId:407dfa6f98c77d3b75e1dcb534d2096dd76c536a19b35603677f12723621d91b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730317609733080230,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-960512,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1867e4cc33229323cfa5fd13d0f2a
3ac,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67,PodSandboxId:e8f26a4bb41da19d71bd88ca95b9a4400748dfd5aef733683df233b2a8ba77c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730317609720004398,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-960512,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26b3e1e18c6591f91c36d057e7108ea3,},Annotations:map[string]string{io.kube
rnetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4,PodSandboxId:7e4eee7aa27bc1d777ffc0cf5b853ef7c30c9bf1554f1a8b7cc75ed1bfa21bf8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730317609628548238,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-960512,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22d62ca0f1f2e6061b066d46b0ce266c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=abd871ea-537f-47cf-8c87-83ef04fcd753 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	822348d485756       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Running             storage-provisioner       3                   240ad66de29e4       storage-provisioner
	1b9bfc1573170       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Running             coredns                   1                   905e2e4bccb1e       coredns-7c65d6cfc9-6cdl4
	0a35c00abc76a       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   bc2686b87bfb0       busybox
	0621c8e7bb77b       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      13 minutes ago      Running             kube-proxy                1                   11e7842c569eb       kube-proxy-fxqqc
	de9271f5ab996       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       2                   240ad66de29e4       storage-provisioner
	2873bfc8ed2a7       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      13 minutes ago      Running             kube-scheduler            1                   7b5242e0110ef       kube-scheduler-no-preload-960512
	cf0541a4e5844       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      13 minutes ago      Running             kube-controller-manager   1                   407dfa6f98c77       kube-controller-manager-no-preload-960512
	ace7f40d51794       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   e8f26a4bb41da       etcd-no-preload-960512
	990c5503542eb       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      13 minutes ago      Running             kube-apiserver            1                   7e4eee7aa27bc       kube-apiserver-no-preload-960512
	
	
	==> coredns [1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:51944 - 62700 "HINFO IN 7475402381862816469.7922778664981946274. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009720795s
	
	
	==> describe nodes <==
	Name:               no-preload-960512
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-960512
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0
	                    minikube.k8s.io/name=no-preload-960512
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_30T19_37_00_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Oct 2024 19:36:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-960512
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Oct 2024 20:00:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Oct 2024 19:57:35 +0000   Wed, 30 Oct 2024 19:36:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Oct 2024 19:57:35 +0000   Wed, 30 Oct 2024 19:36:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Oct 2024 19:57:35 +0000   Wed, 30 Oct 2024 19:36:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Oct 2024 19:57:35 +0000   Wed, 30 Oct 2024 19:47:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.132
	  Hostname:    no-preload-960512
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fe7534de72464b218fe452cd800b546e
	  System UUID:                fe7534de-7246-4b21-8fe4-52cd800b546e
	  Boot ID:                    d13e56f1-b6ef-459e-b3be-c1a3c1051072
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-7c65d6cfc9-6cdl4                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	  kube-system                 etcd-no-preload-960512                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kube-apiserver-no-preload-960512             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-no-preload-960512    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-fxqqc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-no-preload-960512             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 metrics-server-6867b74b74-72bb5              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     23m                kubelet          Node no-preload-960512 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  23m                kubelet          Node no-preload-960512 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m                kubelet          Node no-preload-960512 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 23m                kubelet          Starting kubelet.
	  Normal  NodeReady                23m                kubelet          Node no-preload-960512 status is now: NodeReady
	  Normal  RegisteredNode           23m                node-controller  Node no-preload-960512 event: Registered Node no-preload-960512 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-960512 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-960512 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-960512 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-960512 event: Registered Node no-preload-960512 in Controller
	
	
	==> dmesg <==
	[Oct30 19:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054950] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.048523] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.163341] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.696109] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.607546] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.883497] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.066386] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062674] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.177760] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.132878] systemd-fstab-generator[679]: Ignoring "noauto" option for root device
	[  +0.288241] systemd-fstab-generator[712]: Ignoring "noauto" option for root device
	[ +16.140969] systemd-fstab-generator[1320]: Ignoring "noauto" option for root device
	[  +0.061533] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.461584] systemd-fstab-generator[1445]: Ignoring "noauto" option for root device
	[  +4.562829] kauditd_printk_skb: 94 callbacks suppressed
	[  +4.437269] systemd-fstab-generator[2069]: Ignoring "noauto" option for root device
	[Oct30 19:47] kauditd_printk_skb: 58 callbacks suppressed
	[  +5.807478] kauditd_printk_skb: 18 callbacks suppressed
	[ +17.476641] kauditd_printk_skb: 18 callbacks suppressed
	
	
	==> etcd [ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67] <==
	{"level":"info","ts":"2024-10-30T19:46:50.470593Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-30T19:46:50.479787Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-30T19:46:50.482580Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"a7da7c7e26779cb7","initial-advertise-peer-urls":["https://192.168.72.132:2380"],"listen-peer-urls":["https://192.168.72.132:2380"],"advertise-client-urls":["https://192.168.72.132:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.132:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-30T19:46:50.484313Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-30T19:46:50.484598Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.72.132:2380"}
	{"level":"info","ts":"2024-10-30T19:46:50.484657Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.72.132:2380"}
	{"level":"info","ts":"2024-10-30T19:46:51.368158Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a7da7c7e26779cb7 is starting a new election at term 2"}
	{"level":"info","ts":"2024-10-30T19:46:51.369365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a7da7c7e26779cb7 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-30T19:46:51.369438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a7da7c7e26779cb7 received MsgPreVoteResp from a7da7c7e26779cb7 at term 2"}
	{"level":"info","ts":"2024-10-30T19:46:51.369457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a7da7c7e26779cb7 became candidate at term 3"}
	{"level":"info","ts":"2024-10-30T19:46:51.369466Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a7da7c7e26779cb7 received MsgVoteResp from a7da7c7e26779cb7 at term 3"}
	{"level":"info","ts":"2024-10-30T19:46:51.369478Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a7da7c7e26779cb7 became leader at term 3"}
	{"level":"info","ts":"2024-10-30T19:46:51.369516Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a7da7c7e26779cb7 elected leader a7da7c7e26779cb7 at term 3"}
	{"level":"info","ts":"2024-10-30T19:46:51.381499Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"a7da7c7e26779cb7","local-member-attributes":"{Name:no-preload-960512 ClientURLs:[https://192.168.72.132:2379]}","request-path":"/0/members/a7da7c7e26779cb7/attributes","cluster-id":"146bd9643c3d2907","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-30T19:46:51.381703Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-30T19:46:51.382205Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-30T19:46:51.383458Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-30T19:46:51.384685Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.132:2379"}
	{"level":"info","ts":"2024-10-30T19:46:51.385620Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-30T19:46:51.386850Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-30T19:46:51.386956Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-30T19:46:51.387000Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-30T19:56:51.417802Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":849}
	{"level":"info","ts":"2024-10-30T19:56:51.429602Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":849,"took":"10.913951ms","hash":1616802537,"current-db-size-bytes":2805760,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":2805760,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2024-10-30T19:56:51.429726Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1616802537,"revision":849,"compact-revision":-1}
	
	
	==> kernel <==
	 20:00:25 up 14 min,  0 users,  load average: 0.12, 0.22, 0.18
	Linux no-preload-960512 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4] <==
	E1030 19:56:53.832732       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1030 19:56:53.832647       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1030 19:56:53.833883       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1030 19:56:53.833893       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1030 19:57:53.835299       1 handler_proxy.go:99] no RequestInfo found in the context
	E1030 19:57:53.835607       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1030 19:57:53.835456       1 handler_proxy.go:99] no RequestInfo found in the context
	E1030 19:57:53.835702       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1030 19:57:53.836916       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1030 19:57:53.836973       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1030 19:59:53.837932       1 handler_proxy.go:99] no RequestInfo found in the context
	E1030 19:59:53.838370       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1030 19:59:53.838533       1 handler_proxy.go:99] no RequestInfo found in the context
	E1030 19:59:53.838705       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1030 19:59:53.839569       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1030 19:59:53.840644       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c] <==
	E1030 19:54:58.388418       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 19:54:58.884187       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 19:55:28.395742       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 19:55:28.892667       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 19:55:58.401846       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 19:55:58.899531       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 19:56:28.407550       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 19:56:28.908609       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 19:56:58.414700       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 19:56:58.917704       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 19:57:28.420465       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 19:57:28.925546       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1030 19:57:35.799608       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-960512"
	E1030 19:57:58.427391       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 19:57:58.932509       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1030 19:58:08.047405       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="365.066µs"
	I1030 19:58:20.048347       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="230.123µs"
	E1030 19:58:28.434116       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 19:58:28.940659       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 19:58:58.440380       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 19:58:58.948083       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 19:59:28.447509       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 19:59:28.955569       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 19:59:58.453944       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 19:59:58.963129       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1030 19:46:53.633943       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1030 19:46:53.654945       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.132"]
	E1030 19:46:53.655073       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1030 19:46:53.698928       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1030 19:46:53.699162       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1030 19:46:53.699320       1 server_linux.go:169] "Using iptables Proxier"
	I1030 19:46:53.704740       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1030 19:46:53.705370       1 server.go:483] "Version info" version="v1.31.2"
	I1030 19:46:53.705466       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1030 19:46:53.710776       1 config.go:199] "Starting service config controller"
	I1030 19:46:53.712231       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1030 19:46:53.712650       1 config.go:105] "Starting endpoint slice config controller"
	I1030 19:46:53.714320       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1030 19:46:53.716750       1 config.go:328] "Starting node config controller"
	I1030 19:46:53.716802       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1030 19:46:53.812714       1 shared_informer.go:320] Caches are synced for service config
	I1030 19:46:53.815063       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1030 19:46:53.817200       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889] <==
	I1030 19:46:50.740498       1 serving.go:386] Generated self-signed cert in-memory
	W1030 19:46:52.807783       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1030 19:46:52.807825       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1030 19:46:52.807835       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1030 19:46:52.807842       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1030 19:46:52.848299       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1030 19:46:52.848346       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1030 19:46:52.853790       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1030 19:46:52.853829       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1030 19:46:52.858413       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1030 19:46:52.858482       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1030 19:46:52.961345       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 30 19:59:10 no-preload-960512 kubelet[1452]: E1030 19:59:10.033103    1452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-72bb5" podUID="7734d879-b974-42fd-9610-7e81ee6cbc13"
	Oct 30 19:59:19 no-preload-960512 kubelet[1452]: E1030 19:59:19.230571    1452 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318359229844820,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 19:59:19 no-preload-960512 kubelet[1452]: E1030 19:59:19.231099    1452 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318359229844820,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 19:59:23 no-preload-960512 kubelet[1452]: E1030 19:59:23.033683    1452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-72bb5" podUID="7734d879-b974-42fd-9610-7e81ee6cbc13"
	Oct 30 19:59:29 no-preload-960512 kubelet[1452]: E1030 19:59:29.233460    1452 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318369232806497,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 19:59:29 no-preload-960512 kubelet[1452]: E1030 19:59:29.233769    1452 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318369232806497,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 19:59:34 no-preload-960512 kubelet[1452]: E1030 19:59:34.034411    1452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-72bb5" podUID="7734d879-b974-42fd-9610-7e81ee6cbc13"
	Oct 30 19:59:39 no-preload-960512 kubelet[1452]: E1030 19:59:39.236034    1452 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318379235630199,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 19:59:39 no-preload-960512 kubelet[1452]: E1030 19:59:39.236458    1452 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318379235630199,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 19:59:46 no-preload-960512 kubelet[1452]: E1030 19:59:46.033061    1452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-72bb5" podUID="7734d879-b974-42fd-9610-7e81ee6cbc13"
	Oct 30 19:59:49 no-preload-960512 kubelet[1452]: E1030 19:59:49.054823    1452 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 30 19:59:49 no-preload-960512 kubelet[1452]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 30 19:59:49 no-preload-960512 kubelet[1452]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 30 19:59:49 no-preload-960512 kubelet[1452]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 30 19:59:49 no-preload-960512 kubelet[1452]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 30 19:59:49 no-preload-960512 kubelet[1452]: E1030 19:59:49.238656    1452 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318389238375962,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 19:59:49 no-preload-960512 kubelet[1452]: E1030 19:59:49.238709    1452 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318389238375962,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 19:59:59 no-preload-960512 kubelet[1452]: E1030 19:59:59.035111    1452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-72bb5" podUID="7734d879-b974-42fd-9610-7e81ee6cbc13"
	Oct 30 19:59:59 no-preload-960512 kubelet[1452]: E1030 19:59:59.240088    1452 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318399239768370,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 19:59:59 no-preload-960512 kubelet[1452]: E1030 19:59:59.240195    1452 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318399239768370,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 20:00:09 no-preload-960512 kubelet[1452]: E1030 20:00:09.241970    1452 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318409241645191,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 20:00:09 no-preload-960512 kubelet[1452]: E1030 20:00:09.242012    1452 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318409241645191,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 20:00:13 no-preload-960512 kubelet[1452]: E1030 20:00:13.033664    1452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-72bb5" podUID="7734d879-b974-42fd-9610-7e81ee6cbc13"
	Oct 30 20:00:19 no-preload-960512 kubelet[1452]: E1030 20:00:19.243683    1452 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318419243206125,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 20:00:19 no-preload-960512 kubelet[1452]: E1030 20:00:19.244097    1452 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318419243206125,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4] <==
	I1030 19:47:24.299554       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1030 19:47:24.309844       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1030 19:47:24.310013       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1030 19:47:41.710140       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1030 19:47:41.711163       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-960512_570a8d87-8418-49a7-89ff-429e5c4b3784!
	I1030 19:47:41.712008       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"95c2a27c-9451-419b-a29d-15ba5e8662e0", APIVersion:"v1", ResourceVersion:"629", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-960512_570a8d87-8418-49a7-89ff-429e5c4b3784 became leader
	I1030 19:47:41.812167       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-960512_570a8d87-8418-49a7-89ff-429e5c4b3784!
	
	
	==> storage-provisioner [de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef] <==
	I1030 19:46:53.475752       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1030 19:47:23.481594       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-960512 -n no-preload-960512
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-960512 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-72bb5
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-960512 describe pod metrics-server-6867b74b74-72bb5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-960512 describe pod metrics-server-6867b74b74-72bb5: exit status 1 (64.482393ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-72bb5" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-960512 describe pod metrics-server-6867b74b74-72bb5: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.27s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
E1030 19:54:45.773891  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/custom-flannel-534248/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
(the warning above was logged 13 times in succession)
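This repeated warning is the test's pod-list poll failing because the cluster's API server at 192.168.50.250:8443 is not accepting connections while the old-k8s-version node is stopped and restarting. The sketch below is not the helpers_test.go implementation; it is a minimal client-go approximation of the same request (the kubeconfig path and error handling are illustrative), shown only to make the failing call concrete.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path; the real test uses the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/old-k8s-version-kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same request as in the warning:
	// GET /api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app=kubernetes-dashboard
	pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
	if err != nil {
		// While the API server is down this fails with
		// "dial tcp 192.168.50.250:8443: connect: connection refused";
		// the test helper logs it as a WARNING and retries.
		fmt.Println("pod list failed:", err)
		return
	}
	fmt.Printf("found %d kubernetes-dashboard pods\n", len(pods.Items))
}
```

Once the API server is reachable again, the same call returns the dashboard pod list and the helper stops emitting the warning.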
E1030 19:55:15.579622  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/calico-534248/client.crt: no such file or directory" logger="UnhandledError"
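The interleaved cert_rotation errors are a separate matter: they appear to come from client-go's client-certificate reload logic still referencing certificates of profiles deleted earlier in the run (calico-534248, addons-819803, and so on), so each periodic reload fails with "no such file or directory"; they are background noise relative to this test. Below is a minimal illustration of that failure mode, assuming the profile's client.crt/client.key pair has been removed (the key path is an assumption, and this is not minikube or client-go code).

```go
package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	// certFile is taken from the error above; the matching client.key path is assumed.
	certFile := "/home/jenkins/minikube-integration/19883-381834/.minikube/profiles/calico-534248/client.crt"
	keyFile := "/home/jenkins/minikube-integration/19883-381834/.minikube/profiles/calico-534248/client.key"

	// Re-reading a key pair whose profile directory was deleted fails the same
	// way the rotation logic reports: "open ...: no such file or directory".
	if _, err := tls.LoadX509KeyPair(certFile, keyFile); err != nil {
		fmt.Println("certificate reload failed:", err)
	}
}
```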
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
(the warning above was logged 4 times in succession)
E1030 19:55:18.708754  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
E1030 19:55:21.856093  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/enable-default-cni-534248/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
(the warning above was logged 22 times in succession)
E1030 19:55:43.958813  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/flannel-534248/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
(the warning above was logged 25 times in succession)
E1030 19:56:08.837603  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/custom-flannel-534248/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
E1030 19:56:11.429422  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/bridge-534248/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
(the warning above was logged 34 times in succession)
E1030 19:56:44.920591  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/enable-default-cni-534248/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
(the warning above was logged 12 times in succession)
E1030 19:56:57.604303  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/auto-534248/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
(the warning above was logged 10 times in succession)
E1030 19:57:07.021226  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/flannel-534248/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
(the warning above was logged 27 times in succession)
E1030 19:57:33.737616  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kindnet-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:57:34.495768  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/bridge-534248/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
(the warning above was logged 21 times in succession)
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
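For context: while the old-k8s-version profile's apiserver at 192.168.50.250:8443 is unreachable, the test helper keeps polling the kubernetes-dashboard namespace for pods matching k8s-app=kubernetes-dashboard, and every attempt fails with the connection-refused error above until the apiserver comes back. The sketch below is illustrative only, not the actual helpers_test.go code; the kubeconfig path is an assumed placeholder. It shows the kind of client-go label-selector list such a poll issues and where the error surfaces.

```go
// Illustrative sketch only: a poll loop that lists dashboard pods by label
// selector, logging a warning and retrying while the apiserver is down.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; the real test uses the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll until the dashboard pods are listable.
	for {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(
			context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"},
		)
		if err != nil {
			// With the apiserver stopped, this is where
			// "dial tcp 192.168.50.250:8443: connect: connection refused" shows up.
			fmt.Printf("WARNING: pod list returned: %v\n", err)
			time.Sleep(3 * time.Second)
			continue
		}
		fmt.Printf("found %d dashboard pods\n", len(pods.Items))
		return
	}
}
```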
E1030 19:58:17.243425  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/functional-683899/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: [the same connection-refused warning repeated 5 times]
E1030 19:58:21.792032  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: [the same connection-refused warning repeated 30 times]
E1030 19:58:52.514414  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/calico-534248/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: [the same connection-refused warning repeated 54 times]
E1030 19:59:45.773785  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/custom-flannel-534248/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: [the same connection-refused warning repeated 33 times]
E1030 20:00:18.709687  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: [the same connection-refused warning repeated 3 times]
E1030 20:00:21.856479  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/enable-default-cni-534248/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
E1030 20:00:43.959648  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/flannel-534248/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
E1030 20:01:11.429212  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/bridge-534248/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
E1030 20:01:57.604667  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/auto-534248/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
E1030 20:02:33.736680  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kindnet-534248/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
[the identical "connection refused" pod-list warning above was logged 26 more times while the check kept retrying against 192.168.50.250:8443]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-516975 -n old-k8s-version-516975
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-516975 -n old-k8s-version-516975: exit status 2 (238.139135ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-516975" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
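As a manual follow-up (not part of the test itself), the failed dashboard check can be reproduced against this profile. This is a minimal sketch, assuming the kubeconfig context minikube created for the profile (old-k8s-version-516975) is still present and that the API server address from the warnings above (192.168.50.250:8443) is unchanged:

    # list the dashboard pods with the same label selector the helper polls
    kubectl --context old-k8s-version-516975 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
    # probe the API server endpoint the warnings were failing to reach
    curl -k https://192.168.50.250:8443/healthz

Given the repeated "connection refused" warnings, both commands would be expected to fail until the API server on that address comes back up.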
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-516975 -n old-k8s-version-516975
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-516975 -n old-k8s-version-516975: exit status 2 (233.051308ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
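Note that the two status probes disagree: {{.Host}} reports Running while {{.APIServer}} reports Stopped, which is consistent with a VM that is up but whose control plane never came back after the stop/start. A single call showing both fields at once (a sketch reusing the same --format templates the test already invokes) would be:

    out/minikube-linux-amd64 status -p old-k8s-version-516975 --format '{{.Host}} {{.APIServer}}'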
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-516975 logs -n 25
E1030 20:03:17.243191  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/functional-683899/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-516975 logs -n 25: (1.574590185s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-534248 sudo cat                              | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-534248 sudo                                  | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-534248 sudo                                  | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-534248 sudo                                  | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-534248 sudo find                             | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-534248 sudo crio                             | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-534248                                       | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	| delete  | -p                                                     | disable-driver-mounts-113740 | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | disable-driver-mounts-113740                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-768989 | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:37 UTC |
	|         | default-k8s-diff-port-768989                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-960512             | no-preload-960512            | jenkins | v1.34.0 | 30 Oct 24 19:37 UTC | 30 Oct 24 19:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-960512                                   | no-preload-960512            | jenkins | v1.34.0 | 30 Oct 24 19:37 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-768989  | default-k8s-diff-port-768989 | jenkins | v1.34.0 | 30 Oct 24 19:38 UTC | 30 Oct 24 19:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-768989 | jenkins | v1.34.0 | 30 Oct 24 19:38 UTC |                     |
	|         | default-k8s-diff-port-768989                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-042402            | embed-certs-042402           | jenkins | v1.34.0 | 30 Oct 24 19:38 UTC | 30 Oct 24 19:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-042402                                  | embed-certs-042402           | jenkins | v1.34.0 | 30 Oct 24 19:38 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-516975        | old-k8s-version-516975       | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-960512                  | no-preload-960512            | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-768989       | default-k8s-diff-port-768989 | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-960512                                   | no-preload-960512            | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC | 30 Oct 24 19:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-042402                 | embed-certs-042402           | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-768989 | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC | 30 Oct 24 19:50 UTC |
	|         | default-k8s-diff-port-768989                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-042402                                  | embed-certs-042402           | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC | 30 Oct 24 19:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-516975                              | old-k8s-version-516975       | jenkins | v1.34.0 | 30 Oct 24 19:42 UTC | 30 Oct 24 19:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-516975             | old-k8s-version-516975       | jenkins | v1.34.0 | 30 Oct 24 19:42 UTC | 30 Oct 24 19:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-516975                              | old-k8s-version-516975       | jenkins | v1.34.0 | 30 Oct 24 19:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/30 19:42:11
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1030 19:42:11.799298  447486 out.go:345] Setting OutFile to fd 1 ...
	I1030 19:42:11.799434  447486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 19:42:11.799444  447486 out.go:358] Setting ErrFile to fd 2...
	I1030 19:42:11.799448  447486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 19:42:11.799628  447486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19883-381834/.minikube/bin
	I1030 19:42:11.800193  447486 out.go:352] Setting JSON to false
	I1030 19:42:11.801205  447486 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":12275,"bootTime":1730305057,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1030 19:42:11.801318  447486 start.go:139] virtualization: kvm guest
	I1030 19:42:11.803677  447486 out.go:177] * [old-k8s-version-516975] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1030 19:42:11.805274  447486 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 19:42:11.805300  447486 notify.go:220] Checking for updates...
	I1030 19:42:11.808043  447486 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 19:42:11.809440  447486 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 19:42:11.810604  447486 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 19:42:11.811774  447486 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1030 19:42:11.812958  447486 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 19:42:11.814552  447486 config.go:182] Loaded profile config "old-k8s-version-516975": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1030 19:42:11.814994  447486 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:42:11.815077  447486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:42:11.830315  447486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42249
	I1030 19:42:11.830795  447486 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:42:11.831345  447486 main.go:141] libmachine: Using API Version  1
	I1030 19:42:11.831365  447486 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:42:11.831692  447486 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:42:11.831869  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:42:11.833718  447486 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1030 19:42:11.835019  447486 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 19:42:11.835371  447486 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:42:11.835416  447486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:42:11.850097  447486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33199
	I1030 19:42:11.850532  447486 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:42:11.850964  447486 main.go:141] libmachine: Using API Version  1
	I1030 19:42:11.850978  447486 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:42:11.851321  447486 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:42:11.851541  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:42:11.886920  447486 out.go:177] * Using the kvm2 driver based on existing profile
	I1030 19:42:11.888376  447486 start.go:297] selected driver: kvm2
	I1030 19:42:11.888392  447486 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-516975 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-516975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:42:11.888538  447486 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 19:42:11.889472  447486 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 19:42:11.889560  447486 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19883-381834/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1030 19:42:11.904007  447486 install.go:137] /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1030 19:42:11.904405  447486 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 19:42:11.904443  447486 cni.go:84] Creating CNI manager for ""
	I1030 19:42:11.904494  447486 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:42:11.904549  447486 start.go:340] cluster config:
	{Name:old-k8s-version-516975 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-516975 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:42:11.904661  447486 iso.go:125] acquiring lock: {Name:mk4a5115b605a7a5e9069193daa664d721189792 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 19:42:11.907302  447486 out.go:177] * Starting "old-k8s-version-516975" primary control-plane node in "old-k8s-version-516975" cluster
	I1030 19:42:10.622770  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:11.908430  447486 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1030 19:42:11.908474  447486 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1030 19:42:11.908485  447486 cache.go:56] Caching tarball of preloaded images
	I1030 19:42:11.908564  447486 preload.go:172] Found /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1030 19:42:11.908575  447486 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1030 19:42:11.908666  447486 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/config.json ...
	I1030 19:42:11.908832  447486 start.go:360] acquireMachinesLock for old-k8s-version-516975: {Name:mk35e25f53fa8cfadb39ca0ecdccfc2b3fbe845b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 19:42:16.702732  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:19.774825  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:25.854777  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:28.926846  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:35.006934  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:38.078752  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:44.158848  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:47.230843  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:53.310763  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:56.382772  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:02.462818  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:05.534754  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:11.614801  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:14.686762  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:20.766767  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:23.838853  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:29.918782  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:32.990752  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:39.070771  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:42.142716  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:48.222814  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:51.294775  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:57.374780  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:00.446825  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:06.526810  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:09.598813  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:15.678770  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:18.750751  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:24.830814  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:27.902810  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:33.982759  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:37.054791  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:43.134706  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:46.206802  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:52.286830  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:55.358809  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:45:01.438753  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:45:04.510854  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:45:07.515699  446887 start.go:364] duration metric: took 4m29.000646378s to acquireMachinesLock for "default-k8s-diff-port-768989"
	I1030 19:45:07.515764  446887 start.go:96] Skipping create...Using existing machine configuration
	I1030 19:45:07.515773  446887 fix.go:54] fixHost starting: 
	I1030 19:45:07.516191  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:07.516238  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:07.532374  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35177
	I1030 19:45:07.532907  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:07.533433  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:07.533459  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:07.533790  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:07.534016  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:07.534220  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetState
	I1030 19:45:07.535802  446887 fix.go:112] recreateIfNeeded on default-k8s-diff-port-768989: state=Stopped err=<nil>
	I1030 19:45:07.535842  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	W1030 19:45:07.536016  446887 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 19:45:07.537809  446887 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-768989" ...
	I1030 19:45:07.539184  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Start
	I1030 19:45:07.539361  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Ensuring networks are active...
	I1030 19:45:07.540025  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Ensuring network default is active
	I1030 19:45:07.540408  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Ensuring network mk-default-k8s-diff-port-768989 is active
	I1030 19:45:07.540867  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Getting domain xml...
	I1030 19:45:07.541489  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Creating domain...
	I1030 19:45:07.512810  446736 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 19:45:07.512848  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetMachineName
	I1030 19:45:07.513191  446736 buildroot.go:166] provisioning hostname "no-preload-960512"
	I1030 19:45:07.513223  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetMachineName
	I1030 19:45:07.513458  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:45:07.515538  446736 machine.go:96] duration metric: took 4m37.420773403s to provisionDockerMachine
	I1030 19:45:07.515594  446736 fix.go:56] duration metric: took 4m37.443968478s for fixHost
	I1030 19:45:07.515600  446736 start.go:83] releasing machines lock for "no-preload-960512", held for 4m37.443992524s
	W1030 19:45:07.515625  446736 start.go:714] error starting host: provision: host is not running
	W1030 19:45:07.515753  446736 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1030 19:45:07.515763  446736 start.go:729] Will try again in 5 seconds ...
	I1030 19:45:08.756310  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting to get IP...
	I1030 19:45:08.757242  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:08.757624  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:08.757747  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:08.757629  448092 retry.go:31] will retry after 202.103853ms: waiting for machine to come up
	I1030 19:45:08.961147  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:08.961660  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:08.961685  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:08.961606  448092 retry.go:31] will retry after 243.456761ms: waiting for machine to come up
	I1030 19:45:09.207134  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:09.207539  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:09.207582  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:09.207493  448092 retry.go:31] will retry after 375.017051ms: waiting for machine to come up
	I1030 19:45:09.584058  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:09.584428  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:09.584462  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:09.584373  448092 retry.go:31] will retry after 552.476692ms: waiting for machine to come up
	I1030 19:45:10.137989  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:10.138421  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:10.138449  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:10.138358  448092 retry.go:31] will retry after 560.865483ms: waiting for machine to come up
	I1030 19:45:10.700603  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:10.700968  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:10.700996  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:10.700920  448092 retry.go:31] will retry after 680.400693ms: waiting for machine to come up
	I1030 19:45:11.382861  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:11.383336  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:11.383362  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:11.383274  448092 retry.go:31] will retry after 787.136113ms: waiting for machine to come up
	I1030 19:45:12.171550  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:12.171910  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:12.171938  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:12.171853  448092 retry.go:31] will retry after 1.176474969s: waiting for machine to come up
	I1030 19:45:13.349617  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:13.350080  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:13.350114  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:13.350042  448092 retry.go:31] will retry after 1.211573437s: waiting for machine to come up
	I1030 19:45:12.517265  446736 start.go:360] acquireMachinesLock for no-preload-960512: {Name:mk35e25f53fa8cfadb39ca0ecdccfc2b3fbe845b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 19:45:14.563397  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:14.563782  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:14.563805  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:14.563749  448092 retry.go:31] will retry after 1.625938777s: waiting for machine to come up
	I1030 19:45:16.191798  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:16.192226  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:16.192255  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:16.192188  448092 retry.go:31] will retry after 2.442949682s: waiting for machine to come up
	I1030 19:45:18.636342  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:18.636768  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:18.636812  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:18.636748  448092 retry.go:31] will retry after 2.48415211s: waiting for machine to come up
	I1030 19:45:21.124407  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:21.124892  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:21.124919  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:21.124843  448092 retry.go:31] will retry after 3.392637796s: waiting for machine to come up
	I1030 19:45:25.815539  446965 start.go:364] duration metric: took 4m42.694254153s to acquireMachinesLock for "embed-certs-042402"
	I1030 19:45:25.815623  446965 start.go:96] Skipping create...Using existing machine configuration
	I1030 19:45:25.815635  446965 fix.go:54] fixHost starting: 
	I1030 19:45:25.816068  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:25.816232  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:25.833218  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37241
	I1030 19:45:25.833610  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:25.834159  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:45:25.834191  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:25.834567  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:25.834777  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:45:25.834920  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetState
	I1030 19:45:25.836507  446965 fix.go:112] recreateIfNeeded on embed-certs-042402: state=Stopped err=<nil>
	I1030 19:45:25.836532  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	W1030 19:45:25.836711  446965 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 19:45:25.839078  446965 out.go:177] * Restarting existing kvm2 VM for "embed-certs-042402" ...
	I1030 19:45:24.519725  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.520072  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Found IP for machine: 192.168.39.92
	I1030 19:45:24.520091  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Reserving static IP address...
	I1030 19:45:24.520113  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has current primary IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.520507  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-768989", mac: "52:54:00:98:b1:55", ip: "192.168.39.92"} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:24.520521  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Reserved static IP address: 192.168.39.92
	I1030 19:45:24.520535  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | skip adding static IP to network mk-default-k8s-diff-port-768989 - found existing host DHCP lease matching {name: "default-k8s-diff-port-768989", mac: "52:54:00:98:b1:55", ip: "192.168.39.92"}
	I1030 19:45:24.520545  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for SSH to be available...
	I1030 19:45:24.520560  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | Getting to WaitForSSH function...
	I1030 19:45:24.522776  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.523095  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:24.523127  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.523209  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | Using SSH client type: external
	I1030 19:45:24.523229  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa (-rw-------)
	I1030 19:45:24.523262  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.92 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 19:45:24.523283  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | About to run SSH command:
	I1030 19:45:24.523298  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | exit 0
	I1030 19:45:24.646297  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | SSH cmd err, output: <nil>: 
	I1030 19:45:24.646826  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetConfigRaw
	I1030 19:45:24.647589  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetIP
	I1030 19:45:24.650093  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.650532  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:24.650564  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.650790  446887 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/config.json ...
	I1030 19:45:24.650984  446887 machine.go:93] provisionDockerMachine start ...
	I1030 19:45:24.651005  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:24.651232  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:24.653396  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.653751  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:24.653781  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.653889  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:24.654084  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:24.654263  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:24.654449  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:24.654677  446887 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:24.654922  446887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1030 19:45:24.654935  446887 main.go:141] libmachine: About to run SSH command:
	hostname
	I1030 19:45:24.762586  446887 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1030 19:45:24.762621  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetMachineName
	I1030 19:45:24.762898  446887 buildroot.go:166] provisioning hostname "default-k8s-diff-port-768989"
	I1030 19:45:24.762936  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetMachineName
	I1030 19:45:24.763250  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:24.765937  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.766265  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:24.766289  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.766398  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:24.766599  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:24.766762  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:24.766920  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:24.767087  446887 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:24.767257  446887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1030 19:45:24.767269  446887 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-768989 && echo "default-k8s-diff-port-768989" | sudo tee /etc/hostname
	I1030 19:45:24.888742  446887 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-768989
	
	I1030 19:45:24.888771  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:24.891326  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.891638  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:24.891691  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.891804  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:24.892018  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:24.892154  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:24.892281  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:24.892498  446887 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:24.892692  446887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1030 19:45:24.892716  446887 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-768989' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-768989/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-768989' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 19:45:25.012173  446887 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 19:45:25.012214  446887 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 19:45:25.012240  446887 buildroot.go:174] setting up certificates
	I1030 19:45:25.012250  446887 provision.go:84] configureAuth start
	I1030 19:45:25.012280  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetMachineName
	I1030 19:45:25.012598  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetIP
	I1030 19:45:25.015106  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.015430  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.015458  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.015629  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:25.017810  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.018099  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.018136  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.018230  446887 provision.go:143] copyHostCerts
	I1030 19:45:25.018322  446887 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem, removing ...
	I1030 19:45:25.018334  446887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 19:45:25.018401  446887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 19:45:25.018553  446887 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem, removing ...
	I1030 19:45:25.018566  446887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 19:45:25.018634  446887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 19:45:25.018716  446887 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem, removing ...
	I1030 19:45:25.018724  446887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 19:45:25.018748  446887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 19:45:25.018798  446887 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-768989 san=[127.0.0.1 192.168.39.92 default-k8s-diff-port-768989 localhost minikube]
	I1030 19:45:25.188186  446887 provision.go:177] copyRemoteCerts
	I1030 19:45:25.188246  446887 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 19:45:25.188285  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:25.190995  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.191320  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.191344  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.191525  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:25.191718  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.191875  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:25.191991  446887 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa Username:docker}
	I1030 19:45:25.277273  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1030 19:45:25.300302  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1030 19:45:25.322919  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 19:45:25.347214  446887 provision.go:87] duration metric: took 334.947897ms to configureAuth
	I1030 19:45:25.347246  446887 buildroot.go:189] setting minikube options for container-runtime
	I1030 19:45:25.347432  446887 config.go:182] Loaded profile config "default-k8s-diff-port-768989": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:45:25.347510  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:25.349988  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.350294  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.350324  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.350500  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:25.350704  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.350836  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.351015  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:25.351210  446887 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:25.351421  446887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1030 19:45:25.351436  446887 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 19:45:25.576481  446887 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 19:45:25.576509  446887 machine.go:96] duration metric: took 925.509257ms to provisionDockerMachine
	I1030 19:45:25.576525  446887 start.go:293] postStartSetup for "default-k8s-diff-port-768989" (driver="kvm2")
	I1030 19:45:25.576562  446887 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 19:45:25.576589  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:25.576923  446887 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 19:45:25.576951  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:25.579498  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.579825  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.579841  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.579980  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:25.580151  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.580320  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:25.580453  446887 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa Username:docker}
	I1030 19:45:25.665032  446887 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 19:45:25.669402  446887 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 19:45:25.669430  446887 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 19:45:25.669500  446887 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 19:45:25.669573  446887 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 19:45:25.669665  446887 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 19:45:25.679070  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:45:25.703131  446887 start.go:296] duration metric: took 126.586543ms for postStartSetup
	I1030 19:45:25.703194  446887 fix.go:56] duration metric: took 18.187420989s for fixHost
	I1030 19:45:25.703217  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:25.705911  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.706365  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.706396  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.706609  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:25.706800  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.706944  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.707052  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:25.707188  446887 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:25.707428  446887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1030 19:45:25.707443  446887 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 19:45:25.815370  446887 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730317525.786848764
	
	I1030 19:45:25.815406  446887 fix.go:216] guest clock: 1730317525.786848764
	I1030 19:45:25.815414  446887 fix.go:229] Guest: 2024-10-30 19:45:25.786848764 +0000 UTC Remote: 2024-10-30 19:45:25.703198163 +0000 UTC m=+287.327380555 (delta=83.650601ms)
	I1030 19:45:25.815439  446887 fix.go:200] guest clock delta is within tolerance: 83.650601ms
	I1030 19:45:25.815445  446887 start.go:83] releasing machines lock for "default-k8s-diff-port-768989", held for 18.299702226s
	I1030 19:45:25.815467  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:25.815737  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetIP
	I1030 19:45:25.818508  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.818851  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.818889  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.818987  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:25.819477  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:25.819671  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:25.819808  446887 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 19:45:25.819862  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:25.819900  446887 ssh_runner.go:195] Run: cat /version.json
	I1030 19:45:25.819930  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:25.822372  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.822725  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.822754  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.822774  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.822887  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:25.823109  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.823145  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.823168  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.823330  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:25.823429  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:25.823506  446887 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa Username:docker}
	I1030 19:45:25.823605  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.823758  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:25.823880  446887 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa Username:docker}
	I1030 19:45:25.903488  446887 ssh_runner.go:195] Run: systemctl --version
	I1030 19:45:25.931046  446887 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 19:45:26.077178  446887 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 19:45:26.084282  446887 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 19:45:26.084358  446887 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 19:45:26.100869  446887 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 19:45:26.100893  446887 start.go:495] detecting cgroup driver to use...
	I1030 19:45:26.100984  446887 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 19:45:26.117006  446887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 19:45:26.130102  446887 docker.go:217] disabling cri-docker service (if available) ...
	I1030 19:45:26.130184  446887 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 19:45:26.148540  446887 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 19:45:26.163003  446887 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 19:45:26.286433  446887 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 19:45:26.444862  446887 docker.go:233] disabling docker service ...
	I1030 19:45:26.444931  446887 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 19:45:26.460606  446887 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 19:45:26.477159  446887 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 19:45:26.600212  446887 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 19:45:26.725587  446887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 19:45:26.741934  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 19:45:26.761815  446887 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1030 19:45:26.761872  446887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:26.772368  446887 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 19:45:26.772422  446887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:26.784279  446887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:26.795403  446887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:26.806323  446887 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 19:45:26.821929  446887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:26.836574  446887 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:26.857305  446887 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:26.868135  446887 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 19:45:26.878058  446887 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 19:45:26.878138  446887 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 19:45:26.891979  446887 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 19:45:26.902181  446887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:45:27.021858  446887 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1030 19:45:27.118890  446887 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 19:45:27.118985  446887 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 19:45:27.125407  446887 start.go:563] Will wait 60s for crictl version
	I1030 19:45:27.125472  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:45:27.129507  446887 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 19:45:27.176630  446887 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1030 19:45:27.176739  446887 ssh_runner.go:195] Run: crio --version
	I1030 19:45:27.205818  446887 ssh_runner.go:195] Run: crio --version
	I1030 19:45:27.236431  446887 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1030 19:45:25.840689  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Start
	I1030 19:45:25.840860  446965 main.go:141] libmachine: (embed-certs-042402) Ensuring networks are active...
	I1030 19:45:25.841604  446965 main.go:141] libmachine: (embed-certs-042402) Ensuring network default is active
	I1030 19:45:25.841928  446965 main.go:141] libmachine: (embed-certs-042402) Ensuring network mk-embed-certs-042402 is active
	I1030 19:45:25.842443  446965 main.go:141] libmachine: (embed-certs-042402) Getting domain xml...
	I1030 19:45:25.843267  446965 main.go:141] libmachine: (embed-certs-042402) Creating domain...
	I1030 19:45:27.094878  446965 main.go:141] libmachine: (embed-certs-042402) Waiting to get IP...
	I1030 19:45:27.095705  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:27.096101  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:27.096166  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:27.096079  448226 retry.go:31] will retry after 190.217394ms: waiting for machine to come up
	I1030 19:45:27.287473  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:27.287940  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:27.287966  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:27.287899  448226 retry.go:31] will retry after 365.943545ms: waiting for machine to come up
	I1030 19:45:27.655952  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:27.656374  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:27.656425  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:27.656343  448226 retry.go:31] will retry after 345.369581ms: waiting for machine to come up
	I1030 19:45:28.003856  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:28.004367  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:28.004398  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:28.004319  448226 retry.go:31] will retry after 609.6218ms: waiting for machine to come up
	I1030 19:45:27.237629  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetIP
	I1030 19:45:27.240387  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:27.240733  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:27.240779  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:27.240995  446887 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1030 19:45:27.245263  446887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:45:27.261305  446887 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-768989 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-768989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1030 19:45:27.261440  446887 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 19:45:27.261489  446887 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:45:27.301593  446887 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1030 19:45:27.301650  446887 ssh_runner.go:195] Run: which lz4
	I1030 19:45:27.305829  446887 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1030 19:45:27.310384  446887 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1030 19:45:27.310413  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1030 19:45:28.615219  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:28.615769  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:28.615795  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:28.615716  448226 retry.go:31] will retry after 672.090411ms: waiting for machine to come up
	I1030 19:45:29.289646  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:29.290179  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:29.290216  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:29.290105  448226 retry.go:31] will retry after 865.239242ms: waiting for machine to come up
	I1030 19:45:30.157223  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:30.157650  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:30.157679  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:30.157616  448226 retry.go:31] will retry after 833.557181ms: waiting for machine to come up
	I1030 19:45:30.993139  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:30.993663  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:30.993720  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:30.993625  448226 retry.go:31] will retry after 989.333841ms: waiting for machine to come up
	I1030 19:45:31.983978  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:31.984498  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:31.984546  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:31.984443  448226 retry.go:31] will retry after 1.534311856s: waiting for machine to come up
	I1030 19:45:28.730765  446887 crio.go:462] duration metric: took 1.424975563s to copy over tarball
	I1030 19:45:28.730868  446887 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1030 19:45:30.907494  446887 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.1765829s)
	I1030 19:45:30.907536  446887 crio.go:469] duration metric: took 2.176738354s to extract the tarball
	I1030 19:45:30.907546  446887 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1030 19:45:30.944242  446887 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:45:30.986812  446887 crio.go:514] all images are preloaded for cri-o runtime.
	I1030 19:45:30.986839  446887 cache_images.go:84] Images are preloaded, skipping loading
	I1030 19:45:30.986872  446887 kubeadm.go:934] updating node { 192.168.39.92 8444 v1.31.2 crio true true} ...
	I1030 19:45:30.987042  446887 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-768989 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.92
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-768989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1030 19:45:30.987145  446887 ssh_runner.go:195] Run: crio config
	I1030 19:45:31.037466  446887 cni.go:84] Creating CNI manager for ""
	I1030 19:45:31.037496  446887 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:45:31.037511  446887 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1030 19:45:31.037544  446887 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.92 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-768989 NodeName:default-k8s-diff-port-768989 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.92"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.92 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1030 19:45:31.037735  446887 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.92
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-768989"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.92"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.92"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1030 19:45:31.037815  446887 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1030 19:45:31.047808  446887 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 19:45:31.047885  446887 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1030 19:45:31.057074  446887 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1030 19:45:31.073022  446887 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 19:45:31.088919  446887 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I1030 19:45:31.105357  446887 ssh_runner.go:195] Run: grep 192.168.39.92	control-plane.minikube.internal$ /etc/hosts
	I1030 19:45:31.109207  446887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.92	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:45:31.121329  446887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:45:31.234078  446887 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:45:31.251028  446887 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989 for IP: 192.168.39.92
	I1030 19:45:31.251057  446887 certs.go:194] generating shared ca certs ...
	I1030 19:45:31.251080  446887 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:45:31.251287  446887 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 19:45:31.251342  446887 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 19:45:31.251354  446887 certs.go:256] generating profile certs ...
	I1030 19:45:31.251480  446887 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/client.key
	I1030 19:45:31.251567  446887 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/apiserver.key.eeeafde8
	I1030 19:45:31.251620  446887 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/proxy-client.key
	I1030 19:45:31.251788  446887 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem (1338 bytes)
	W1030 19:45:31.251834  446887 certs.go:480] ignoring /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144_empty.pem, impossibly tiny 0 bytes
	I1030 19:45:31.251848  446887 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 19:45:31.251888  446887 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 19:45:31.251931  446887 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 19:45:31.251963  446887 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 19:45:31.252024  446887 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:45:31.253127  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 19:45:31.293822  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 19:45:31.334804  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 19:45:31.366955  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 19:45:31.396042  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1030 19:45:31.428748  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I1030 19:45:31.452866  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 19:45:31.476407  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1030 19:45:31.500375  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem --> /usr/share/ca-certificates/389144.pem (1338 bytes)
	I1030 19:45:31.523909  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /usr/share/ca-certificates/3891442.pem (1708 bytes)
	I1030 19:45:31.547532  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 19:45:31.571163  446887 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1030 19:45:31.587969  446887 ssh_runner.go:195] Run: openssl version
	I1030 19:45:31.593866  446887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 19:45:31.604538  446887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:45:31.609348  446887 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:45:31.609419  446887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:45:31.615446  446887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 19:45:31.626640  446887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/389144.pem && ln -fs /usr/share/ca-certificates/389144.pem /etc/ssl/certs/389144.pem"
	I1030 19:45:31.640948  446887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/389144.pem
	I1030 19:45:31.646702  446887 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 19:45:31.646751  446887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/389144.pem
	I1030 19:45:31.654365  446887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/389144.pem /etc/ssl/certs/51391683.0"
	I1030 19:45:31.668538  446887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3891442.pem && ln -fs /usr/share/ca-certificates/3891442.pem /etc/ssl/certs/3891442.pem"
	I1030 19:45:31.679201  446887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3891442.pem
	I1030 19:45:31.683631  446887 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 19:45:31.683693  446887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3891442.pem
	I1030 19:45:31.689362  446887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3891442.pem /etc/ssl/certs/3ec20f2e.0"
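
An aside on the symlinks being created above: they follow OpenSSL's hash-directory convention, where /etc/ssl/certs/<subject-hash>.0 points at the trusted PEM and the hash comes from `openssl x509 -hash -noout`. A minimal Go sketch of that idea (the installCAHashLink helper and the paths are illustrative, not minikube's certs.go):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCAHashLink asks openssl for the certificate's subject hash and
// creates the /etc/ssl/certs/<hash>.0 symlink that OpenSSL-based clients
// use to look up trusted CAs.
func installCAHashLink(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCAHashLink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
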
	I1030 19:45:31.699804  446887 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 19:45:31.704445  446887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1030 19:45:31.710558  446887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1030 19:45:31.718563  446887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1030 19:45:31.724745  446887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1030 19:45:31.731125  446887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1030 19:45:31.736828  446887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
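
The `-checkend 86400` runs above ask openssl whether each certificate stays valid for at least another 24 hours. The same check can be done with Go's standard library; a rough sketch, assuming a single-certificate PEM file (certNotExpiringWithin is a hypothetical helper, not minikube code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certNotExpiringWithin reports whether the first certificate in the PEM
// file is still valid for at least the given duration from now.
func certNotExpiringWithin(pemPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := certNotExpiringWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	fmt.Println(ok, err)
}
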
	I1030 19:45:31.742434  446887 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-768989 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-768989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:45:31.742604  446887 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1030 19:45:31.742654  446887 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:45:31.779319  446887 cri.go:89] found id: ""
	I1030 19:45:31.779416  446887 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1030 19:45:31.789556  446887 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1030 19:45:31.789576  446887 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1030 19:45:31.789622  446887 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1030 19:45:31.799817  446887 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1030 19:45:31.800824  446887 kubeconfig.go:125] found "default-k8s-diff-port-768989" server: "https://192.168.39.92:8444"
	I1030 19:45:31.803207  446887 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1030 19:45:31.812876  446887 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.92
	I1030 19:45:31.812909  446887 kubeadm.go:1160] stopping kube-system containers ...
	I1030 19:45:31.812924  446887 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1030 19:45:31.812984  446887 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:45:31.858070  446887 cri.go:89] found id: ""
	I1030 19:45:31.858174  446887 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1030 19:45:31.874923  446887 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:45:31.885243  446887 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:45:31.885275  446887 kubeadm.go:157] found existing configuration files:
	
	I1030 19:45:31.885321  446887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1030 19:45:31.894394  446887 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:45:31.894453  446887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:45:31.903760  446887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1030 19:45:31.912344  446887 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:45:31.912410  446887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:45:31.921458  446887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1030 19:45:31.930426  446887 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:45:31.930499  446887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:45:31.940008  446887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1030 19:45:31.949578  446887 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:45:31.949645  446887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1030 19:45:31.959022  446887 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 19:45:31.968457  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:32.069017  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:32.985574  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:33.191887  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:33.273266  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:33.400584  446887 api_server.go:52] waiting for apiserver process to appear ...
	I1030 19:45:33.400686  446887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:33.520596  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:33.521020  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:33.521041  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:33.520992  448226 retry.go:31] will retry after 1.787777673s: waiting for machine to come up
	I1030 19:45:35.310399  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:35.310878  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:35.310906  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:35.310833  448226 retry.go:31] will retry after 2.264310439s: waiting for machine to come up
	I1030 19:45:37.577787  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:37.578276  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:37.578310  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:37.578214  448226 retry.go:31] will retry after 2.384410161s: waiting for machine to come up
	I1030 19:45:33.901397  446887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:34.400978  446887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:34.901476  446887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:35.401772  446887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:35.420824  446887 api_server.go:72] duration metric: took 2.020238714s to wait for apiserver process to appear ...
	I1030 19:45:35.420862  446887 api_server.go:88] waiting for apiserver healthz status ...
	I1030 19:45:35.420889  446887 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8444/healthz ...
	I1030 19:45:37.795897  446887 api_server.go:279] https://192.168.39.92:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1030 19:45:37.795931  446887 api_server.go:103] status: https://192.168.39.92:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1030 19:45:37.795948  446887 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8444/healthz ...
	I1030 19:45:37.848032  446887 api_server.go:279] https://192.168.39.92:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1030 19:45:37.848069  446887 api_server.go:103] status: https://192.168.39.92:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1030 19:45:37.921286  446887 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8444/healthz ...
	I1030 19:45:37.930778  446887 api_server.go:279] https://192.168.39.92:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:45:37.930822  446887 api_server.go:103] status: https://192.168.39.92:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:45:38.421866  446887 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8444/healthz ...
	I1030 19:45:38.429247  446887 api_server.go:279] https://192.168.39.92:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:45:38.429291  446887 api_server.go:103] status: https://192.168.39.92:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:45:38.921655  446887 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8444/healthz ...
	I1030 19:45:38.928650  446887 api_server.go:279] https://192.168.39.92:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:45:38.928680  446887 api_server.go:103] status: https://192.168.39.92:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:45:39.421195  446887 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8444/healthz ...
	I1030 19:45:39.425565  446887 api_server.go:279] https://192.168.39.92:8444/healthz returned 200:
	ok
	I1030 19:45:39.433509  446887 api_server.go:141] control plane version: v1.31.2
	I1030 19:45:39.433543  446887 api_server.go:131] duration metric: took 4.01267362s to wait for apiserver health ...
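
The healthz progression above is typical of a restarting apiserver: first 403 because the anonymous probe is rejected, then 500 while post-start hooks such as rbac/bootstrap-roles finish, then 200. A minimal Go polling loop in the same spirit (illustrative only; waitForHealthz is not minikube's api_server.go, and a real client would trust the cluster CA rather than skip TLS verification):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// 200 or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.39.92:8444/healthz", 4*time.Minute))
}
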
	I1030 19:45:39.433555  446887 cni.go:84] Creating CNI manager for ""
	I1030 19:45:39.433564  446887 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:45:39.435645  446887 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1030 19:45:39.437042  446887 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1030 19:45:39.456091  446887 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
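
For context, a conflist for the bridge CNI plugin generally looks like the JSON below; this is a generic example, not necessarily byte-for-byte what minikube writes to /etc/cni/net.d/1-k8s.conflist:

package main

import "os"

// A generic bridge CNI conflist (illustrative; field values are assumptions,
// not the exact file written above).
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
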
	I1030 19:45:39.477617  446887 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 19:45:39.485998  446887 system_pods.go:59] 8 kube-system pods found
	I1030 19:45:39.486041  446887 system_pods.go:61] "coredns-7c65d6cfc9-9w8m8" [d285a845-e17e-4b87-837b-167dbd29f090] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1030 19:45:39.486051  446887 system_pods.go:61] "etcd-default-k8s-diff-port-768989" [400eedbb-ea1d-47a7-b342-ad29c8851552] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1030 19:45:39.486061  446887 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-768989" [40389878-99e8-4ae1-899b-a61fc3b7100b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1030 19:45:39.486071  446887 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-768989" [4d4ca5b4-d36d-4f69-9712-b7544c4c9a43] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1030 19:45:39.486082  446887 system_pods.go:61] "kube-proxy-tsr5q" [60ad5830-1638-4209-a2b3-ef78b1df3d34] Running
	I1030 19:45:39.486087  446887 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-768989" [af4c921c-9ea2-4bf3-84c2-8748c3c210bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1030 19:45:39.486092  446887 system_pods.go:61] "metrics-server-6867b74b74-t85rd" [8e162c99-2a94-4340-abe9-f1b312980444] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:45:39.486095  446887 system_pods.go:61] "storage-provisioner" [76805df9-1fbf-468d-a909-3b7c4f889e11] Running
	I1030 19:45:39.486101  446887 system_pods.go:74] duration metric: took 8.467537ms to wait for pod list to return data ...
	I1030 19:45:39.486110  446887 node_conditions.go:102] verifying NodePressure condition ...
	I1030 19:45:39.490771  446887 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 19:45:39.490793  446887 node_conditions.go:123] node cpu capacity is 2
	I1030 19:45:39.490805  446887 node_conditions.go:105] duration metric: took 4.690594ms to run NodePressure ...
	I1030 19:45:39.490821  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:39.752369  446887 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1030 19:45:39.757080  446887 kubeadm.go:739] kubelet initialised
	I1030 19:45:39.757105  446887 kubeadm.go:740] duration metric: took 4.707251ms waiting for restarted kubelet to initialise ...
	I1030 19:45:39.757114  446887 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:45:39.762374  446887 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-9w8m8" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:39.766904  446887 pod_ready.go:98] node "default-k8s-diff-port-768989" hosting pod "coredns-7c65d6cfc9-9w8m8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.766934  446887 pod_ready.go:82] duration metric: took 4.529466ms for pod "coredns-7c65d6cfc9-9w8m8" in "kube-system" namespace to be "Ready" ...
	E1030 19:45:39.766948  446887 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-768989" hosting pod "coredns-7c65d6cfc9-9w8m8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.766958  446887 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:39.771681  446887 pod_ready.go:98] node "default-k8s-diff-port-768989" hosting pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.771705  446887 pod_ready.go:82] duration metric: took 4.73772ms for pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	E1030 19:45:39.771715  446887 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-768989" hosting pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.771722  446887 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:39.776170  446887 pod_ready.go:98] node "default-k8s-diff-port-768989" hosting pod "kube-apiserver-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.776199  446887 pod_ready.go:82] duration metric: took 4.470353ms for pod "kube-apiserver-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	E1030 19:45:39.776211  446887 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-768989" hosting pod "kube-apiserver-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.776220  446887 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:39.881949  446887 pod_ready.go:98] node "default-k8s-diff-port-768989" hosting pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.881988  446887 pod_ready.go:82] duration metric: took 105.756203ms for pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	E1030 19:45:39.882027  446887 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-768989" hosting pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.882042  446887 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-tsr5q" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:40.281665  446887 pod_ready.go:98] node "default-k8s-diff-port-768989" hosting pod "kube-proxy-tsr5q" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:40.281703  446887 pod_ready.go:82] duration metric: took 399.651747ms for pod "kube-proxy-tsr5q" in "kube-system" namespace to be "Ready" ...
	E1030 19:45:40.281716  446887 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-768989" hosting pod "kube-proxy-tsr5q" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:40.281725  446887 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:40.680827  446887 pod_ready.go:98] node "default-k8s-diff-port-768989" hosting pod "kube-scheduler-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:40.680861  446887 pod_ready.go:82] duration metric: took 399.128654ms for pod "kube-scheduler-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	E1030 19:45:40.680873  446887 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-768989" hosting pod "kube-scheduler-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:40.680883  446887 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:41.086176  446887 pod_ready.go:98] node "default-k8s-diff-port-768989" hosting pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:41.086203  446887 pod_ready.go:82] duration metric: took 405.311117ms for pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace to be "Ready" ...
	E1030 19:45:41.086216  446887 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-768989" hosting pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:41.086225  446887 pod_ready.go:39] duration metric: took 1.32910228s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
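
The wait loop above gates on the node's Ready condition before treating individual system pods as ready. For reference, checking a pod's Ready condition with client-go looks roughly like this (kubeconfig path and pod name are taken from the log only for illustration; the helper is not the minikube implementation):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the named pod has a Ready condition of True.
func podIsReady(cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(podIsReady(cs, "kube-system", "kube-proxy-tsr5q"))
}
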
	I1030 19:45:41.086246  446887 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1030 19:45:41.100836  446887 ops.go:34] apiserver oom_adj: -16
	I1030 19:45:41.100871  446887 kubeadm.go:597] duration metric: took 9.31128777s to restartPrimaryControlPlane
	I1030 19:45:41.100887  446887 kubeadm.go:394] duration metric: took 9.358460424s to StartCluster
	I1030 19:45:41.100915  446887 settings.go:142] acquiring lock: {Name:mk3778f95b8c1775512fd8244c173f81fde2f8a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:45:41.101046  446887 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 19:45:41.103578  446887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/kubeconfig: {Name:mkad9ac9beb9742fc1a420e6d4412db8b65962e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:45:41.103910  446887 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.92 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 19:45:41.103995  446887 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1030 19:45:41.104111  446887 config.go:182] Loaded profile config "default-k8s-diff-port-768989": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:45:41.104131  446887 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-768989"
	I1030 19:45:41.104151  446887 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-768989"
	W1030 19:45:41.104159  446887 addons.go:243] addon storage-provisioner should already be in state true
	I1030 19:45:41.104175  446887 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-768989"
	I1030 19:45:41.104198  446887 host.go:66] Checking if "default-k8s-diff-port-768989" exists ...
	I1030 19:45:41.104207  446887 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-768989"
	W1030 19:45:41.104218  446887 addons.go:243] addon metrics-server should already be in state true
	I1030 19:45:41.104153  446887 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-768989"
	I1030 19:45:41.104255  446887 host.go:66] Checking if "default-k8s-diff-port-768989" exists ...
	I1030 19:45:41.104258  446887 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-768989"
	I1030 19:45:41.104672  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:41.104683  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:41.104694  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:41.104718  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:41.104728  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:41.104730  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:41.105606  446887 out.go:177] * Verifying Kubernetes components...
	I1030 19:45:41.107136  446887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:45:41.121415  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45903
	I1030 19:45:41.122053  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:41.122694  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:41.122721  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:41.123073  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:41.123682  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:41.123733  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:41.125497  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36933
	I1030 19:45:41.125546  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43659
	I1030 19:45:41.125878  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:41.125962  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:41.126425  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:41.126445  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:41.126465  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:41.126507  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:41.126840  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:41.126897  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:41.127362  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:41.127392  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:41.127590  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetState
	I1030 19:45:41.131397  446887 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-768989"
	W1030 19:45:41.131424  446887 addons.go:243] addon default-storageclass should already be in state true
	I1030 19:45:41.131457  446887 host.go:66] Checking if "default-k8s-diff-port-768989" exists ...
	I1030 19:45:41.131834  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:41.131877  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:41.143183  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34953
	I1030 19:45:41.143221  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35029
	I1030 19:45:41.143628  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:41.143765  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:41.144231  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:41.144249  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:41.144369  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:41.144392  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:41.144657  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:41.144766  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:41.144879  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetState
	I1030 19:45:41.144926  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetState
	I1030 19:45:41.146739  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:41.146913  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:41.148740  446887 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1030 19:45:41.148794  446887 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:45:41.149853  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33425
	I1030 19:45:41.150250  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:41.150397  446887 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1030 19:45:41.150435  446887 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1030 19:45:41.150462  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:41.150525  446887 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 19:45:41.150545  446887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1030 19:45:41.150562  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:41.150763  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:41.150781  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:41.151168  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:41.152135  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:41.152184  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:41.154133  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:41.154425  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:41.154625  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:41.154654  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:41.154811  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:41.154996  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:41.155033  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:41.155059  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:41.155145  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:41.155214  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:41.155310  446887 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa Username:docker}
	I1030 19:45:41.155345  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:41.155464  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:41.155548  446887 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa Username:docker}
	I1030 19:45:41.168971  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43017
	I1030 19:45:41.169445  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:41.169946  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:41.169969  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:41.170335  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:41.170508  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetState
	I1030 19:45:41.172162  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:41.172378  446887 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1030 19:45:41.172394  446887 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1030 19:45:41.172410  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:41.175214  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:41.175617  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:41.175643  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:41.175795  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:41.175978  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:41.176133  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:41.176301  446887 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa Username:docker}
	I1030 19:45:41.324093  446887 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:45:41.381986  446887 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-768989" to be "Ready" ...
	I1030 19:45:41.439497  446887 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1030 19:45:41.439522  446887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1030 19:45:41.448751  446887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1030 19:45:41.486707  446887 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1030 19:45:41.486736  446887 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1030 19:45:41.514478  446887 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1030 19:45:41.514513  446887 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1030 19:45:41.546821  446887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1030 19:45:41.590509  446887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 19:45:41.879189  446887 main.go:141] libmachine: Making call to close driver server
	I1030 19:45:41.879224  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Close
	I1030 19:45:41.879548  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | Closing plugin on server side
	I1030 19:45:41.879597  446887 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:45:41.879608  446887 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:45:41.879622  446887 main.go:141] libmachine: Making call to close driver server
	I1030 19:45:41.879632  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Close
	I1030 19:45:41.879868  446887 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:45:41.879886  446887 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:45:41.889008  446887 main.go:141] libmachine: Making call to close driver server
	I1030 19:45:41.889024  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Close
	I1030 19:45:41.889273  446887 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:45:41.889290  446887 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:45:42.499223  446887 main.go:141] libmachine: Making call to close driver server
	I1030 19:45:42.499250  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Close
	I1030 19:45:42.499597  446887 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:45:42.499621  446887 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:45:42.499632  446887 main.go:141] libmachine: Making call to close driver server
	I1030 19:45:42.499689  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Close
	I1030 19:45:42.499969  446887 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:45:42.499984  446887 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:45:42.499996  446887 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-768989"
	I1030 19:45:42.598713  446887 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.008157275s)
	I1030 19:45:42.598770  446887 main.go:141] libmachine: Making call to close driver server
	I1030 19:45:42.598782  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Close
	I1030 19:45:42.599088  446887 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:45:42.599109  446887 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:45:42.599117  446887 main.go:141] libmachine: Making call to close driver server
	I1030 19:45:42.599143  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | Closing plugin on server side
	I1030 19:45:42.599201  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Close
	I1030 19:45:42.599447  446887 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:45:42.599461  446887 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:45:42.601840  446887 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I1030 19:45:39.963885  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:39.964308  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:39.964346  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:39.964250  448226 retry.go:31] will retry after 4.32150593s: waiting for machine to come up
	I1030 19:45:42.603197  446887 addons.go:510] duration metric: took 1.499214294s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
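
The addon step above boils down to `kubectl apply -f` for each manifest, run with the cluster's kubeconfig. A local simplification of that pattern (illustrative; minikube actually runs the command through sudo over SSH on the VM):

package main

import (
	"fmt"
	"os/exec"
)

// applyManifests runs kubectl apply -f for each manifest against the given
// kubeconfig. Paths are examples taken from the log.
func applyManifests(kubectl, kubeconfig string, manifests ...string) error {
	args := []string{"--kubeconfig", kubeconfig, "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	out, err := exec.Command(kubectl, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	err := applyManifests("/var/lib/minikube/binaries/v1.31.2/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml")
	fmt.Println(err)
}
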
	I1030 19:45:43.386074  446887 node_ready.go:53] node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:45.631177  447486 start.go:364] duration metric: took 3m33.722307877s to acquireMachinesLock for "old-k8s-version-516975"
	I1030 19:45:45.631272  447486 start.go:96] Skipping create...Using existing machine configuration
	I1030 19:45:45.631284  447486 fix.go:54] fixHost starting: 
	I1030 19:45:45.631708  447486 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:45.631767  447486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:45.648654  447486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40707
	I1030 19:45:45.649098  447486 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:45.649552  447486 main.go:141] libmachine: Using API Version  1
	I1030 19:45:45.649574  447486 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:45.649848  447486 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:45.650005  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:45:45.650153  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetState
	I1030 19:45:45.651624  447486 fix.go:112] recreateIfNeeded on old-k8s-version-516975: state=Stopped err=<nil>
	I1030 19:45:45.651661  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	W1030 19:45:45.651805  447486 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 19:45:45.654065  447486 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-516975" ...
	I1030 19:45:45.655382  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .Start
	I1030 19:45:45.655554  447486 main.go:141] libmachine: (old-k8s-version-516975) Ensuring networks are active...
	I1030 19:45:45.656134  447486 main.go:141] libmachine: (old-k8s-version-516975) Ensuring network default is active
	I1030 19:45:45.656518  447486 main.go:141] libmachine: (old-k8s-version-516975) Ensuring network mk-old-k8s-version-516975 is active
	I1030 19:45:45.656885  447486 main.go:141] libmachine: (old-k8s-version-516975) Getting domain xml...
	I1030 19:45:45.657501  447486 main.go:141] libmachine: (old-k8s-version-516975) Creating domain...
	I1030 19:45:44.289530  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.289944  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has current primary IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.289965  446965 main.go:141] libmachine: (embed-certs-042402) Found IP for machine: 192.168.61.235
	I1030 19:45:44.289978  446965 main.go:141] libmachine: (embed-certs-042402) Reserving static IP address...
	I1030 19:45:44.290419  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "embed-certs-042402", mac: "52:54:00:61:aa:58", ip: "192.168.61.235"} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.290450  446965 main.go:141] libmachine: (embed-certs-042402) Reserved static IP address: 192.168.61.235
	I1030 19:45:44.290469  446965 main.go:141] libmachine: (embed-certs-042402) DBG | skip adding static IP to network mk-embed-certs-042402 - found existing host DHCP lease matching {name: "embed-certs-042402", mac: "52:54:00:61:aa:58", ip: "192.168.61.235"}
	I1030 19:45:44.290502  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Getting to WaitForSSH function...
	I1030 19:45:44.290519  446965 main.go:141] libmachine: (embed-certs-042402) Waiting for SSH to be available...
	I1030 19:45:44.292418  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.292684  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.292727  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.292750  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Using SSH client type: external
	I1030 19:45:44.292785  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa (-rw-------)
	I1030 19:45:44.292839  446965 main.go:141] libmachine: (embed-certs-042402) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.235 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 19:45:44.292856  446965 main.go:141] libmachine: (embed-certs-042402) DBG | About to run SSH command:
	I1030 19:45:44.292873  446965 main.go:141] libmachine: (embed-certs-042402) DBG | exit 0
	I1030 19:45:44.414810  446965 main.go:141] libmachine: (embed-certs-042402) DBG | SSH cmd err, output: <nil>: 
	I1030 19:45:44.415211  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetConfigRaw
	I1030 19:45:44.416039  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetIP
	I1030 19:45:44.418830  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.419269  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.419303  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.419529  446965 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/config.json ...
	I1030 19:45:44.419832  446965 machine.go:93] provisionDockerMachine start ...
	I1030 19:45:44.419859  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:45:44.420102  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:44.422359  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.422704  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.422729  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.422878  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:44.423072  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:44.423217  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:44.423355  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:44.423493  446965 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:44.423677  446965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.235 22 <nil> <nil>}
	I1030 19:45:44.423685  446965 main.go:141] libmachine: About to run SSH command:
	hostname
	I1030 19:45:44.527214  446965 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1030 19:45:44.527248  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetMachineName
	I1030 19:45:44.527526  446965 buildroot.go:166] provisioning hostname "embed-certs-042402"
	I1030 19:45:44.527562  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetMachineName
	I1030 19:45:44.527793  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:44.530474  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.530830  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.530856  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.531041  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:44.531243  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:44.531432  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:44.531563  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:44.531736  446965 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:44.531958  446965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.235 22 <nil> <nil>}
	I1030 19:45:44.531979  446965 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-042402 && echo "embed-certs-042402" | sudo tee /etc/hostname
	I1030 19:45:44.656963  446965 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-042402
	
	I1030 19:45:44.656996  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:44.659958  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.660361  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.660397  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.660643  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:44.660842  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:44.661025  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:44.661122  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:44.661295  446965 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:44.661469  446965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.235 22 <nil> <nil>}
	I1030 19:45:44.661484  446965 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-042402' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-042402/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-042402' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 19:45:44.771688  446965 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 19:45:44.771728  446965 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 19:45:44.771755  446965 buildroot.go:174] setting up certificates
	I1030 19:45:44.771766  446965 provision.go:84] configureAuth start
	I1030 19:45:44.771780  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetMachineName
	I1030 19:45:44.772120  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetIP
	I1030 19:45:44.774838  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.775271  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.775298  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.775424  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:44.777432  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.777765  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.777793  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.777910  446965 provision.go:143] copyHostCerts
	I1030 19:45:44.777990  446965 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem, removing ...
	I1030 19:45:44.778006  446965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 19:45:44.778057  446965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 19:45:44.778147  446965 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem, removing ...
	I1030 19:45:44.778155  446965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 19:45:44.778174  446965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 19:45:44.778229  446965 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem, removing ...
	I1030 19:45:44.778237  446965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 19:45:44.778253  446965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 19:45:44.778360  446965 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.embed-certs-042402 san=[127.0.0.1 192.168.61.235 embed-certs-042402 localhost minikube]
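The provision step above issues a server certificate signed by the minikube CA with the SAN list printed in the log. As a hedged illustration only (not minikube's actual provision.go; key size, validity period, and package name are assumptions), generating a certificate with an equivalent SAN set via Go's standard library could look roughly like this:

package provisionsketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"time"
)

// generateServerCert mirrors the intent of the "generating server cert ... san=[...]"
// log line: a server cert for the listed SANs, signed by the supplied CA.
// Sketch only; the real provisioner differs.
func generateServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) (certPEM, keyPEM []byte, err error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048) // key size is an assumption
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-042402"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // validity is an assumption
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the log line above.
		DNSNames:    []string{"embed-certs-042402", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.235")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return certPEM, keyPEM, nil
}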
	I1030 19:45:45.019172  446965 provision.go:177] copyRemoteCerts
	I1030 19:45:45.019234  446965 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 19:45:45.019265  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:45.022052  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.022402  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:45.022435  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.022590  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:45.022788  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.022969  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:45.023123  446965 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa Username:docker}
	I1030 19:45:45.104733  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 19:45:45.128256  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1030 19:45:45.150758  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1030 19:45:45.173233  446965 provision.go:87] duration metric: took 401.450922ms to configureAuth
	I1030 19:45:45.173268  446965 buildroot.go:189] setting minikube options for container-runtime
	I1030 19:45:45.173465  446965 config.go:182] Loaded profile config "embed-certs-042402": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:45:45.173562  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:45.176259  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.176663  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:45.176698  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.176826  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:45.177025  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.177190  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.177364  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:45.177554  446965 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:45.177724  446965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.235 22 <nil> <nil>}
	I1030 19:45:45.177737  446965 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 19:45:45.396562  446965 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 19:45:45.396593  446965 machine.go:96] duration metric: took 976.740759ms to provisionDockerMachine
	I1030 19:45:45.396606  446965 start.go:293] postStartSetup for "embed-certs-042402" (driver="kvm2")
	I1030 19:45:45.396616  446965 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 19:45:45.396644  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:45:45.397007  446965 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 19:45:45.397048  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:45.399581  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.399930  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:45.399955  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.400045  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:45.400219  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.400373  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:45.400483  446965 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa Username:docker}
	I1030 19:45:45.481722  446965 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 19:45:45.487207  446965 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 19:45:45.487231  446965 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 19:45:45.487304  446965 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 19:45:45.487398  446965 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 19:45:45.487516  446965 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 19:45:45.500340  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:45:45.524930  446965 start.go:296] duration metric: took 128.310254ms for postStartSetup
	I1030 19:45:45.524972  446965 fix.go:56] duration metric: took 19.709339085s for fixHost
	I1030 19:45:45.524993  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:45.527426  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.527751  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:45.527775  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.527931  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:45.528145  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.528326  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.528450  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:45.528591  446965 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:45.528804  446965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.235 22 <nil> <nil>}
	I1030 19:45:45.528815  446965 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 19:45:45.630961  446965 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730317545.604586107
	
	I1030 19:45:45.630997  446965 fix.go:216] guest clock: 1730317545.604586107
	I1030 19:45:45.631020  446965 fix.go:229] Guest: 2024-10-30 19:45:45.604586107 +0000 UTC Remote: 2024-10-30 19:45:45.524975841 +0000 UTC m=+302.540999350 (delta=79.610266ms)
	I1030 19:45:45.631054  446965 fix.go:200] guest clock delta is within tolerance: 79.610266ms
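The fix.go lines above compare the guest clock (read via "date +%s.%N" over SSH) against the host time and only resync when the delta exceeds a tolerance; here the 79.6ms delta passes. A minimal sketch of that comparison, assuming the caller supplies the tolerance (the actual threshold lives in minikube's fix.go and is not shown in this log):

package clocksketch

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// withinTolerance parses the guest's `date +%s.%N` output and reports whether
// it is within tol of the host clock. Sketch only; not minikube's fix.go.
func withinTolerance(guestOut string, host time.Time, tol time.Duration) (time.Duration, bool, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, false, fmt.Errorf("parsing guest clock %q: %w", guestOut, err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tol, nil
}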
	I1030 19:45:45.631062  446965 start.go:83] releasing machines lock for "embed-certs-042402", held for 19.81546348s
	I1030 19:45:45.631109  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:45:45.631396  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetIP
	I1030 19:45:45.634114  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.634524  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:45.634558  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.634739  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:45:45.635353  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:45:45.635532  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:45:45.635646  446965 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 19:45:45.635692  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:45.635746  446965 ssh_runner.go:195] Run: cat /version.json
	I1030 19:45:45.635775  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:45.638260  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.638639  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:45.638694  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.638718  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.638931  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:45.639108  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:45.639128  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.639160  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.639260  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:45.639371  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:45.639440  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.639509  446965 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa Username:docker}
	I1030 19:45:45.639581  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:45.639723  446965 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa Username:docker}
	I1030 19:45:45.747515  446965 ssh_runner.go:195] Run: systemctl --version
	I1030 19:45:45.754851  446965 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 19:45:45.904471  446965 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 19:45:45.911348  446965 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 19:45:45.911428  446965 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 19:45:45.928273  446965 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 19:45:45.928299  446965 start.go:495] detecting cgroup driver to use...
	I1030 19:45:45.928381  446965 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 19:45:45.949100  446965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 19:45:45.963284  446965 docker.go:217] disabling cri-docker service (if available) ...
	I1030 19:45:45.963362  446965 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 19:45:45.976952  446965 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 19:45:45.991367  446965 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 19:45:46.104670  446965 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 19:45:46.254049  446965 docker.go:233] disabling docker service ...
	I1030 19:45:46.254130  446965 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 19:45:46.273226  446965 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 19:45:46.290211  446965 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 19:45:46.491658  446965 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 19:45:46.637447  446965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 19:45:46.654517  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 19:45:46.679786  446965 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1030 19:45:46.679879  446965 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:46.695487  446965 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 19:45:46.695570  446965 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:46.708974  446965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:46.724847  446965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:46.736912  446965 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 19:45:46.749015  446965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:46.761190  446965 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:46.780198  446965 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:46.790865  446965 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 19:45:46.800950  446965 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 19:45:46.801029  446965 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 19:45:46.814792  446965 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 19:45:46.825490  446965 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:45:46.952367  446965 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1030 19:45:47.054874  446965 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 19:45:47.054962  446965 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 19:45:47.061036  446965 start.go:563] Will wait 60s for crictl version
	I1030 19:45:47.061105  446965 ssh_runner.go:195] Run: which crictl
	I1030 19:45:47.064917  446965 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 19:45:47.101690  446965 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1030 19:45:47.101796  446965 ssh_runner.go:195] Run: crio --version
	I1030 19:45:47.131286  446965 ssh_runner.go:195] Run: crio --version
	I1030 19:45:47.166314  446965 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1030 19:45:47.167861  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetIP
	I1030 19:45:47.171097  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:47.171438  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:47.171466  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:47.171737  446965 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1030 19:45:47.177796  446965 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:45:47.191930  446965 kubeadm.go:883] updating cluster {Name:embed-certs-042402 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:embed-certs-042402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.235 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1030 19:45:47.192090  446965 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 19:45:47.192149  446965 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:45:47.231586  446965 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1030 19:45:47.231672  446965 ssh_runner.go:195] Run: which lz4
	I1030 19:45:47.236190  446965 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1030 19:45:47.240803  446965 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1030 19:45:47.240888  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1030 19:45:45.386683  446887 node_ready.go:53] node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:47.386771  446887 node_ready.go:53] node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:48.387313  446887 node_ready.go:49] node "default-k8s-diff-port-768989" has status "Ready":"True"
	I1030 19:45:48.387344  446887 node_ready.go:38] duration metric: took 7.005318984s for node "default-k8s-diff-port-768989" to be "Ready" ...
	I1030 19:45:48.387359  446887 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:45:48.395198  446887 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9w8m8" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:48.401276  446887 pod_ready.go:93] pod "coredns-7c65d6cfc9-9w8m8" in "kube-system" namespace has status "Ready":"True"
	I1030 19:45:48.401306  446887 pod_ready.go:82] duration metric: took 6.071305ms for pod "coredns-7c65d6cfc9-9w8m8" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:48.401321  446887 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
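The node_ready.go and pod_ready.go lines above poll the apiserver until the node and each system-critical pod report a Ready condition of "True". A hedged sketch of the node half using client-go (the poll interval and the use of wait.PollUntilContextTimeout are assumptions, not minikube's exact code):

package readysketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady blocks until the named node reports Ready=True or the timeout
// expires. Illustrative only; minikube's node_ready.go is structured differently.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // keep polling on transient errors
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}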
	I1030 19:45:47.003397  447486 main.go:141] libmachine: (old-k8s-version-516975) Waiting to get IP...
	I1030 19:45:47.004281  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:47.004710  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:47.004787  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:47.004695  448432 retry.go:31] will retry after 234.659459ms: waiting for machine to come up
	I1030 19:45:47.241308  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:47.241838  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:47.241863  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:47.241802  448432 retry.go:31] will retry after 350.804975ms: waiting for machine to come up
	I1030 19:45:47.594533  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:47.595106  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:47.595139  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:47.595044  448432 retry.go:31] will retry after 448.637889ms: waiting for machine to come up
	I1030 19:45:48.045858  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:48.046358  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:48.046386  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:48.046315  448432 retry.go:31] will retry after 543.947609ms: waiting for machine to come up
	I1030 19:45:48.592474  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:48.592908  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:48.592937  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:48.592875  448432 retry.go:31] will retry after 744.106735ms: waiting for machine to come up
	I1030 19:45:49.338345  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:49.338833  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:49.338857  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:49.338795  448432 retry.go:31] will retry after 927.743369ms: waiting for machine to come up
	I1030 19:45:50.267844  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:50.268359  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:50.268390  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:50.268324  448432 retry.go:31] will retry after 829.540351ms: waiting for machine to come up
	I1030 19:45:51.099379  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:51.099863  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:51.099893  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:51.099820  448432 retry.go:31] will retry after 898.768304ms: waiting for machine to come up
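The "will retry after ... waiting for machine to come up" lines above come from retry.go, which re-polls the libvirt domain for an IP with a growing, jittered delay. A minimal sketch of that retry-with-backoff pattern, with the initial delay, growth factor, and cap as assumptions (the exact parameters in retry.go are not shown in this log):

package retrysketch

import (
	"errors"
	"math/rand"
	"time"
)

// retryUntil keeps calling probe with an increasing, jittered delay until it
// succeeds or maxWait elapses. Sketch of the pattern behind the "will retry
// after ..." lines; not minikube's retry.go.
func retryUntil(probe func() error, maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	delay := 200 * time.Millisecond // initial delay is an assumption
	for {
		if err := probe(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for machine to come up")
		}
		// Grow the delay and add jitter, roughly matching the increasing
		// intervals visible in the log.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		time.Sleep(sleep)
		if delay < 5*time.Second {
			delay = delay * 3 / 2
		}
	}
}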
	I1030 19:45:48.672337  446965 crio.go:462] duration metric: took 1.436158626s to copy over tarball
	I1030 19:45:48.672439  446965 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1030 19:45:50.859055  446965 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.186572123s)
	I1030 19:45:50.859101  446965 crio.go:469] duration metric: took 2.186725028s to extract the tarball
	I1030 19:45:50.859113  446965 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1030 19:45:50.896570  446965 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:45:50.946526  446965 crio.go:514] all images are preloaded for cri-o runtime.
	I1030 19:45:50.946558  446965 cache_images.go:84] Images are preloaded, skipping loading
	I1030 19:45:50.946567  446965 kubeadm.go:934] updating node { 192.168.61.235 8443 v1.31.2 crio true true} ...
	I1030 19:45:50.946668  446965 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-042402 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.235
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-042402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1030 19:45:50.946748  446965 ssh_runner.go:195] Run: crio config
	I1030 19:45:50.992305  446965 cni.go:84] Creating CNI manager for ""
	I1030 19:45:50.992337  446965 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:45:50.992348  446965 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1030 19:45:50.992374  446965 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.235 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-042402 NodeName:embed-certs-042402 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.235"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.235 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1030 19:45:50.992530  446965 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.235
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-042402"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.235"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.235"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1030 19:45:50.992616  446965 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1030 19:45:51.002586  446965 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 19:45:51.002668  446965 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1030 19:45:51.012058  446965 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1030 19:45:51.028645  446965 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 19:45:51.044912  446965 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
	I1030 19:45:51.060991  446965 ssh_runner.go:195] Run: grep 192.168.61.235	control-plane.minikube.internal$ /etc/hosts
	I1030 19:45:51.064808  446965 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.235	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:45:51.076790  446965 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:45:51.205861  446965 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:45:51.224763  446965 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402 for IP: 192.168.61.235
	I1030 19:45:51.224791  446965 certs.go:194] generating shared ca certs ...
	I1030 19:45:51.224812  446965 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:45:51.224986  446965 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 19:45:51.225046  446965 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 19:45:51.225059  446965 certs.go:256] generating profile certs ...
	I1030 19:45:51.225175  446965 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/client.key
	I1030 19:45:51.225256  446965 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/apiserver.key.f6f7691e
	I1030 19:45:51.225314  446965 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/proxy-client.key
	I1030 19:45:51.225469  446965 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem (1338 bytes)
	W1030 19:45:51.225518  446965 certs.go:480] ignoring /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144_empty.pem, impossibly tiny 0 bytes
	I1030 19:45:51.225540  446965 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 19:45:51.225574  446965 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 19:45:51.225612  446965 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 19:45:51.225651  446965 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 19:45:51.225714  446965 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:45:51.226718  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 19:45:51.278345  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 19:45:51.308707  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 19:45:51.349986  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 19:45:51.382176  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1030 19:45:51.426538  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1030 19:45:51.457131  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 19:45:51.481165  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1030 19:45:51.505285  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /usr/share/ca-certificates/3891442.pem (1708 bytes)
	I1030 19:45:51.533986  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 19:45:51.562660  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem --> /usr/share/ca-certificates/389144.pem (1338 bytes)
	I1030 19:45:51.586002  446965 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1030 19:45:51.602544  446965 ssh_runner.go:195] Run: openssl version
	I1030 19:45:51.608479  446965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 19:45:51.620650  446965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:45:51.625243  446965 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:45:51.625294  446965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:45:51.631138  446965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 19:45:51.643167  446965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/389144.pem && ln -fs /usr/share/ca-certificates/389144.pem /etc/ssl/certs/389144.pem"
	I1030 19:45:51.655128  446965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/389144.pem
	I1030 19:45:51.659528  446965 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 19:45:51.659600  446965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/389144.pem
	I1030 19:45:51.665370  446965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/389144.pem /etc/ssl/certs/51391683.0"
	I1030 19:45:51.676314  446965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3891442.pem && ln -fs /usr/share/ca-certificates/3891442.pem /etc/ssl/certs/3891442.pem"
	I1030 19:45:51.687386  446965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3891442.pem
	I1030 19:45:51.692170  446965 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 19:45:51.692228  446965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3891442.pem
	I1030 19:45:51.697897  446965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3891442.pem /etc/ssl/certs/3ec20f2e.0"
	I1030 19:45:51.709561  446965 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 19:45:51.715357  446965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1030 19:45:51.723291  446965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1030 19:45:51.731362  446965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1030 19:45:51.739724  446965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1030 19:45:51.747383  446965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1030 19:45:51.753472  446965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1030 19:45:51.759462  446965 kubeadm.go:392] StartCluster: {Name:embed-certs-042402 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-042402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.235 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:45:51.759605  446965 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1030 19:45:51.759702  446965 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:45:51.806863  446965 cri.go:89] found id: ""
	I1030 19:45:51.806956  446965 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1030 19:45:51.818195  446965 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1030 19:45:51.818218  446965 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1030 19:45:51.818274  446965 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1030 19:45:51.828762  446965 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1030 19:45:51.830149  446965 kubeconfig.go:125] found "embed-certs-042402" server: "https://192.168.61.235:8443"
	I1030 19:45:51.832269  446965 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1030 19:45:51.842769  446965 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.235
	I1030 19:45:51.842808  446965 kubeadm.go:1160] stopping kube-system containers ...
	I1030 19:45:51.842823  446965 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1030 19:45:51.842889  446965 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:45:51.887128  446965 cri.go:89] found id: ""
	I1030 19:45:51.887209  446965 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1030 19:45:51.911918  446965 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:45:51.922685  446965 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:45:51.922714  446965 kubeadm.go:157] found existing configuration files:
	
	I1030 19:45:51.922770  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 19:45:51.935548  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:45:51.935620  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:45:51.948635  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 19:45:51.961647  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:45:51.961745  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:45:51.975880  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 19:45:51.986852  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:45:51.986922  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:45:52.001290  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 19:45:52.015249  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:45:52.015333  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1030 19:45:52.026657  446965 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 19:45:52.038560  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:52.167697  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:50.408274  446887 pod_ready.go:103] pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace has status "Ready":"False"
	I1030 19:45:51.407818  446887 pod_ready.go:93] pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace has status "Ready":"True"
	I1030 19:45:51.407850  446887 pod_ready.go:82] duration metric: took 3.006520689s for pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:51.407865  446887 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:51.413452  446887 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-768989" in "kube-system" namespace has status "Ready":"True"
	I1030 19:45:51.413481  446887 pod_ready.go:82] duration metric: took 5.607077ms for pod "kube-apiserver-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:51.413495  446887 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:52.000678  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:52.001196  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:52.001235  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:52.001148  448432 retry.go:31] will retry after 1.750749509s: waiting for machine to come up
	I1030 19:45:53.753607  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:53.754013  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:53.754038  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:53.753950  448432 retry.go:31] will retry after 1.537350682s: waiting for machine to come up
	I1030 19:45:55.293910  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:55.294396  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:55.294427  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:55.294336  448432 retry.go:31] will retry after 2.151521323s: waiting for machine to come up
	I1030 19:45:53.477258  446965 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.309509141s)
	I1030 19:45:53.477309  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:53.696850  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:53.768419  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:53.863913  446965 api_server.go:52] waiting for apiserver process to appear ...
	I1030 19:45:53.864018  446965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:54.364235  446965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:54.864820  446965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:54.887333  446965 api_server.go:72] duration metric: took 1.023419155s to wait for apiserver process to appear ...
	I1030 19:45:54.887363  446965 api_server.go:88] waiting for apiserver healthz status ...
	I1030 19:45:54.887399  446965 api_server.go:253] Checking apiserver healthz at https://192.168.61.235:8443/healthz ...
	I1030 19:45:54.887929  446965 api_server.go:269] stopped: https://192.168.61.235:8443/healthz: Get "https://192.168.61.235:8443/healthz": dial tcp 192.168.61.235:8443: connect: connection refused
	I1030 19:45:55.388396  446965 api_server.go:253] Checking apiserver healthz at https://192.168.61.235:8443/healthz ...
	I1030 19:45:57.610916  446965 api_server.go:279] https://192.168.61.235:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1030 19:45:57.610951  446965 api_server.go:103] status: https://192.168.61.235:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1030 19:45:57.610972  446965 api_server.go:253] Checking apiserver healthz at https://192.168.61.235:8443/healthz ...
	I1030 19:45:57.745722  446965 api_server.go:279] https://192.168.61.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:45:57.745782  446965 api_server.go:103] status: https://192.168.61.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:45:57.887887  446965 api_server.go:253] Checking apiserver healthz at https://192.168.61.235:8443/healthz ...
	I1030 19:45:57.895296  446965 api_server.go:279] https://192.168.61.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:45:57.895352  446965 api_server.go:103] status: https://192.168.61.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:45:54.167893  446887 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace has status "Ready":"False"
	I1030 19:45:54.920921  446887 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace has status "Ready":"True"
	I1030 19:45:54.920954  446887 pod_ready.go:82] duration metric: took 3.507449937s for pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:54.920974  446887 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tsr5q" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:54.927123  446887 pod_ready.go:93] pod "kube-proxy-tsr5q" in "kube-system" namespace has status "Ready":"True"
	I1030 19:45:54.927150  446887 pod_ready.go:82] duration metric: took 6.167749ms for pod "kube-proxy-tsr5q" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:54.927164  446887 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:54.932513  446887 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-768989" in "kube-system" namespace has status "Ready":"True"
	I1030 19:45:54.932540  446887 pod_ready.go:82] duration metric: took 5.367579ms for pod "kube-scheduler-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:54.932557  446887 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:56.939174  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:45:58.388076  446965 api_server.go:253] Checking apiserver healthz at https://192.168.61.235:8443/healthz ...
	I1030 19:45:58.393192  446965 api_server.go:279] https://192.168.61.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:45:58.393235  446965 api_server.go:103] status: https://192.168.61.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:45:58.887710  446965 api_server.go:253] Checking apiserver healthz at https://192.168.61.235:8443/healthz ...
	I1030 19:45:58.891923  446965 api_server.go:279] https://192.168.61.235:8443/healthz returned 200:
	ok
	I1030 19:45:58.897783  446965 api_server.go:141] control plane version: v1.31.2
	I1030 19:45:58.897816  446965 api_server.go:131] duration metric: took 4.010443495s to wait for apiserver health ...
	I1030 19:45:58.897836  446965 cni.go:84] Creating CNI manager for ""
	I1030 19:45:58.897844  446965 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:45:58.899669  446965 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1030 19:45:57.447894  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:57.448365  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:57.448392  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:57.448320  448432 retry.go:31] will retry after 2.439938206s: waiting for machine to come up
	I1030 19:45:59.889685  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:59.890166  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:59.890205  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:59.890113  448432 retry.go:31] will retry after 3.836080386s: waiting for machine to come up
	I1030 19:45:58.901122  446965 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1030 19:45:58.924765  446965 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1030 19:45:58.946342  446965 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 19:45:58.956378  446965 system_pods.go:59] 8 kube-system pods found
	I1030 19:45:58.956412  446965 system_pods.go:61] "coredns-7c65d6cfc9-tv6kc" [d752975e-e126-4d22-9b35-b9f57d1170b1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1030 19:45:58.956419  446965 system_pods.go:61] "etcd-embed-certs-042402" [fa9b90f6-82b2-448a-ad86-9cbba45a4c2d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1030 19:45:58.956427  446965 system_pods.go:61] "kube-apiserver-embed-certs-042402" [48af3136-74d9-4062-bb9a-e48dafd311a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1030 19:45:58.956436  446965 system_pods.go:61] "kube-controller-manager-embed-certs-042402" [0ae60724-6634-464a-af2f-e08148fb3eaf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1030 19:45:58.956445  446965 system_pods.go:61] "kube-proxy-qwjr9" [309ee447-8d52-49e7-a805-2b7c0af2a3bc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1030 19:45:58.956450  446965 system_pods.go:61] "kube-scheduler-embed-certs-042402" [f82ff11e-8305-4d05-b370-fd89693e5ad1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1030 19:45:58.956454  446965 system_pods.go:61] "metrics-server-6867b74b74-4x9t6" [1160789d-9462-4d1d-9f84-5ded8394bd4e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:45:58.956459  446965 system_pods.go:61] "storage-provisioner" [d1559440-b14a-4c2a-a52e-ba39afb01f94] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1030 19:45:58.956465  446965 system_pods.go:74] duration metric: took 10.103898ms to wait for pod list to return data ...
	I1030 19:45:58.956473  446965 node_conditions.go:102] verifying NodePressure condition ...
	I1030 19:45:58.960150  446965 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 19:45:58.960182  446965 node_conditions.go:123] node cpu capacity is 2
	I1030 19:45:58.960195  446965 node_conditions.go:105] duration metric: took 3.712942ms to run NodePressure ...
	I1030 19:45:58.960219  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:59.284558  446965 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1030 19:45:59.289073  446965 kubeadm.go:739] kubelet initialised
	I1030 19:45:59.289095  446965 kubeadm.go:740] duration metric: took 4.508144ms waiting for restarted kubelet to initialise ...
	I1030 19:45:59.289104  446965 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:45:59.293538  446965 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-tv6kc" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:01.298780  446965 pod_ready.go:103] pod "coredns-7c65d6cfc9-tv6kc" in "kube-system" namespace has status "Ready":"False"
	I1030 19:45:58.940597  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:01.439118  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:05.011617  446736 start.go:364] duration metric: took 52.494265895s to acquireMachinesLock for "no-preload-960512"
	I1030 19:46:05.011674  446736 start.go:96] Skipping create...Using existing machine configuration
	I1030 19:46:05.011683  446736 fix.go:54] fixHost starting: 
	I1030 19:46:05.012022  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:05.012087  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:05.029067  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46027
	I1030 19:46:05.029484  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:05.030010  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:05.030039  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:05.030461  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:05.030690  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:05.030854  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetState
	I1030 19:46:05.032380  446736 fix.go:112] recreateIfNeeded on no-preload-960512: state=Stopped err=<nil>
	I1030 19:46:05.032408  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	W1030 19:46:05.032566  446736 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 19:46:05.035693  446736 out.go:177] * Restarting existing kvm2 VM for "no-preload-960512" ...
	I1030 19:46:03.727617  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.728028  447486 main.go:141] libmachine: (old-k8s-version-516975) Found IP for machine: 192.168.50.250
	I1030 19:46:03.728046  447486 main.go:141] libmachine: (old-k8s-version-516975) Reserving static IP address...
	I1030 19:46:03.728062  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has current primary IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.728565  447486 main.go:141] libmachine: (old-k8s-version-516975) Reserved static IP address: 192.168.50.250
	I1030 19:46:03.728600  447486 main.go:141] libmachine: (old-k8s-version-516975) Waiting for SSH to be available...
	I1030 19:46:03.728616  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "old-k8s-version-516975", mac: "52:54:00:46:32:46", ip: "192.168.50.250"} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:03.728639  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | skip adding static IP to network mk-old-k8s-version-516975 - found existing host DHCP lease matching {name: "old-k8s-version-516975", mac: "52:54:00:46:32:46", ip: "192.168.50.250"}
	I1030 19:46:03.728657  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | Getting to WaitForSSH function...
	I1030 19:46:03.730754  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.731085  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:03.731121  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.731145  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | Using SSH client type: external
	I1030 19:46:03.731212  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa (-rw-------)
	I1030 19:46:03.731252  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.250 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 19:46:03.731275  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | About to run SSH command:
	I1030 19:46:03.731289  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | exit 0
	I1030 19:46:03.862423  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | SSH cmd err, output: <nil>: 
	I1030 19:46:03.862832  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetConfigRaw
	I1030 19:46:03.863519  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetIP
	I1030 19:46:03.865977  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.866262  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:03.866297  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.866512  447486 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/config.json ...
	I1030 19:46:03.866755  447486 machine.go:93] provisionDockerMachine start ...
	I1030 19:46:03.866783  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:46:03.866994  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:03.869079  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.869384  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:03.869410  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.869603  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:03.869787  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:03.869949  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:03.870102  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:03.870243  447486 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:03.870468  447486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I1030 19:46:03.870481  447486 main.go:141] libmachine: About to run SSH command:
	hostname
	I1030 19:46:03.982986  447486 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1030 19:46:03.983018  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetMachineName
	I1030 19:46:03.983285  447486 buildroot.go:166] provisioning hostname "old-k8s-version-516975"
	I1030 19:46:03.983319  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetMachineName
	I1030 19:46:03.983502  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:03.986203  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.986576  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:03.986615  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.986765  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:03.986983  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:03.987126  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:03.987258  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:03.987419  447486 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:03.987696  447486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I1030 19:46:03.987719  447486 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-516975 && echo "old-k8s-version-516975" | sudo tee /etc/hostname
	I1030 19:46:04.112692  447486 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-516975
	
	I1030 19:46:04.112719  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:04.115948  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.116283  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.116309  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.116482  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:04.116667  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.116842  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.116966  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:04.117104  447486 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:04.117275  447486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I1030 19:46:04.117290  447486 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-516975' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-516975/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-516975' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 19:46:04.235988  447486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 19:46:04.236032  447486 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 19:46:04.236098  447486 buildroot.go:174] setting up certificates
	I1030 19:46:04.236111  447486 provision.go:84] configureAuth start
	I1030 19:46:04.236124  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetMachineName
	I1030 19:46:04.236500  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetIP
	I1030 19:46:04.239328  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.239707  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.239739  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.240009  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:04.242118  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.242440  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.242505  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.242683  447486 provision.go:143] copyHostCerts
	I1030 19:46:04.242766  447486 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem, removing ...
	I1030 19:46:04.242787  447486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 19:46:04.242847  447486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 19:46:04.242972  447486 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem, removing ...
	I1030 19:46:04.242986  447486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 19:46:04.243011  447486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 19:46:04.243072  447486 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem, removing ...
	I1030 19:46:04.243079  447486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 19:46:04.243095  447486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 19:46:04.243153  447486 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-516975 san=[127.0.0.1 192.168.50.250 localhost minikube old-k8s-version-516975]
	I1030 19:46:04.355003  447486 provision.go:177] copyRemoteCerts
	I1030 19:46:04.355061  447486 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 19:46:04.355092  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:04.357788  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.358153  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.358191  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.358397  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:04.358630  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.358809  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:04.358970  447486 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa Username:docker}
	I1030 19:46:04.446614  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 19:46:04.473708  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1030 19:46:04.497721  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1030 19:46:04.521806  447486 provision.go:87] duration metric: took 285.682041ms to configureAuth
	I1030 19:46:04.521836  447486 buildroot.go:189] setting minikube options for container-runtime
	I1030 19:46:04.521999  447486 config.go:182] Loaded profile config "old-k8s-version-516975": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1030 19:46:04.522072  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:04.524616  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.525034  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.525065  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.525282  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:04.525452  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.525616  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.525745  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:04.525916  447486 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:04.526129  447486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I1030 19:46:04.526145  447486 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 19:46:04.766663  447486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 19:46:04.766697  447486 machine.go:96] duration metric: took 899.924211ms to provisionDockerMachine
	I1030 19:46:04.766709  447486 start.go:293] postStartSetup for "old-k8s-version-516975" (driver="kvm2")
	I1030 19:46:04.766720  447486 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 19:46:04.766745  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:46:04.767081  447486 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 19:46:04.767114  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:04.769995  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.770401  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.770428  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.770580  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:04.770762  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.770973  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:04.771132  447486 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa Username:docker}
	I1030 19:46:04.858006  447486 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 19:46:04.862295  447486 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 19:46:04.862324  447486 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 19:46:04.862387  447486 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 19:46:04.862475  447486 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 19:46:04.862612  447486 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 19:46:04.872541  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:46:04.896306  447486 start.go:296] duration metric: took 129.577956ms for postStartSetup
	I1030 19:46:04.896360  447486 fix.go:56] duration metric: took 19.265077419s for fixHost
	I1030 19:46:04.896383  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:04.899009  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.899397  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.899429  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.899538  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:04.899739  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.899906  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.900101  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:04.900271  447486 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:04.900510  447486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I1030 19:46:04.900525  447486 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 19:46:05.011439  447486 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730317564.967936408
	
	I1030 19:46:05.011464  447486 fix.go:216] guest clock: 1730317564.967936408
	I1030 19:46:05.011472  447486 fix.go:229] Guest: 2024-10-30 19:46:04.967936408 +0000 UTC Remote: 2024-10-30 19:46:04.896364572 +0000 UTC m=+233.135558535 (delta=71.571836ms)
	I1030 19:46:05.011516  447486 fix.go:200] guest clock delta is within tolerance: 71.571836ms
	I1030 19:46:05.011525  447486 start.go:83] releasing machines lock for "old-k8s-version-516975", held for 19.380292064s
	I1030 19:46:05.011552  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:46:05.011853  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetIP
	I1030 19:46:05.014722  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:05.015072  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:05.015100  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:05.015225  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:46:05.015808  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:46:05.016002  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:46:05.016107  447486 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 19:46:05.016155  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:05.016265  447486 ssh_runner.go:195] Run: cat /version.json
	I1030 19:46:05.016296  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:05.018976  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:05.019189  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:05.019326  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:05.019370  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:05.019517  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:05.019604  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:05.019632  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:05.019708  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:05.019830  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:05.019918  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:05.019995  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:05.020077  447486 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa Username:docker}
	I1030 19:46:05.020157  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:05.020295  447486 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa Username:docker}
	I1030 19:46:05.100852  447486 ssh_runner.go:195] Run: systemctl --version
	I1030 19:46:05.127673  447486 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 19:46:05.279889  447486 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 19:46:05.285900  447486 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 19:46:05.285976  447486 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 19:46:05.304763  447486 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 19:46:05.304791  447486 start.go:495] detecting cgroup driver to use...
	I1030 19:46:05.304862  447486 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 19:46:05.325729  447486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 19:46:05.343047  447486 docker.go:217] disabling cri-docker service (if available) ...
	I1030 19:46:05.343128  447486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 19:46:05.358748  447486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 19:46:05.374769  447486 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 19:46:05.492589  447486 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 19:46:05.639943  447486 docker.go:233] disabling docker service ...
	I1030 19:46:05.640039  447486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 19:46:05.655449  447486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 19:46:05.669688  447486 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 19:46:05.814658  447486 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 19:46:05.957944  447486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 19:46:05.972122  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 19:46:05.990577  447486 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1030 19:46:05.990653  447486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:06.000834  447486 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 19:46:06.000907  447486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:06.011678  447486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:06.022051  447486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:06.032515  447486 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 19:46:06.043296  447486 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 19:46:06.053123  447486 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 19:46:06.053170  447486 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 19:46:06.067625  447486 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 19:46:06.081306  447486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:46:06.221181  447486 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1030 19:46:06.321848  447486 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 19:46:06.321926  447486 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 19:46:06.329697  447486 start.go:563] Will wait 60s for crictl version
	I1030 19:46:06.329757  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:06.333980  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 19:46:06.381198  447486 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1030 19:46:06.381290  447486 ssh_runner.go:195] Run: crio --version
	I1030 19:46:06.410365  447486 ssh_runner.go:195] Run: crio --version
	I1030 19:46:06.442329  447486 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1030 19:46:06.443471  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetIP
	I1030 19:46:06.446233  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:06.446621  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:06.446653  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:06.446822  447486 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1030 19:46:06.451216  447486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:46:06.464477  447486 kubeadm.go:883] updating cluster {Name:old-k8s-version-516975 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-516975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1030 19:46:06.464607  447486 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1030 19:46:06.464668  447486 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:46:06.513123  447486 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1030 19:46:06.513205  447486 ssh_runner.go:195] Run: which lz4
	I1030 19:46:06.517252  447486 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1030 19:46:06.521358  447486 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1030 19:46:06.521384  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1030 19:46:03.300213  446965 pod_ready.go:103] pod "coredns-7c65d6cfc9-tv6kc" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:05.301139  446965 pod_ready.go:103] pod "coredns-7c65d6cfc9-tv6kc" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:07.303015  446965 pod_ready.go:103] pod "coredns-7c65d6cfc9-tv6kc" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:03.939240  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:05.940212  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:07.942062  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:05.037179  446736 main.go:141] libmachine: (no-preload-960512) Calling .Start
	I1030 19:46:05.037388  446736 main.go:141] libmachine: (no-preload-960512) Ensuring networks are active...
	I1030 19:46:05.038384  446736 main.go:141] libmachine: (no-preload-960512) Ensuring network default is active
	I1030 19:46:05.038793  446736 main.go:141] libmachine: (no-preload-960512) Ensuring network mk-no-preload-960512 is active
	I1030 19:46:05.039208  446736 main.go:141] libmachine: (no-preload-960512) Getting domain xml...
	I1030 19:46:05.040083  446736 main.go:141] libmachine: (no-preload-960512) Creating domain...
	I1030 19:46:06.366674  446736 main.go:141] libmachine: (no-preload-960512) Waiting to get IP...
	I1030 19:46:06.367568  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:06.368016  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:06.368083  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:06.367984  448568 retry.go:31] will retry after 216.900908ms: waiting for machine to come up
	I1030 19:46:06.586638  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:06.587182  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:06.587213  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:06.587121  448568 retry.go:31] will retry after 319.082011ms: waiting for machine to come up
	I1030 19:46:06.907974  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:06.908650  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:06.908683  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:06.908581  448568 retry.go:31] will retry after 418.339306ms: waiting for machine to come up
	I1030 19:46:07.328241  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:07.329035  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:07.329065  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:07.328988  448568 retry.go:31] will retry after 523.624135ms: waiting for machine to come up
	I1030 19:46:07.855234  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:07.855944  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:07.855970  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:07.855849  448568 retry.go:31] will retry after 556.06146ms: waiting for machine to come up
	I1030 19:46:08.413474  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:08.414059  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:08.414098  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:08.413947  448568 retry.go:31] will retry after 713.043389ms: waiting for machine to come up
	I1030 19:46:09.128274  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:09.128737  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:09.128762  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:09.128689  448568 retry.go:31] will retry after 1.096111238s: waiting for machine to come up
	I1030 19:46:08.144772  447486 crio.go:462] duration metric: took 1.627547543s to copy over tarball
	I1030 19:46:08.144845  447486 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1030 19:46:11.104192  447486 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.959302647s)
	I1030 19:46:11.104228  447486 crio.go:469] duration metric: took 2.959426051s to extract the tarball
	I1030 19:46:11.104240  447486 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1030 19:46:11.146584  447486 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:46:11.183766  447486 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1030 19:46:11.183797  447486 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1030 19:46:11.183889  447486 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:11.183917  447486 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.183932  447486 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:11.183968  447486 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.184087  447486 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.183972  447486 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1030 19:46:11.183969  447486 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.183928  447486 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:11.185976  447486 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:11.186001  447486 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1030 19:46:11.186043  447486 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.186053  447486 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.186046  447486 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:11.185977  447486 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:11.186108  447486 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.186150  447486 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.348134  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.391191  447486 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1030 19:46:11.391327  447486 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.391399  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.396693  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.400062  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.406656  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1030 19:46:11.410534  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.410590  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.441896  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:11.460400  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:11.482465  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.554431  447486 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1030 19:46:11.554480  447486 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.554549  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.610376  447486 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1030 19:46:11.610424  447486 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1030 19:46:11.610471  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.616060  447486 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1030 19:46:11.616104  447486 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.616153  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.616177  447486 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1030 19:46:11.616217  447486 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.616282  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.617473  447486 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1030 19:46:11.617502  447486 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:11.617535  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.652124  447486 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1030 19:46:11.652185  447486 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:11.652228  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.652233  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.652237  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.652331  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1030 19:46:11.652376  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.652433  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.652483  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:11.798844  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1030 19:46:11.798859  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1030 19:46:11.798873  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.798949  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:11.799075  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.799179  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.799182  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:08.303450  446965 pod_ready.go:93] pod "coredns-7c65d6cfc9-tv6kc" in "kube-system" namespace has status "Ready":"True"
	I1030 19:46:08.303482  446965 pod_ready.go:82] duration metric: took 9.009918893s for pod "coredns-7c65d6cfc9-tv6kc" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:08.303498  446965 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:08.312186  446965 pod_ready.go:93] pod "etcd-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:46:08.312213  446965 pod_ready.go:82] duration metric: took 8.706192ms for pod "etcd-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:08.312228  446965 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:10.320161  446965 pod_ready.go:103] pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:10.439107  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:12.439663  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:10.226842  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:10.227315  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:10.227346  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:10.227261  448568 retry.go:31] will retry after 1.165335625s: waiting for machine to come up
	I1030 19:46:11.394231  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:11.394817  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:11.394851  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:11.394763  448568 retry.go:31] will retry after 1.292571083s: waiting for machine to come up
	I1030 19:46:12.688486  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:12.688919  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:12.688965  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:12.688862  448568 retry.go:31] will retry after 1.97645889s: waiting for machine to come up
	I1030 19:46:14.667783  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:14.668245  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:14.668278  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:14.668200  448568 retry.go:31] will retry after 2.020488863s: waiting for machine to come up
	I1030 19:46:11.942258  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:11.942265  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1030 19:46:11.942365  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.942352  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.942421  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.946933  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:12.064951  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1030 19:46:12.067930  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:12.067990  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1030 19:46:12.068057  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1030 19:46:12.068078  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1030 19:46:12.083122  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1030 19:46:12.107265  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1030 19:46:13.402970  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:13.551979  447486 cache_images.go:92] duration metric: took 2.368158873s to LoadCachedImages
	W1030 19:46:13.552080  447486 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I1030 19:46:13.552096  447486 kubeadm.go:934] updating node { 192.168.50.250 8443 v1.20.0 crio true true} ...
	I1030 19:46:13.552211  447486 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-516975 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-516975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1030 19:46:13.552276  447486 ssh_runner.go:195] Run: crio config
	I1030 19:46:13.605982  447486 cni.go:84] Creating CNI manager for ""
	I1030 19:46:13.606008  447486 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:46:13.606020  447486 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1030 19:46:13.606049  447486 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.250 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-516975 NodeName:old-k8s-version-516975 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.250"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.250 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1030 19:46:13.606223  447486 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.250
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-516975"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.250
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.250"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1030 19:46:13.606299  447486 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1030 19:46:13.616954  447486 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 19:46:13.617034  447486 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1030 19:46:13.627440  447486 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1030 19:46:13.644821  447486 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 19:46:13.662070  447486 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1030 19:46:13.679198  447486 ssh_runner.go:195] Run: grep 192.168.50.250	control-plane.minikube.internal$ /etc/hosts
	I1030 19:46:13.682992  447486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.250	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:46:13.697879  447486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:46:13.819975  447486 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:46:13.838669  447486 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975 for IP: 192.168.50.250
	I1030 19:46:13.838695  447486 certs.go:194] generating shared ca certs ...
	I1030 19:46:13.838716  447486 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:46:13.838888  447486 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 19:46:13.838946  447486 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 19:46:13.838962  447486 certs.go:256] generating profile certs ...
	I1030 19:46:13.839064  447486 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/client.key
	I1030 19:46:13.839149  447486 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/apiserver.key.685bdf3e
	I1030 19:46:13.839208  447486 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/proxy-client.key
	I1030 19:46:13.839375  447486 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem (1338 bytes)
	W1030 19:46:13.839429  447486 certs.go:480] ignoring /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144_empty.pem, impossibly tiny 0 bytes
	I1030 19:46:13.839442  447486 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 19:46:13.839476  447486 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 19:46:13.839509  447486 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 19:46:13.839545  447486 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 19:46:13.839609  447486 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:46:13.840381  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 19:46:13.868947  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 19:46:13.923848  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 19:46:13.973167  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 19:46:14.009333  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1030 19:46:14.042397  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1030 19:46:14.073927  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 19:46:14.109209  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1030 19:46:14.135708  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /usr/share/ca-certificates/3891442.pem (1708 bytes)
	I1030 19:46:14.162145  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 19:46:14.186176  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem --> /usr/share/ca-certificates/389144.pem (1338 bytes)
	I1030 19:46:14.210362  447486 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1030 19:46:14.228727  447486 ssh_runner.go:195] Run: openssl version
	I1030 19:46:14.234436  447486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/389144.pem && ln -fs /usr/share/ca-certificates/389144.pem /etc/ssl/certs/389144.pem"
	I1030 19:46:14.245497  447486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/389144.pem
	I1030 19:46:14.250026  447486 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 19:46:14.250077  447486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/389144.pem
	I1030 19:46:14.255727  447486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/389144.pem /etc/ssl/certs/51391683.0"
	I1030 19:46:14.266674  447486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3891442.pem && ln -fs /usr/share/ca-certificates/3891442.pem /etc/ssl/certs/3891442.pem"
	I1030 19:46:14.277813  447486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3891442.pem
	I1030 19:46:14.282378  447486 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 19:46:14.282435  447486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3891442.pem
	I1030 19:46:14.288338  447486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3891442.pem /etc/ssl/certs/3ec20f2e.0"
	I1030 19:46:14.300057  447486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 19:46:14.312295  447486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:46:14.317488  447486 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:46:14.317555  447486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:46:14.323518  447486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 19:46:14.335182  447486 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 19:46:14.339998  447486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1030 19:46:14.346145  447486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1030 19:46:14.352474  447486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1030 19:46:14.358687  447486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1030 19:46:14.364275  447486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1030 19:46:14.370038  447486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1030 19:46:14.376051  447486 kubeadm.go:392] StartCluster: {Name:old-k8s-version-516975 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-516975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:46:14.376144  447486 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1030 19:46:14.376187  447486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:46:14.423395  447486 cri.go:89] found id: ""
	I1030 19:46:14.423477  447486 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1030 19:46:14.435404  447486 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1030 19:46:14.435485  447486 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1030 19:46:14.435558  447486 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1030 19:46:14.448035  447486 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1030 19:46:14.448911  447486 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-516975" does not appear in /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 19:46:14.449557  447486 kubeconfig.go:62] /home/jenkins/minikube-integration/19883-381834/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-516975" cluster setting kubeconfig missing "old-k8s-version-516975" context setting]
	I1030 19:46:14.450419  447486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/kubeconfig: {Name:mkad9ac9beb9742fc1a420e6d4412db8b65962e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:46:14.452252  447486 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1030 19:46:14.462634  447486 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.250
	I1030 19:46:14.462676  447486 kubeadm.go:1160] stopping kube-system containers ...
	I1030 19:46:14.462693  447486 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1030 19:46:14.462750  447486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:46:14.508286  447486 cri.go:89] found id: ""
	I1030 19:46:14.508380  447486 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1030 19:46:14.527996  447486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:46:14.539011  447486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:46:14.539037  447486 kubeadm.go:157] found existing configuration files:
	
	I1030 19:46:14.539094  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 19:46:14.550159  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:46:14.550243  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:46:14.561350  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 19:46:14.571353  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:46:14.571430  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:46:14.584480  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 19:46:14.598307  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:46:14.598400  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:46:14.611632  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 19:46:14.621644  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:46:14.621705  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
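The grep/rm pairs above are a stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and is otherwise deleted so kubeadm can regenerate it in the next phase. A rough Go equivalent of that decision, run locally; the file list comes from the log and the rest is an illustrative simplification:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	for _, conf := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		data, err := os.ReadFile(conf)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			// Missing file or wrong endpoint: remove it so `kubeadm init phase
    			// kubeconfig all` can write a fresh one (mirrors the sudo rm -f calls).
    			_ = os.Remove(conf)
    			fmt.Println("removed (or absent):", conf)
    			continue
    		}
    		fmt.Println("kept:", conf)
    	}
    }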
	I1030 19:46:14.632161  447486 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 19:46:14.642295  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:14.783130  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:15.694839  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:15.923329  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:16.052124  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
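Instead of a full kubeadm init, the restart path replays individual init phases in a fixed order against the same /var/tmp/minikube/kubeadm.yaml: certs, kubeconfig, kubelet-start, control-plane, then local etcd. A compact sketch of that sequence; the PATH prefix and config path are taken from the log, error handling is simplified:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	phases := [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		cmd := exec.Command("kubeadm", args...)
    		// Prefer the kubelet/kubeadm binaries staged by minikube for this version.
    		cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.20.0:"+os.Getenv("PATH"))
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		if err := cmd.Run(); err != nil {
    			fmt.Fprintln(os.Stderr, "phase failed:", p, err)
    			return
    		}
    	}
    }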
	I1030 19:46:16.143607  447486 api_server.go:52] waiting for apiserver process to appear ...
	I1030 19:46:16.143710  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:16.643943  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
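From here api_server.go polls for the kube-apiserver process roughly twice a second (the pgrep probes below are about 500ms apart) until it shows up or a deadline passes. A minimal sketch of that wait loop; the two-minute timeout is an assumption, not minikube's constant:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
    	for time.Now().Before(deadline) {
    		// Same probe as the log: exact, full-command-line match with pgrep.
    		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    			fmt.Println("kube-apiserver process is up")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for kube-apiserver")
    }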
	I1030 19:46:13.245727  446965 pod_ready.go:103] pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:13.702440  446965 pod_ready.go:93] pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:46:13.702472  446965 pod_ready.go:82] duration metric: took 5.390235543s for pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:13.702497  446965 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:13.948519  446965 pod_ready.go:93] pod "kube-controller-manager-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:46:13.948549  446965 pod_ready.go:82] duration metric: took 246.042214ms for pod "kube-controller-manager-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:13.948565  446965 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-qwjr9" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:13.958077  446965 pod_ready.go:93] pod "kube-proxy-qwjr9" in "kube-system" namespace has status "Ready":"True"
	I1030 19:46:13.958108  446965 pod_ready.go:82] duration metric: took 9.534813ms for pod "kube-proxy-qwjr9" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:13.958122  446965 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:13.974906  446965 pod_ready.go:93] pod "kube-scheduler-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:46:13.974931  446965 pod_ready.go:82] duration metric: took 16.800547ms for pod "kube-scheduler-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:13.974944  446965 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:15.982433  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:17.983261  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
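The pod_ready.go lines interleaved here belong to a second profile (embed-certs-042402) that waits up to 4m0s per control-plane pod for its Ready condition, moving on to the next pod as each flips to True. Roughly the same check can be reproduced with kubectl; the context, namespace and pod name below are taken from the log, while the polling interval is an assumption:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // readyStatus returns the pod's Ready condition ("True", "False", or "" if unknown).
    func readyStatus(ctx, ns, pod string) string {
    	out, err := exec.Command("kubectl", "--context", ctx, "-n", ns, "get", "pod", pod,
    		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
    	if err != nil {
    		return ""
    	}
    	return strings.TrimSpace(string(out))
    }

    func main() {
    	deadline := time.Now().Add(4 * time.Minute) // matches the 4m0s wait in the log
    	for time.Now().Before(deadline) {
    		if readyStatus("embed-certs-042402", "kube-system", "kube-apiserver-embed-certs-042402") == "True" {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("pod never became Ready")
    }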
	I1030 19:46:14.440176  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:16.939769  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:16.690435  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:16.690908  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:16.690997  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:16.690904  448568 retry.go:31] will retry after 2.729556206s: waiting for machine to come up
	I1030 19:46:19.423740  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:19.424246  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:19.424271  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:19.424195  448568 retry.go:31] will retry after 2.822049517s: waiting for machine to come up
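Meanwhile the no-preload VM is still booting: libmachine cannot yet find a DHCP lease for its MAC address, so retry.go waits with growing delays (2.7s, 2.8s, then 5.2s above) before asking again. A generic sketch of that retry shape; lookupIP is a placeholder, not minikube's API:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP stands in for querying the hypervisor's DHCP leases; it fails
    // until the guest has actually acquired an address.
    func lookupIP(mac string) (string, error) {
    	return "", errors.New("no lease yet for " + mac)
    }

    func main() {
    	delay := 2 * time.Second
    	for attempt := 1; attempt <= 10; attempt++ {
    		ip, err := lookupIP("52:54:00:71:5b:b2")
    		if err == nil {
    			fmt.Println("machine is up at", ip)
    			return
    		}
    		// Grow the wait a little each round and add jitter, like the
    		// increasing "will retry after ..." intervals in the log.
    		wait := delay + time.Duration(rand.Int63n(int64(time.Second)))
    		fmt.Printf("attempt %d: %v; retrying in %v\n", attempt, err, wait)
    		time.Sleep(wait)
    		delay += delay / 2
    	}
    	fmt.Println("gave up waiting for machine to come up")
    }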
	I1030 19:46:17.144678  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:17.644772  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:18.144037  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:18.644437  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:19.144273  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:19.643801  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:20.144200  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:20.644764  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:21.143898  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:21.643960  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:20.481213  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:22.981619  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:19.438946  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:21.938706  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:22.247395  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:22.247840  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:22.247869  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:22.247813  448568 retry.go:31] will retry after 5.243633747s: waiting for machine to come up
	I1030 19:46:22.144625  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:22.644446  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:23.144207  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:23.644001  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:24.143787  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:24.644166  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:25.144397  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:25.644654  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:26.144214  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:26.644275  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:25.482032  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:27.981111  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:23.940402  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:26.439369  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:27.494630  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.495107  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has current primary IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.495146  446736 main.go:141] libmachine: (no-preload-960512) Found IP for machine: 192.168.72.132
	I1030 19:46:27.495159  446736 main.go:141] libmachine: (no-preload-960512) Reserving static IP address...
	I1030 19:46:27.495588  446736 main.go:141] libmachine: (no-preload-960512) Reserved static IP address: 192.168.72.132
	I1030 19:46:27.495612  446736 main.go:141] libmachine: (no-preload-960512) Waiting for SSH to be available...
	I1030 19:46:27.495634  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "no-preload-960512", mac: "52:54:00:71:5b:b2", ip: "192.168.72.132"} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:27.495664  446736 main.go:141] libmachine: (no-preload-960512) DBG | skip adding static IP to network mk-no-preload-960512 - found existing host DHCP lease matching {name: "no-preload-960512", mac: "52:54:00:71:5b:b2", ip: "192.168.72.132"}
	I1030 19:46:27.495678  446736 main.go:141] libmachine: (no-preload-960512) DBG | Getting to WaitForSSH function...
	I1030 19:46:27.497679  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.498051  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:27.498083  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.498231  446736 main.go:141] libmachine: (no-preload-960512) DBG | Using SSH client type: external
	I1030 19:46:27.498273  446736 main.go:141] libmachine: (no-preload-960512) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa (-rw-------)
	I1030 19:46:27.498316  446736 main.go:141] libmachine: (no-preload-960512) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.132 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 19:46:27.498344  446736 main.go:141] libmachine: (no-preload-960512) DBG | About to run SSH command:
	I1030 19:46:27.498355  446736 main.go:141] libmachine: (no-preload-960512) DBG | exit 0
	I1030 19:46:27.626476  446736 main.go:141] libmachine: (no-preload-960512) DBG | SSH cmd err, output: <nil>: 
	I1030 19:46:27.626850  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetConfigRaw
	I1030 19:46:27.627519  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetIP
	I1030 19:46:27.629913  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.630288  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:27.630326  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.630561  446736 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/config.json ...
	I1030 19:46:27.630778  446736 machine.go:93] provisionDockerMachine start ...
	I1030 19:46:27.630801  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:27.631021  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:27.633457  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.633849  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:27.633880  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.634032  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:27.634200  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:27.634393  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:27.634564  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:27.634741  446736 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:27.634940  446736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I1030 19:46:27.634952  446736 main.go:141] libmachine: About to run SSH command:
	hostname
	I1030 19:46:27.743135  446736 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1030 19:46:27.743167  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetMachineName
	I1030 19:46:27.743475  446736 buildroot.go:166] provisioning hostname "no-preload-960512"
	I1030 19:46:27.743516  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetMachineName
	I1030 19:46:27.743717  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:27.746369  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.746726  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:27.746758  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.746928  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:27.747114  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:27.747266  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:27.747380  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:27.747509  446736 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:27.747740  446736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I1030 19:46:27.747759  446736 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-960512 && echo "no-preload-960512" | sudo tee /etc/hostname
	I1030 19:46:27.872871  446736 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-960512
	
	I1030 19:46:27.872899  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:27.875533  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.875867  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:27.875908  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.876072  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:27.876274  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:27.876546  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:27.876690  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:27.876851  446736 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:27.877082  446736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I1030 19:46:27.877099  446736 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-960512' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-960512/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-960512' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 19:46:27.999551  446736 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 19:46:27.999617  446736 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 19:46:27.999654  446736 buildroot.go:174] setting up certificates
	I1030 19:46:27.999667  446736 provision.go:84] configureAuth start
	I1030 19:46:27.999689  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetMachineName
	I1030 19:46:27.999998  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetIP
	I1030 19:46:28.002874  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.003285  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.003317  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.003474  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:28.005987  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.006376  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.006418  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.006545  446736 provision.go:143] copyHostCerts
	I1030 19:46:28.006620  446736 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem, removing ...
	I1030 19:46:28.006639  446736 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 19:46:28.006707  446736 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 19:46:28.006846  446736 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem, removing ...
	I1030 19:46:28.006859  446736 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 19:46:28.006898  446736 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 19:46:28.006983  446736 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem, removing ...
	I1030 19:46:28.006993  446736 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 19:46:28.007023  446736 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 19:46:28.007102  446736 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.no-preload-960512 san=[127.0.0.1 192.168.72.132 localhost minikube no-preload-960512]
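provision.go then mints a per-machine server certificate signed by the shared CA, carrying the SAN list shown above (loopback, the VM's IP, localhost, minikube and the machine name). A stripped-down sketch of that issuance with crypto/x509; the file names, key format (PKCS#1 RSA) and validity period are assumptions for illustration:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func mustPEMBlock(path string) *pem.Block {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block in " + path)
    	}
    	return block
    }

    func main() {
    	caCert, err := x509.ParseCertificate(mustPEMBlock("ca.pem").Bytes)
    	if err != nil {
    		panic(err)
    	}
    	caKey, err := x509.ParsePKCS1PrivateKey(mustPEMBlock("ca-key.pem").Bytes) // assumes PKCS#1 RSA CA key
    	if err != nil {
    		panic(err)
    	}
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-960512"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // illustrative lifetime
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs taken from the log line above.
    		DNSNames:    []string{"localhost", "minikube", "no-preload-960512"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.132")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }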
	I1030 19:46:28.317424  446736 provision.go:177] copyRemoteCerts
	I1030 19:46:28.317502  446736 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 19:46:28.317537  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:28.320089  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.320387  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.320419  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.320564  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:28.320776  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.320963  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:28.321116  446736 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa Username:docker}
	I1030 19:46:28.409344  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1030 19:46:28.434874  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 19:46:28.459903  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1030 19:46:28.486949  446736 provision.go:87] duration metric: took 487.261556ms to configureAuth
	I1030 19:46:28.486981  446736 buildroot.go:189] setting minikube options for container-runtime
	I1030 19:46:28.487219  446736 config.go:182] Loaded profile config "no-preload-960512": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:46:28.487322  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:28.489873  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.490180  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.490223  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.490349  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:28.490561  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.490719  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.490827  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:28.491003  446736 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:28.491199  446736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I1030 19:46:28.491216  446736 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 19:46:28.727045  446736 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 19:46:28.727081  446736 machine.go:96] duration metric: took 1.096287528s to provisionDockerMachine
	I1030 19:46:28.727095  446736 start.go:293] postStartSetup for "no-preload-960512" (driver="kvm2")
	I1030 19:46:28.727106  446736 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 19:46:28.727125  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:28.727460  446736 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 19:46:28.727490  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:28.730071  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.730445  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.730479  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.730652  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:28.730858  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.731010  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:28.731197  446736 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa Username:docker}
	I1030 19:46:28.817529  446736 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 19:46:28.822263  446736 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 19:46:28.822299  446736 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 19:46:28.822394  446736 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 19:46:28.822517  446736 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 19:46:28.822647  446736 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 19:46:28.832488  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:46:28.858165  446736 start.go:296] duration metric: took 131.055053ms for postStartSetup
	I1030 19:46:28.858211  446736 fix.go:56] duration metric: took 23.84652817s for fixHost
	I1030 19:46:28.858235  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:28.861136  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.861480  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.861513  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.861819  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:28.862059  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.862224  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.862373  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:28.862582  446736 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:28.862786  446736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I1030 19:46:28.862797  446736 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 19:46:28.975448  446736 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730317588.951806388
	
	I1030 19:46:28.975479  446736 fix.go:216] guest clock: 1730317588.951806388
	I1030 19:46:28.975489  446736 fix.go:229] Guest: 2024-10-30 19:46:28.951806388 +0000 UTC Remote: 2024-10-30 19:46:28.858215114 +0000 UTC m=+358.930371017 (delta=93.591274ms)
	I1030 19:46:28.975521  446736 fix.go:200] guest clock delta is within tolerance: 93.591274ms
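The fix.go lines compare the guest's date +%s.%N output against the host clock at the moment the command returned and accept the skew if it falls inside a tolerance (about 93ms here). A small local illustration of that comparison; the one-second tolerance is an assumption, not minikube's constant:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strconv"
    	"strings"
    	"time"
    )

    func main() {
    	// Stands in for running `date +%s.%N` on the guest over SSH.
    	out, err := exec.Command("date", "+%s.%N").Output()
    	if err != nil {
    		panic(err)
    	}
    	secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
    	if err != nil {
    		panic(err)
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = time.Second // assumed tolerance for the sketch
    	fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
    }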
	I1030 19:46:28.975529  446736 start.go:83] releasing machines lock for "no-preload-960512", held for 23.963879546s
	I1030 19:46:28.975555  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:28.975849  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetIP
	I1030 19:46:28.978813  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.979310  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.979341  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.979608  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:28.980197  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:28.980429  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:28.980522  446736 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 19:46:28.980567  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:28.980682  446736 ssh_runner.go:195] Run: cat /version.json
	I1030 19:46:28.980710  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:28.984058  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.984208  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.984410  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.984435  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.984582  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:28.984613  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.984636  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.984782  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.984798  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:28.984966  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:28.984974  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.985121  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:28.985187  446736 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa Username:docker}
	I1030 19:46:28.985260  446736 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa Username:docker}
	I1030 19:46:29.063734  446736 ssh_runner.go:195] Run: systemctl --version
	I1030 19:46:29.087821  446736 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 19:46:29.236289  446736 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 19:46:29.242997  446736 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 19:46:29.243088  446736 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 19:46:29.260802  446736 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 19:46:29.260836  446736 start.go:495] detecting cgroup driver to use...
	I1030 19:46:29.260930  446736 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 19:46:29.279572  446736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 19:46:29.293359  446736 docker.go:217] disabling cri-docker service (if available) ...
	I1030 19:46:29.293423  446736 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 19:46:29.306417  446736 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 19:46:29.319617  446736 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 19:46:29.440023  446736 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 19:46:29.585541  446736 docker.go:233] disabling docker service ...
	I1030 19:46:29.585630  446736 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 19:46:29.600459  446736 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 19:46:29.613611  446736 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 19:46:29.752666  446736 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 19:46:29.880152  446736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 19:46:29.893912  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 19:46:29.913099  446736 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1030 19:46:29.913160  446736 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:29.923800  446736 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 19:46:29.923882  446736 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:29.934880  446736 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:29.946088  446736 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:29.956644  446736 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 19:46:29.967199  446736 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:29.978863  446736 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:29.996225  446736 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
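The sed chain above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins pause_image to registry.k8s.io/pause:3.10, forces cgroup_manager = "cgroupfs" with conmon_cgroup = "pod", and ensures default_sysctls allows net.ipv4.ip_unprivileged_port_start=0. A simplified sketch of the same edits done with Go regexp instead of sed (it assumes no pre-existing conmon_cgroup entry; the drop-in path is taken from the log):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    	"strings"
    )

    func main() {
    	path := "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	conf := string(data)
    	// Equivalent of the first two sed invocations in the log.
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`+"\n"+`conmon_cgroup = "pod"`)
    	// Make sure unprivileged low ports are allowed inside pods.
    	if !strings.Contains(conf, "default_sysctls") {
    		conf += "\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
    	}
    	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
    		panic(err)
    	}
    	fmt.Println("updated", path)
    }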
	I1030 19:46:30.006604  446736 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 19:46:30.015954  446736 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 19:46:30.016017  446736 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 19:46:30.029194  446736 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 19:46:30.041316  446736 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:46:30.161438  446736 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1030 19:46:30.257137  446736 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 19:46:30.257209  446736 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 19:46:30.261981  446736 start.go:563] Will wait 60s for crictl version
	I1030 19:46:30.262052  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:30.266275  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 19:46:30.305128  446736 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1030 19:46:30.305228  446736 ssh_runner.go:195] Run: crio --version
	I1030 19:46:30.335445  446736 ssh_runner.go:195] Run: crio --version
	I1030 19:46:30.367026  446736 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1030 19:46:27.143768  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:27.644294  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:28.143819  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:28.643783  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:29.144405  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:29.643941  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:30.144680  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:30.644787  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:31.143873  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:31.643857  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:29.982162  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:32.480878  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:28.939126  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:30.939780  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:30.368355  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetIP
	I1030 19:46:30.371260  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:30.371651  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:30.371679  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:30.371922  446736 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1030 19:46:30.376282  446736 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:46:30.389078  446736 kubeadm.go:883] updating cluster {Name:no-preload-960512 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-960512 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.132 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1030 19:46:30.389193  446736 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 19:46:30.389228  446736 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:46:30.423375  446736 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1030 19:46:30.423402  446736 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1030 19:46:30.423508  446736 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:30.423511  446736 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1030 19:46:30.423562  446736 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1030 19:46:30.423578  446736 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1030 19:46:30.423595  446736 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1030 19:46:30.423536  446736 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1030 19:46:30.423511  446736 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1030 19:46:30.423634  446736 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1030 19:46:30.424979  446736 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1030 19:46:30.424988  446736 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1030 19:46:30.424996  446736 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1030 19:46:30.424987  446736 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:30.425021  446736 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1030 19:46:30.425036  446736 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1030 19:46:30.425029  446736 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1030 19:46:30.425061  446736 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
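
For reference, the "needs transfer" decisions that follow are driven by a simple presence check: ask the container runtime for the locally stored image ID and compare it against the hash recorded for the cached image. A minimal Go sketch of that pattern, assuming a host with podman available; it shells out to the same `sudo podman image inspect --format {{.Id}}` invocation shown in the log, and the expected hash below is the one reported for kube-proxy further down.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imageNeedsTransfer reports whether img must be copied into the container
// runtime, by comparing the image ID stored locally with the expected one.
// Any inspect error is treated as "image not present".
func imageNeedsTransfer(img, wantID string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", img).Output()
	if err != nil {
		return true // image not in the runtime's store at all
	}
	return strings.TrimSpace(string(out)) != wantID
}

func main() {
	// Hash taken from the "does not exist at hash ..." line for kube-proxy below.
	if imageNeedsTransfer("registry.k8s.io/kube-proxy:v1.31.2",
		"505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38") {
		fmt.Println("registry.k8s.io/kube-proxy:v1.31.2 needs transfer")
	}
}
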
	I1030 19:46:30.612665  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1030 19:46:30.618602  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1030 19:46:30.636563  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1030 19:46:30.680808  446736 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1030 19:46:30.680858  446736 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1030 19:46:30.680911  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:30.749318  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1030 19:46:30.750405  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1030 19:46:30.751514  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1030 19:46:30.752746  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1030 19:46:30.768614  446736 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1030 19:46:30.768663  446736 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1030 19:46:30.768714  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:30.768723  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1030 19:46:30.881778  446736 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1030 19:46:30.881811  446736 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1030 19:46:30.881821  446736 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1030 19:46:30.881844  446736 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1030 19:46:30.881862  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:30.881883  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:30.884827  446736 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1030 19:46:30.884861  446736 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1030 19:46:30.884901  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:30.891812  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1030 19:46:30.891882  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1030 19:46:30.891907  446736 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1030 19:46:30.891940  446736 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1030 19:46:30.891981  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:30.891986  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1030 19:46:30.892142  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1030 19:46:30.893781  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1030 19:46:30.992346  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1030 19:46:30.992372  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1030 19:46:30.992404  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1030 19:46:30.995602  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1030 19:46:30.995730  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1030 19:46:30.995786  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1030 19:46:31.123892  446736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1030 19:46:31.123996  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1030 19:46:31.124018  446736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1030 19:46:31.132177  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1030 19:46:31.132209  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1030 19:46:31.132311  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1030 19:46:31.132335  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1030 19:46:31.220011  446736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1030 19:46:31.220043  446736 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1030 19:46:31.220100  446736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1030 19:46:31.220224  446736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1030 19:46:31.220329  446736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1030 19:46:31.262583  446736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1030 19:46:31.262685  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1030 19:46:31.262698  446736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1030 19:46:31.269015  446736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1030 19:46:31.269117  446736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1030 19:46:31.269710  446736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1030 19:46:31.269793  446736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1030 19:46:32.667341  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:33.216743  446736 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (1.99661544s)
	I1030 19:46:33.216787  446736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1030 19:46:33.216787  446736 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.996433716s)
	I1030 19:46:33.216820  446736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1030 19:46:33.216829  446736 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1030 19:46:33.216840  446736 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (1.95412356s)
	I1030 19:46:33.216872  446736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1030 19:46:33.216884  446736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1030 19:46:33.216925  446736 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2: (1.954216284s)
	I1030 19:46:33.216964  446736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1030 19:46:33.216989  446736 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (1.947854262s)
	I1030 19:46:33.217014  446736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1030 19:46:33.217027  446736 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2: (1.947220506s)
	I1030 19:46:33.217040  446736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1030 19:46:33.217059  446736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1030 19:46:33.217140  446736 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1030 19:46:33.217178  446736 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:33.217222  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:32.144229  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:32.644079  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:33.144028  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:33.643950  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:34.143888  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:34.643861  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:35.144210  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:35.644677  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:36.144712  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:36.644549  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
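
The repeated pgrep runs above are a wait-for-process loop: the same pattern is probed roughly every 500ms until a matching kube-apiserver process appears. A minimal sketch of that loop, with the poll interval and overall timeout as illustrative values:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls `sudo pgrep -xnf pattern` until it exits successfully
// (a match exists) or the timeout expires, mirroring the cadence in the log.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
			return nil // pgrep found at least one matching process
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("no process matching %q after %s", pattern, timeout)
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
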
	I1030 19:46:34.481488  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:36.982429  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:33.438659  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:35.439072  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:37.440028  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
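
The pod_ready lines interleaved above come from repeatedly checking a pod's Ready condition in the kube-system namespace. A minimal, hypothetical equivalent using kubectl's jsonpath output; the pod name and namespace are taken from the log, while the polling helper itself is only illustrative of the pattern:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady returns true only when the pod's Ready condition has status "True".
func podReady(namespace, pod string) bool {
	out, err := exec.Command("kubectl", "get", "pod", pod, "-n", namespace,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	return err == nil && strings.TrimSpace(string(out)) == "True"
}

func main() {
	for !podReady("kube-system", "metrics-server-6867b74b74-4x9t6") {
		fmt.Println(`pod has status "Ready":"False"`)
		time.Sleep(2 * time.Second)
	}
}
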
	I1030 19:46:35.577178  446736 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (2.360267806s)
	I1030 19:46:35.577219  446736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1030 19:46:35.577227  446736 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2: (2.360144583s)
	I1030 19:46:35.577243  446736 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1030 19:46:35.577252  446736 ssh_runner.go:235] Completed: which crictl: (2.360017291s)
	I1030 19:46:35.577259  446736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1030 19:46:35.577305  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:35.577309  446736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1030 19:46:35.615490  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:39.492071  446736 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.914649003s)
	I1030 19:46:39.492116  446736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1030 19:46:39.492142  446736 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.876615301s)
	I1030 19:46:39.492211  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:39.492148  446736 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1030 19:46:39.492295  446736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1030 19:46:39.535258  446736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1030 19:46:39.535417  446736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1030 19:46:37.144681  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:37.643833  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:38.143783  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:38.644359  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:39.144745  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:39.644625  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:40.144535  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:40.643881  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:41.144754  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:41.644070  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:39.302627  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:41.480981  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:39.940272  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:42.439827  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:41.566095  446736 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.073767908s)
	I1030 19:46:41.566140  446736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1030 19:46:41.566167  446736 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1030 19:46:41.566169  446736 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.030723752s)
	I1030 19:46:41.566210  446736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1030 19:46:41.566224  446736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1030 19:46:43.628473  446736 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.06223599s)
	I1030 19:46:43.628500  446736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1030 19:46:43.628525  446736 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1030 19:46:43.628570  446736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1030 19:46:42.144672  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:42.644533  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:43.144320  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:43.644574  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:44.144465  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:44.644428  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:45.143785  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:45.643767  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:46.144467  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:46.644496  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:43.481495  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:45.481844  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:47.982318  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:44.940061  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:47.439131  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:45.079808  446736 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.451207821s)
	I1030 19:46:45.079843  446736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1030 19:46:45.079870  446736 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1030 19:46:45.079918  446736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1030 19:46:46.026472  446736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1030 19:46:46.026538  446736 cache_images.go:123] Successfully loaded all cached images
	I1030 19:46:46.026547  446736 cache_images.go:92] duration metric: took 15.603128567s to LoadCachedImages
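
A minimal sketch of the cache-loading step that just completed: stat the image tarball already present on the node, skip the copy when it exists (the "copy: skipping ... (exists)" case above), then stream it into CRI-O's image store with `sudo podman load -i`. The path below is one of the files from the log; the helper itself is illustrative, not minikube's own function.

package main

import (
	"fmt"
	"os/exec"
)

// loadCachedImage loads a previously transferred image tarball into the
// container runtime. The stat call mirrors the log's `stat -c "%s %y"`
// existence check before the load.
func loadCachedImage(path string) error {
	if err := exec.Command("stat", "-c", "%s %y", path).Run(); err != nil {
		return fmt.Errorf("tarball missing, would need transfer first: %w", err)
	}
	return exec.Command("sudo", "podman", "load", "-i", path).Run()
}

func main() {
	if err := loadCachedImage("/var/lib/minikube/images/kube-proxy_v1.31.2"); err != nil {
		fmt.Println(err)
	}
}
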
	I1030 19:46:46.026562  446736 kubeadm.go:934] updating node { 192.168.72.132 8443 v1.31.2 crio true true} ...
	I1030 19:46:46.026722  446736 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-960512 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.132
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-960512 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1030 19:46:46.026819  446736 ssh_runner.go:195] Run: crio config
	I1030 19:46:46.080342  446736 cni.go:84] Creating CNI manager for ""
	I1030 19:46:46.080367  446736 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:46:46.080376  446736 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1030 19:46:46.080399  446736 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.132 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-960512 NodeName:no-preload-960512 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.132"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.132 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1030 19:46:46.080574  446736 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.132
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-960512"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.132"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.132"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1030 19:46:46.080645  446736 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1030 19:46:46.091323  446736 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 19:46:46.091400  446736 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1030 19:46:46.100320  446736 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1030 19:46:46.117369  446736 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 19:46:46.133667  446736 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
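
The kubeadm config dumped above is rendered and first staged as /var/tmp/minikube/kubeadm.yaml.new before being compared against and copied over the live file. A minimal sketch of that render-then-stage idea using Go's text/template; the fragment and field values come from the dump above, and the rendering helper is purely illustrative:

package main

import (
	"os"
	"text/template"
)

// A fragment of the InitConfiguration shown above; only the node-specific
// fields are templated here.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
`

func main() {
	tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
	// Staged next to the live config, mirroring kubeadm.yaml.new in the log.
	f, err := os.Create("kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	if err := tmpl.Execute(f, map[string]any{
		"AdvertiseAddress": "192.168.72.132",
		"BindPort":         8443,
	}); err != nil {
		panic(err)
	}
}
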
	I1030 19:46:46.157251  446736 ssh_runner.go:195] Run: grep 192.168.72.132	control-plane.minikube.internal$ /etc/hosts
	I1030 19:46:46.161543  446736 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.132	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
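
The bash one-liner above makes /etc/hosts resolve control-plane.minikube.internal to the node IP: drop any existing line for that host, then append a fresh entry. A minimal Go sketch of the same ensure-one-entry rule (hostname and IP from the log; the helper is illustrative and skips the temp-file dance):

package main

import (
	"os"
	"strings"
)

// ensureHostEntry removes any line ending in "\t<host>" and appends
// "<ip>\t<host>", mirroring the grep -v / echo pipeline in the log.
func ensureHostEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	_ = ensureHostEntry("/etc/hosts", "192.168.72.132", "control-plane.minikube.internal")
}
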
	I1030 19:46:46.173451  446736 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:46:46.303532  446736 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:46:46.321855  446736 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512 for IP: 192.168.72.132
	I1030 19:46:46.321883  446736 certs.go:194] generating shared ca certs ...
	I1030 19:46:46.321905  446736 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:46:46.322108  446736 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 19:46:46.322171  446736 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 19:46:46.322189  446736 certs.go:256] generating profile certs ...
	I1030 19:46:46.322294  446736 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/client.key
	I1030 19:46:46.322381  446736 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/apiserver.key.378d6029
	I1030 19:46:46.322436  446736 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/proxy-client.key
	I1030 19:46:46.322609  446736 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem (1338 bytes)
	W1030 19:46:46.322649  446736 certs.go:480] ignoring /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144_empty.pem, impossibly tiny 0 bytes
	I1030 19:46:46.322661  446736 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 19:46:46.322692  446736 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 19:46:46.322727  446736 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 19:46:46.322756  446736 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 19:46:46.322812  446736 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:46:46.323679  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 19:46:46.362339  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 19:46:46.396270  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 19:46:46.443482  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 19:46:46.468142  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1030 19:46:46.507418  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1030 19:46:46.534091  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 19:46:46.557105  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1030 19:46:46.579880  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 19:46:46.602665  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem --> /usr/share/ca-certificates/389144.pem (1338 bytes)
	I1030 19:46:46.625853  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /usr/share/ca-certificates/3891442.pem (1708 bytes)
	I1030 19:46:46.651685  446736 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1030 19:46:46.670898  446736 ssh_runner.go:195] Run: openssl version
	I1030 19:46:46.677083  446736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3891442.pem && ln -fs /usr/share/ca-certificates/3891442.pem /etc/ssl/certs/3891442.pem"
	I1030 19:46:46.688814  446736 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3891442.pem
	I1030 19:46:46.693349  446736 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 19:46:46.693399  446736 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3891442.pem
	I1030 19:46:46.699221  446736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3891442.pem /etc/ssl/certs/3ec20f2e.0"
	I1030 19:46:46.710200  446736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 19:46:46.721001  446736 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:46:46.725283  446736 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:46:46.725343  446736 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:46:46.730798  446736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 19:46:46.741915  446736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/389144.pem && ln -fs /usr/share/ca-certificates/389144.pem /etc/ssl/certs/389144.pem"
	I1030 19:46:46.752767  446736 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/389144.pem
	I1030 19:46:46.757109  446736 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 19:46:46.757150  446736 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/389144.pem
	I1030 19:46:46.762844  446736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/389144.pem /etc/ssl/certs/51391683.0"
	I1030 19:46:46.773796  446736 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 19:46:46.778156  446736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1030 19:46:46.784099  446736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1030 19:46:46.789960  446736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1030 19:46:46.796056  446736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1030 19:46:46.801880  446736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1030 19:46:46.807680  446736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
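
Each `openssl x509 -noout -checkend 86400` call above asks whether the certificate will still be valid 24 hours from now. A minimal in-process equivalent using Go's crypto/x509; the path is one of the files checked above, and the helper itself is only a sketch of the same check:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path remains valid for at
// least d more, mirroring `openssl x509 -noout -checkend <seconds>`.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}
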
	I1030 19:46:46.813574  446736 kubeadm.go:392] StartCluster: {Name:no-preload-960512 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-960512 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.132 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:46:46.813694  446736 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1030 19:46:46.813735  446736 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:46:46.856225  446736 cri.go:89] found id: ""
	I1030 19:46:46.856309  446736 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1030 19:46:46.866696  446736 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1030 19:46:46.866721  446736 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1030 19:46:46.866774  446736 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1030 19:46:46.876622  446736 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1030 19:46:46.877777  446736 kubeconfig.go:125] found "no-preload-960512" server: "https://192.168.72.132:8443"
	I1030 19:46:46.880116  446736 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1030 19:46:46.889710  446736 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.132
	I1030 19:46:46.889743  446736 kubeadm.go:1160] stopping kube-system containers ...
	I1030 19:46:46.889761  446736 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1030 19:46:46.889837  446736 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:46:46.927109  446736 cri.go:89] found id: ""
	I1030 19:46:46.927177  446736 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1030 19:46:46.944519  446736 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:46:46.954607  446736 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:46:46.954626  446736 kubeadm.go:157] found existing configuration files:
	
	I1030 19:46:46.954669  446736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 19:46:46.963987  446736 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:46:46.964086  446736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:46:46.973787  446736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 19:46:46.983447  446736 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:46:46.983496  446736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:46:46.993101  446736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 19:46:47.003713  446736 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:46:47.003773  446736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:46:47.013162  446736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 19:46:47.022411  446736 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:46:47.022479  446736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
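
The grep/rm pairs above implement a simple staleness rule: a kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is removed so kubeadm can regenerate it. A minimal sketch of that rule, with the paths and endpoint taken from the log and the helper itself illustrative:

package main

import (
	"os"
	"strings"
)

const endpoint = "https://control-plane.minikube.internal:8443"

// removeIfStale deletes conf when it is missing or does not mention the
// expected control-plane endpoint, matching the grep / rm -f sequence above.
func removeIfStale(conf string) {
	data, err := os.ReadFile(conf)
	if err != nil || !strings.Contains(string(data), endpoint) {
		os.Remove(conf) // rm -f semantics: ignore errors, including "not found"
	}
}

func main() {
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		removeIfStale(conf)
	}
}
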
	I1030 19:46:47.031878  446736 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 19:46:47.041616  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:47.156846  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:48.637250  446736 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.480364831s)
	I1030 19:46:48.637284  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:48.836676  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:48.908664  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:48.987298  446736 api_server.go:52] waiting for apiserver process to appear ...
	I1030 19:46:48.987411  446736 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:49.488330  446736 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:47.143932  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:47.644228  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:48.144124  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:48.643923  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:49.144466  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:49.643968  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:50.144811  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:50.643785  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:51.144372  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:51.644019  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:49.983127  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:52.482250  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:49.939257  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:52.439840  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:49.988463  446736 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:50.024092  446736 api_server.go:72] duration metric: took 1.036791371s to wait for apiserver process to appear ...
	I1030 19:46:50.024127  446736 api_server.go:88] waiting for apiserver healthz status ...
	I1030 19:46:50.024155  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:50.024711  446736 api_server.go:269] stopped: https://192.168.72.132:8443/healthz: Get "https://192.168.72.132:8443/healthz": dial tcp 192.168.72.132:8443: connect: connection refused
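
The healthz probing that begins here follows the usual anonymous-probe pattern: request https://<node>:8443/healthz with certificate verification disabled, tolerate connection-refused as well as the 403 and 500 answers seen below while the apiserver bootstraps, and retry until it returns 200. A minimal sketch of that loop, with the address taken from the log and the interval and timeout as illustrative values:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls url until it answers 200 OK or the timeout expires.
// Verification is skipped because the probe is anonymous, which is also why
// early responses come back 403 before RBAC bootstrap completes.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver healthz not ready after %s", timeout)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.72.132:8443/healthz", 4*time.Minute))
}
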
	I1030 19:46:50.524543  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:52.757497  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1030 19:46:52.757537  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1030 19:46:52.757558  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:52.847598  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:46:52.847638  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:46:53.024885  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:53.030717  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:46:53.030749  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:46:53.524384  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:53.531420  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:46:53.531459  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:46:54.025006  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:54.030512  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:46:54.030545  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:46:54.525157  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:54.529426  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:46:54.529453  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:46:55.025276  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:55.029608  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:46:55.029634  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:46:55.525041  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:55.529303  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:46:55.529339  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:46:56.024906  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:56.029520  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 200:
	ok
	I1030 19:46:56.035579  446736 api_server.go:141] control plane version: v1.31.2
	I1030 19:46:56.035609  446736 api_server.go:131] duration metric: took 6.011468992s to wait for apiserver health ...
	I1030 19:46:56.035619  446736 cni.go:84] Creating CNI manager for ""
	I1030 19:46:56.035625  446736 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:46:56.037524  446736 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
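	The api_server.go lines above show minikube polling https://192.168.72.132:8443/healthz roughly every 500ms, logging the failing poststarthooks while the endpoint returns 500, and moving on once it returns 200 (about 6s here). A minimal sketch of such a poll loop, assuming a hypothetical waitForHealthz helper rather than minikube's actual implementation:

	// Hypothetical sketch: poll an apiserver /healthz endpoint every 500ms
	// until it returns HTTP 200 or the timeout expires, mirroring the wait
	// loop visible in the log above. Not minikube's actual api_server.go code.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// The apiserver certificate is not trusted by this host during
			// bring-up, so certificate verification is skipped in this sketch.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200: control plane is answering
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.72.132:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}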
	I1030 19:46:52.144732  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:52.644528  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:53.144074  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:53.643889  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:54.143976  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:54.644535  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:55.144783  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:55.644114  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:56.144728  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:56.643846  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:56.038963  446736 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1030 19:46:56.050330  446736 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1030 19:46:56.069509  446736 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 19:46:56.079237  446736 system_pods.go:59] 8 kube-system pods found
	I1030 19:46:56.079268  446736 system_pods.go:61] "coredns-7c65d6cfc9-6cdl4" [95a0a01a-b0ea-4e0c-ac44-b7f7a486e4e0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1030 19:46:56.079275  446736 system_pods.go:61] "etcd-no-preload-960512" [481cc4bc-fe2e-48fd-a486-574daf879096] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1030 19:46:56.079283  446736 system_pods.go:61] "kube-apiserver-no-preload-960512" [3662715f-dd2b-4522-aed3-3857e1104b8c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1030 19:46:56.079288  446736 system_pods.go:61] "kube-controller-manager-no-preload-960512" [cb6045a8-1703-4693-90bf-81c7c990cb38] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1030 19:46:56.079294  446736 system_pods.go:61] "kube-proxy-fxqqc" [58db3fab-21e3-41b7-99f9-46ba3081db97] Running
	I1030 19:46:56.079299  446736 system_pods.go:61] "kube-scheduler-no-preload-960512" [6e293c25-ba96-4213-b107-236a1b828918] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1030 19:46:56.079304  446736 system_pods.go:61] "metrics-server-6867b74b74-72bb5" [7734d879-b974-42fd-9610-7e81ee6cbc13] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:46:56.079307  446736 system_pods.go:61] "storage-provisioner" [d4637a77-26ab-4013-a705-08317c00dd3b] Running
	I1030 19:46:56.079313  446736 system_pods.go:74] duration metric: took 9.785027ms to wait for pod list to return data ...
	I1030 19:46:56.079327  446736 node_conditions.go:102] verifying NodePressure condition ...
	I1030 19:46:56.082617  446736 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 19:46:56.082644  446736 node_conditions.go:123] node cpu capacity is 2
	I1030 19:46:56.082658  446736 node_conditions.go:105] duration metric: took 3.325744ms to run NodePressure ...
	I1030 19:46:56.082680  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:56.353123  446736 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1030 19:46:56.357714  446736 kubeadm.go:739] kubelet initialised
	I1030 19:46:56.357740  446736 kubeadm.go:740] duration metric: took 4.581883ms waiting for restarted kubelet to initialise ...
	I1030 19:46:56.357755  446736 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:46:56.362687  446736 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:56.367124  446736 pod_ready.go:98] node "no-preload-960512" hosting pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.367153  446736 pod_ready.go:82] duration metric: took 4.443081ms for pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace to be "Ready" ...
	E1030 19:46:56.367165  446736 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-960512" hosting pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.367180  446736 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:56.371747  446736 pod_ready.go:98] node "no-preload-960512" hosting pod "etcd-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.371774  446736 pod_ready.go:82] duration metric: took 4.580967ms for pod "etcd-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	E1030 19:46:56.371785  446736 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-960512" hosting pod "etcd-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.371794  446736 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:56.375687  446736 pod_ready.go:98] node "no-preload-960512" hosting pod "kube-apiserver-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.375704  446736 pod_ready.go:82] duration metric: took 3.901023ms for pod "kube-apiserver-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	E1030 19:46:56.375712  446736 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-960512" hosting pod "kube-apiserver-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.375718  446736 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:56.472995  446736 pod_ready.go:98] node "no-preload-960512" hosting pod "kube-controller-manager-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.473036  446736 pod_ready.go:82] duration metric: took 97.300344ms for pod "kube-controller-manager-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	E1030 19:46:56.473047  446736 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-960512" hosting pod "kube-controller-manager-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.473056  446736 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-fxqqc" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:56.873717  446736 pod_ready.go:98] node "no-preload-960512" hosting pod "kube-proxy-fxqqc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.873749  446736 pod_ready.go:82] duration metric: took 400.680615ms for pod "kube-proxy-fxqqc" in "kube-system" namespace to be "Ready" ...
	E1030 19:46:56.873759  446736 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-960512" hosting pod "kube-proxy-fxqqc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.873765  446736 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:57.273361  446736 pod_ready.go:98] node "no-preload-960512" hosting pod "kube-scheduler-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:57.273392  446736 pod_ready.go:82] duration metric: took 399.61983ms for pod "kube-scheduler-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	E1030 19:46:57.273405  446736 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-960512" hosting pod "kube-scheduler-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:57.273415  446736 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:57.674201  446736 pod_ready.go:98] node "no-preload-960512" hosting pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:57.674236  446736 pod_ready.go:82] duration metric: took 400.809663ms for pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace to be "Ready" ...
	E1030 19:46:57.674251  446736 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-960512" hosting pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:57.674260  446736 pod_ready.go:39] duration metric: took 1.31649331s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:46:57.674285  446736 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1030 19:46:57.687464  446736 ops.go:34] apiserver oom_adj: -16
	I1030 19:46:57.687489  446736 kubeadm.go:597] duration metric: took 10.820761471s to restartPrimaryControlPlane
	I1030 19:46:57.687498  446736 kubeadm.go:394] duration metric: took 10.873934509s to StartCluster
	I1030 19:46:57.687514  446736 settings.go:142] acquiring lock: {Name:mk3778f95b8c1775512fd8244c173f81fde2f8a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:46:57.687586  446736 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 19:46:57.689255  446736 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/kubeconfig: {Name:mkad9ac9beb9742fc1a420e6d4412db8b65962e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:46:57.689496  446736 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.132 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 19:46:57.689574  446736 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1030 19:46:57.689683  446736 addons.go:69] Setting storage-provisioner=true in profile "no-preload-960512"
	I1030 19:46:57.689706  446736 addons.go:234] Setting addon storage-provisioner=true in "no-preload-960512"
	I1030 19:46:57.689708  446736 addons.go:69] Setting metrics-server=true in profile "no-preload-960512"
	W1030 19:46:57.689719  446736 addons.go:243] addon storage-provisioner should already be in state true
	I1030 19:46:57.689727  446736 addons.go:234] Setting addon metrics-server=true in "no-preload-960512"
	W1030 19:46:57.689737  446736 addons.go:243] addon metrics-server should already be in state true
	I1030 19:46:57.689755  446736 host.go:66] Checking if "no-preload-960512" exists ...
	I1030 19:46:57.689791  446736 config.go:182] Loaded profile config "no-preload-960512": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:46:57.689761  446736 host.go:66] Checking if "no-preload-960512" exists ...
	I1030 19:46:57.689707  446736 addons.go:69] Setting default-storageclass=true in profile "no-preload-960512"
	I1030 19:46:57.689912  446736 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-960512"
	I1030 19:46:57.690245  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:57.690258  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:57.690264  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:57.690297  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:57.690303  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:57.690322  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:57.691365  446736 out.go:177] * Verifying Kubernetes components...
	I1030 19:46:57.692941  446736 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:46:57.727794  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43995
	I1030 19:46:57.727877  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44207
	I1030 19:46:57.728127  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45127
	I1030 19:46:57.728276  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:57.728414  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:57.728517  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:57.728861  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:57.728879  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:57.729032  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:57.729053  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:57.729056  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:57.729064  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:57.729350  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:57.729429  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:57.729452  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:57.730008  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:57.730051  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:57.730124  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:57.730362  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetState
	I1030 19:46:57.731104  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:57.734295  446736 addons.go:234] Setting addon default-storageclass=true in "no-preload-960512"
	W1030 19:46:57.734316  446736 addons.go:243] addon default-storageclass should already be in state true
	I1030 19:46:57.734349  446736 host.go:66] Checking if "no-preload-960512" exists ...
	I1030 19:46:57.734742  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:57.734810  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:57.747185  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35811
	I1030 19:46:57.747680  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:57.748340  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:57.748360  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:57.748795  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:57.749029  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetState
	I1030 19:46:57.749722  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34849
	I1030 19:46:57.750318  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:57.754616  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45369
	I1030 19:46:57.754666  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:57.755024  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:57.755052  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:57.755555  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:57.755672  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:57.757159  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetState
	I1030 19:46:57.757166  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:57.757184  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:57.757504  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:57.757804  446736 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:57.758045  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:57.758089  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:57.759001  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:57.759300  446736 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 19:46:57.759313  446736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1030 19:46:57.759327  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:57.762134  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:57.762557  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:57.762582  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:57.762740  446736 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1030 19:46:54.485910  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:56.981415  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:54.939168  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:56.940263  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:57.762828  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:57.763037  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:57.763192  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:57.763344  446736 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa Username:docker}
	I1030 19:46:57.763936  446736 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1030 19:46:57.763953  446736 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1030 19:46:57.763970  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:57.766410  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:57.766771  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:57.766795  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:57.767034  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:57.767212  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:57.767385  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:57.767522  446736 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa Username:docker}
	I1030 19:46:57.776037  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45193
	I1030 19:46:57.776386  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:57.776846  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:57.776864  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:57.777184  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:57.777339  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetState
	I1030 19:46:57.778829  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:57.779118  446736 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1030 19:46:57.779138  446736 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1030 19:46:57.779156  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:57.781325  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:57.781590  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:57.781615  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:57.781755  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:57.781895  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:57.781995  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:57.782088  446736 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa Username:docker}
	I1030 19:46:57.895549  446736 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:46:57.913030  446736 node_ready.go:35] waiting up to 6m0s for node "no-preload-960512" to be "Ready" ...
	I1030 19:46:58.008228  446736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 19:46:58.009206  446736 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1030 19:46:58.009222  446736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1030 19:46:58.034347  446736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1030 19:46:58.036620  446736 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1030 19:46:58.036646  446736 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1030 19:46:58.140489  446736 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1030 19:46:58.140522  446736 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1030 19:46:58.181145  446736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1030 19:46:59.403246  446736 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.368855241s)
	I1030 19:46:59.403317  446736 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.395049308s)
	I1030 19:46:59.403331  446736 main.go:141] libmachine: Making call to close driver server
	I1030 19:46:59.403340  446736 main.go:141] libmachine: (no-preload-960512) Calling .Close
	I1030 19:46:59.403356  446736 main.go:141] libmachine: Making call to close driver server
	I1030 19:46:59.403369  446736 main.go:141] libmachine: (no-preload-960512) Calling .Close
	I1030 19:46:59.403657  446736 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:46:59.403673  446736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:46:59.403681  446736 main.go:141] libmachine: Making call to close driver server
	I1030 19:46:59.403688  446736 main.go:141] libmachine: (no-preload-960512) Calling .Close
	I1030 19:46:59.403766  446736 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:46:59.403770  446736 main.go:141] libmachine: (no-preload-960512) DBG | Closing plugin on server side
	I1030 19:46:59.403778  446736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:46:59.403790  446736 main.go:141] libmachine: Making call to close driver server
	I1030 19:46:59.403796  446736 main.go:141] libmachine: (no-preload-960512) Calling .Close
	I1030 19:46:59.403939  446736 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:46:59.403954  446736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:46:59.404023  446736 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:46:59.404059  446736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:46:59.404071  446736 main.go:141] libmachine: (no-preload-960512) DBG | Closing plugin on server side
	I1030 19:46:59.411114  446736 main.go:141] libmachine: Making call to close driver server
	I1030 19:46:59.411136  446736 main.go:141] libmachine: (no-preload-960512) Calling .Close
	I1030 19:46:59.411365  446736 main.go:141] libmachine: (no-preload-960512) DBG | Closing plugin on server side
	I1030 19:46:59.411421  446736 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:46:59.411437  446736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:46:59.513065  446736 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.33186887s)
	I1030 19:46:59.513150  446736 main.go:141] libmachine: Making call to close driver server
	I1030 19:46:59.513168  446736 main.go:141] libmachine: (no-preload-960512) Calling .Close
	I1030 19:46:59.513455  446736 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:46:59.513481  446736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:46:59.513486  446736 main.go:141] libmachine: (no-preload-960512) DBG | Closing plugin on server side
	I1030 19:46:59.513491  446736 main.go:141] libmachine: Making call to close driver server
	I1030 19:46:59.513537  446736 main.go:141] libmachine: (no-preload-960512) Calling .Close
	I1030 19:46:59.513769  446736 main.go:141] libmachine: (no-preload-960512) DBG | Closing plugin on server side
	I1030 19:46:59.513797  446736 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:46:59.513809  446736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:46:59.513826  446736 addons.go:475] Verifying addon metrics-server=true in "no-preload-960512"
	I1030 19:46:59.516354  446736 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1030 19:46:59.517886  446736 addons.go:510] duration metric: took 1.828322965s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1030 19:46:59.916839  446736 node_ready.go:53] node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:57.143829  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:57.644245  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:58.144327  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:58.644684  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:59.144712  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:59.644799  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:00.144222  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:00.644111  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:01.144268  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:01.644631  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:58.982694  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:00.984014  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:59.439638  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:01.939460  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:02.416750  446736 node_ready.go:53] node "no-preload-960512" has status "Ready":"False"
	I1030 19:47:03.416443  446736 node_ready.go:49] node "no-preload-960512" has status "Ready":"True"
	I1030 19:47:03.416469  446736 node_ready.go:38] duration metric: took 5.503404181s for node "no-preload-960512" to be "Ready" ...
	I1030 19:47:03.416479  446736 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:47:03.422219  446736 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:02.143881  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:02.644208  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:03.144411  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:03.643948  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:04.144028  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:04.644179  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:05.144791  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:05.643983  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:06.143859  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:06.644436  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:03.481239  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:05.481271  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:07.482108  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:04.439288  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:06.439454  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:05.428589  446736 pod_ready.go:103] pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:07.430975  446736 pod_ready.go:103] pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:09.928214  446736 pod_ready.go:103] pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:07.144765  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:07.644280  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:08.144381  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:08.644099  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:09.144129  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:09.643864  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:10.144105  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:10.643752  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:11.144135  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:11.644172  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:09.982150  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:12.481265  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:08.939357  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:10.940087  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:10.430572  446736 pod_ready.go:93] pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace has status "Ready":"True"
	I1030 19:47:10.430598  446736 pod_ready.go:82] duration metric: took 7.008352985s for pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.430610  446736 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.436673  446736 pod_ready.go:93] pod "etcd-no-preload-960512" in "kube-system" namespace has status "Ready":"True"
	I1030 19:47:10.436699  446736 pod_ready.go:82] duration metric: took 6.082545ms for pod "etcd-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.436711  446736 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.442262  446736 pod_ready.go:93] pod "kube-apiserver-no-preload-960512" in "kube-system" namespace has status "Ready":"True"
	I1030 19:47:10.442282  446736 pod_ready.go:82] duration metric: took 5.563816ms for pod "kube-apiserver-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.442292  446736 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.446170  446736 pod_ready.go:93] pod "kube-controller-manager-no-preload-960512" in "kube-system" namespace has status "Ready":"True"
	I1030 19:47:10.446189  446736 pod_ready.go:82] duration metric: took 3.890123ms for pod "kube-controller-manager-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.446198  446736 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fxqqc" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.450190  446736 pod_ready.go:93] pod "kube-proxy-fxqqc" in "kube-system" namespace has status "Ready":"True"
	I1030 19:47:10.450216  446736 pod_ready.go:82] duration metric: took 4.011125ms for pod "kube-proxy-fxqqc" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.450226  446736 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.826537  446736 pod_ready.go:93] pod "kube-scheduler-no-preload-960512" in "kube-system" namespace has status "Ready":"True"
	I1030 19:47:10.826572  446736 pod_ready.go:82] duration metric: took 376.338504ms for pod "kube-scheduler-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.826587  446736 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:12.834756  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
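	The pod_ready.go lines above report each pod's Ready condition; a status of "Ready":"False" means the PodReady condition has not yet become True, which is why the metrics-server wait keeps looping. A minimal client-go sketch of the same check, assuming a hypothetical podIsReady helper and the default kubeconfig location (not minikube's actual pod_ready.go):

	// Hypothetical sketch: fetch a pod and report whether its PodReady
	// condition is True, the same condition the log lines above are polling.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func podIsReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumes the default kubeconfig (~/.kube/config) points at the cluster.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-6867b74b74-72bb5", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("ready:", podIsReady(pod))
	}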
	I1030 19:47:12.144391  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:12.644441  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:13.143916  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:13.644779  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:14.144680  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:14.644634  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:15.144050  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:15.644738  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:16.143957  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:16.144037  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:16.184282  447486 cri.go:89] found id: ""
	I1030 19:47:16.184310  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.184320  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:16.184327  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:16.184403  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:16.225359  447486 cri.go:89] found id: ""
	I1030 19:47:16.225388  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.225397  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:16.225403  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:16.225471  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:16.260591  447486 cri.go:89] found id: ""
	I1030 19:47:16.260625  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.260635  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:16.260641  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:16.260695  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:16.299562  447486 cri.go:89] found id: ""
	I1030 19:47:16.299591  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.299602  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:16.299609  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:16.299682  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:16.334753  447486 cri.go:89] found id: ""
	I1030 19:47:16.334781  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.334789  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:16.334795  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:16.334877  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:16.371588  447486 cri.go:89] found id: ""
	I1030 19:47:16.371619  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.371628  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:16.371634  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:16.371689  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:16.406668  447486 cri.go:89] found id: ""
	I1030 19:47:16.406699  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.406710  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:16.406718  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:16.406786  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:16.443050  447486 cri.go:89] found id: ""
	I1030 19:47:16.443081  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.443096  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:16.443109  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:16.443125  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:16.492898  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:16.492936  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:16.506310  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:16.506343  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:16.637629  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:16.637660  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:16.637677  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:16.709581  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:16.709621  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:14.481660  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:16.981807  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:13.438777  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:15.439457  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:17.939606  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:15.335280  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:17.833216  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:19.833320  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:19.253501  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:19.267200  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:19.267276  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:19.303608  447486 cri.go:89] found id: ""
	I1030 19:47:19.303641  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.303651  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:19.303658  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:19.303711  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:19.341311  447486 cri.go:89] found id: ""
	I1030 19:47:19.341343  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.341354  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:19.341363  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:19.341427  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:19.376949  447486 cri.go:89] found id: ""
	I1030 19:47:19.376977  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.376987  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:19.376996  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:19.377075  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:19.414164  447486 cri.go:89] found id: ""
	I1030 19:47:19.414197  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.414209  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:19.414218  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:19.414308  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:19.450637  447486 cri.go:89] found id: ""
	I1030 19:47:19.450671  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.450683  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:19.450692  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:19.450761  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:19.485315  447486 cri.go:89] found id: ""
	I1030 19:47:19.485345  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.485355  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:19.485364  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:19.485427  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:19.519873  447486 cri.go:89] found id: ""
	I1030 19:47:19.519901  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.519911  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:19.519919  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:19.519982  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:19.555168  447486 cri.go:89] found id: ""
	I1030 19:47:19.555198  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.555211  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:19.555223  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:19.555239  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:19.607227  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:19.607265  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:19.621465  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:19.621498  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:19.700837  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:19.700869  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:19.700882  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:19.774428  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:19.774468  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:18.982345  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:21.482165  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:19.940122  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:22.439405  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:22.333449  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:24.833942  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:22.314410  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:22.327998  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:22.328083  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:22.365583  447486 cri.go:89] found id: ""
	I1030 19:47:22.365611  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.365622  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:22.365631  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:22.365694  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:22.398964  447486 cri.go:89] found id: ""
	I1030 19:47:22.398996  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.399008  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:22.399016  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:22.399092  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:22.435132  447486 cri.go:89] found id: ""
	I1030 19:47:22.435169  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.435181  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:22.435188  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:22.435252  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:22.471510  447486 cri.go:89] found id: ""
	I1030 19:47:22.471544  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.471557  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:22.471574  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:22.471630  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:22.509611  447486 cri.go:89] found id: ""
	I1030 19:47:22.509639  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.509647  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:22.509653  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:22.509707  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:22.546502  447486 cri.go:89] found id: ""
	I1030 19:47:22.546539  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.546552  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:22.546560  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:22.546630  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:22.584560  447486 cri.go:89] found id: ""
	I1030 19:47:22.584593  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.584605  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:22.584613  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:22.584676  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:22.621421  447486 cri.go:89] found id: ""
	I1030 19:47:22.621461  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.621474  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:22.621486  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:22.621505  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:22.634998  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:22.635038  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:22.711002  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:22.711028  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:22.711047  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:22.790673  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:22.790712  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:22.831804  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:22.831851  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:25.386915  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:25.399854  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:25.399954  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:25.438346  447486 cri.go:89] found id: ""
	I1030 19:47:25.438381  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.438406  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:25.438416  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:25.438500  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:25.474888  447486 cri.go:89] found id: ""
	I1030 19:47:25.474915  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.474924  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:25.474931  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:25.474994  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:25.511925  447486 cri.go:89] found id: ""
	I1030 19:47:25.511955  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.511966  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:25.511973  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:25.512038  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:25.551027  447486 cri.go:89] found id: ""
	I1030 19:47:25.551058  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.551067  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:25.551073  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:25.551144  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:25.584736  447486 cri.go:89] found id: ""
	I1030 19:47:25.584764  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.584773  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:25.584779  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:25.584847  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:25.632765  447486 cri.go:89] found id: ""
	I1030 19:47:25.632798  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.632810  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:25.632818  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:25.632893  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:25.682501  447486 cri.go:89] found id: ""
	I1030 19:47:25.682528  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.682536  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:25.682543  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:25.682591  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:25.728306  447486 cri.go:89] found id: ""
	I1030 19:47:25.728340  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.728352  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:25.728365  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:25.728397  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:25.781908  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:25.781944  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:25.795864  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:25.795899  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:25.868350  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:25.868378  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:25.868392  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:25.944244  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:25.944277  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:23.981016  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:25.982186  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:24.942113  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:27.438568  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:27.333623  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:29.334460  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:28.488216  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:28.501481  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:28.501558  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:28.536808  447486 cri.go:89] found id: ""
	I1030 19:47:28.536838  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.536849  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:28.536857  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:28.536923  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:28.571819  447486 cri.go:89] found id: ""
	I1030 19:47:28.571855  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.571867  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:28.571885  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:28.571966  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:28.605532  447486 cri.go:89] found id: ""
	I1030 19:47:28.605571  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.605582  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:28.605610  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:28.605682  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:28.642108  447486 cri.go:89] found id: ""
	I1030 19:47:28.642140  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.642152  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:28.642159  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:28.642234  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:28.680036  447486 cri.go:89] found id: ""
	I1030 19:47:28.680065  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.680078  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:28.680086  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:28.680151  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:28.716135  447486 cri.go:89] found id: ""
	I1030 19:47:28.716162  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.716171  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:28.716177  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:28.716238  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:28.752364  447486 cri.go:89] found id: ""
	I1030 19:47:28.752398  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.752406  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:28.752413  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:28.752478  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:28.788396  447486 cri.go:89] found id: ""
	I1030 19:47:28.788434  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.788447  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:28.788461  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:28.788476  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:28.841560  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:28.841595  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:28.856134  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:28.856178  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:28.930463  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:28.930507  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:28.930527  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:29.013743  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:29.013795  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:31.557942  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:31.573562  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:31.573654  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:31.625349  447486 cri.go:89] found id: ""
	I1030 19:47:31.625378  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.625386  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:31.625392  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:31.625442  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:31.689536  447486 cri.go:89] found id: ""
	I1030 19:47:31.689566  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.689574  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:31.689581  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:31.689632  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:31.723758  447486 cri.go:89] found id: ""
	I1030 19:47:31.723794  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.723806  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:31.723814  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:31.723890  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:31.762671  447486 cri.go:89] found id: ""
	I1030 19:47:31.762698  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.762707  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:31.762713  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:31.762761  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:31.797658  447486 cri.go:89] found id: ""
	I1030 19:47:31.797686  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.797694  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:31.797702  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:31.797792  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:28.481158  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:30.981477  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:32.981593  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:29.439072  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:31.940019  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:31.833540  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:34.334678  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:31.832186  447486 cri.go:89] found id: ""
	I1030 19:47:31.832217  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.832228  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:31.832236  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:31.832298  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:31.866820  447486 cri.go:89] found id: ""
	I1030 19:47:31.866853  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.866866  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:31.866875  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:31.866937  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:31.901888  447486 cri.go:89] found id: ""
	I1030 19:47:31.901913  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.901922  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:31.901932  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:31.901944  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:31.992343  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:31.992380  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:32.030519  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:32.030559  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:32.084442  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:32.084478  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:32.098919  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:32.098954  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:32.171034  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:34.671243  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:34.685879  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:34.685972  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:34.720657  447486 cri.go:89] found id: ""
	I1030 19:47:34.720686  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.720694  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:34.720700  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:34.720757  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:34.759571  447486 cri.go:89] found id: ""
	I1030 19:47:34.759602  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.759615  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:34.759624  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:34.759685  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:34.795273  447486 cri.go:89] found id: ""
	I1030 19:47:34.795313  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.795322  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:34.795329  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:34.795450  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:34.828999  447486 cri.go:89] found id: ""
	I1030 19:47:34.829035  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.829047  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:34.829054  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:34.829158  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:34.865620  447486 cri.go:89] found id: ""
	I1030 19:47:34.865661  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.865674  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:34.865682  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:34.865753  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:34.900768  447486 cri.go:89] found id: ""
	I1030 19:47:34.900801  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.900812  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:34.900820  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:34.900889  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:34.945023  447486 cri.go:89] found id: ""
	I1030 19:47:34.945048  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.945057  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:34.945063  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:34.945118  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:34.980458  447486 cri.go:89] found id: ""
	I1030 19:47:34.980483  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.980492  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:34.980501  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:34.980514  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:35.052570  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:35.052597  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:35.052613  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:35.133825  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:35.133869  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:35.176016  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:35.176063  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:35.228866  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:35.228903  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:34.982702  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:37.481103  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:34.438712  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:36.938856  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:36.837275  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:39.332612  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:37.743408  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:37.757472  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:37.757547  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:37.794818  447486 cri.go:89] found id: ""
	I1030 19:47:37.794847  447486 logs.go:282] 0 containers: []
	W1030 19:47:37.794856  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:37.794862  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:37.794928  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:37.830025  447486 cri.go:89] found id: ""
	I1030 19:47:37.830064  447486 logs.go:282] 0 containers: []
	W1030 19:47:37.830077  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:37.830086  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:37.830150  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:37.864862  447486 cri.go:89] found id: ""
	I1030 19:47:37.864893  447486 logs.go:282] 0 containers: []
	W1030 19:47:37.864902  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:37.864908  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:37.864958  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:37.901650  447486 cri.go:89] found id: ""
	I1030 19:47:37.901699  447486 logs.go:282] 0 containers: []
	W1030 19:47:37.901713  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:37.901722  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:37.901780  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:37.935824  447486 cri.go:89] found id: ""
	I1030 19:47:37.935854  447486 logs.go:282] 0 containers: []
	W1030 19:47:37.935862  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:37.935868  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:37.935930  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:37.972774  447486 cri.go:89] found id: ""
	I1030 19:47:37.972805  447486 logs.go:282] 0 containers: []
	W1030 19:47:37.972813  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:37.972819  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:37.972868  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:38.007815  447486 cri.go:89] found id: ""
	I1030 19:47:38.007845  447486 logs.go:282] 0 containers: []
	W1030 19:47:38.007856  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:38.007864  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:38.007931  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:38.042525  447486 cri.go:89] found id: ""
	I1030 19:47:38.042559  447486 logs.go:282] 0 containers: []
	W1030 19:47:38.042571  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:38.042584  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:38.042600  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:38.122022  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:38.122048  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:38.122065  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:38.200534  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:38.200575  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:38.240118  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:38.240154  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:38.291936  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:38.291976  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:40.806105  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:40.821268  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:40.821343  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:40.857151  447486 cri.go:89] found id: ""
	I1030 19:47:40.857186  447486 logs.go:282] 0 containers: []
	W1030 19:47:40.857198  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:40.857207  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:40.857266  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:40.893603  447486 cri.go:89] found id: ""
	I1030 19:47:40.893639  447486 logs.go:282] 0 containers: []
	W1030 19:47:40.893648  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:40.893654  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:40.893720  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:40.935294  447486 cri.go:89] found id: ""
	I1030 19:47:40.935330  447486 logs.go:282] 0 containers: []
	W1030 19:47:40.935342  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:40.935349  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:40.935418  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:40.971509  447486 cri.go:89] found id: ""
	I1030 19:47:40.971536  447486 logs.go:282] 0 containers: []
	W1030 19:47:40.971544  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:40.971550  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:40.971610  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:41.009895  447486 cri.go:89] found id: ""
	I1030 19:47:41.009932  447486 logs.go:282] 0 containers: []
	W1030 19:47:41.009941  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:41.009948  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:41.010008  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:41.045170  447486 cri.go:89] found id: ""
	I1030 19:47:41.045208  447486 logs.go:282] 0 containers: []
	W1030 19:47:41.045221  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:41.045229  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:41.045288  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:41.077654  447486 cri.go:89] found id: ""
	I1030 19:47:41.077684  447486 logs.go:282] 0 containers: []
	W1030 19:47:41.077695  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:41.077704  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:41.077771  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:41.111509  447486 cri.go:89] found id: ""
	I1030 19:47:41.111543  447486 logs.go:282] 0 containers: []
	W1030 19:47:41.111552  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:41.111562  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:41.111574  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:41.164939  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:41.164976  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:41.178512  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:41.178589  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:41.258783  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:41.258813  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:41.258832  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:41.338192  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:41.338230  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:39.481210  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:41.481439  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:38.938987  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:40.941386  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:41.333705  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:43.833502  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:43.878155  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:43.892376  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:43.892452  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:43.930556  447486 cri.go:89] found id: ""
	I1030 19:47:43.930594  447486 logs.go:282] 0 containers: []
	W1030 19:47:43.930606  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:43.930614  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:43.930679  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:43.970588  447486 cri.go:89] found id: ""
	I1030 19:47:43.970619  447486 logs.go:282] 0 containers: []
	W1030 19:47:43.970630  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:43.970638  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:43.970706  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:44.005467  447486 cri.go:89] found id: ""
	I1030 19:47:44.005497  447486 logs.go:282] 0 containers: []
	W1030 19:47:44.005506  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:44.005512  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:44.005573  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:44.039126  447486 cri.go:89] found id: ""
	I1030 19:47:44.039164  447486 logs.go:282] 0 containers: []
	W1030 19:47:44.039173  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:44.039179  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:44.039239  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:44.072961  447486 cri.go:89] found id: ""
	I1030 19:47:44.072994  447486 logs.go:282] 0 containers: []
	W1030 19:47:44.073006  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:44.073014  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:44.073109  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:44.105864  447486 cri.go:89] found id: ""
	I1030 19:47:44.105891  447486 logs.go:282] 0 containers: []
	W1030 19:47:44.105900  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:44.105907  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:44.105956  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:44.138198  447486 cri.go:89] found id: ""
	I1030 19:47:44.138240  447486 logs.go:282] 0 containers: []
	W1030 19:47:44.138250  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:44.138264  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:44.138331  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:44.172529  447486 cri.go:89] found id: ""
	I1030 19:47:44.172558  447486 logs.go:282] 0 containers: []
	W1030 19:47:44.172567  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:44.172577  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:44.172594  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:44.248215  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:44.248254  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:44.286169  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:44.286202  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:44.341129  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:44.341167  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:44.354570  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:44.354597  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:44.427790  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:43.481483  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:45.482271  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:47.981312  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:43.440759  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:45.938783  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:47.940512  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:46.332448  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:48.333216  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:46.928728  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:46.943068  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:46.943154  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:46.978385  447486 cri.go:89] found id: ""
	I1030 19:47:46.978416  447486 logs.go:282] 0 containers: []
	W1030 19:47:46.978428  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:46.978436  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:46.978522  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:47.020413  447486 cri.go:89] found id: ""
	I1030 19:47:47.020457  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.020469  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:47.020476  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:47.020547  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:47.061492  447486 cri.go:89] found id: ""
	I1030 19:47:47.061526  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.061538  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:47.061547  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:47.061611  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:47.097621  447486 cri.go:89] found id: ""
	I1030 19:47:47.097659  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.097670  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:47.097679  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:47.097739  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:47.131740  447486 cri.go:89] found id: ""
	I1030 19:47:47.131769  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.131779  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:47.131785  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:47.131856  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:47.167623  447486 cri.go:89] found id: ""
	I1030 19:47:47.167661  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.167674  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:47.167682  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:47.167746  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:47.202299  447486 cri.go:89] found id: ""
	I1030 19:47:47.202328  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.202337  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:47.202344  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:47.202401  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:47.236652  447486 cri.go:89] found id: ""
	I1030 19:47:47.236686  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.236695  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:47.236704  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:47.236716  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:47.289700  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:47.289740  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:47.304929  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:47.304964  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:47.374811  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:47.374842  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:47.374858  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:47.449161  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:47.449196  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:49.989730  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:50.002741  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:50.002821  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:50.037602  447486 cri.go:89] found id: ""
	I1030 19:47:50.037636  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.037647  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:50.037655  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:50.037724  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:50.071346  447486 cri.go:89] found id: ""
	I1030 19:47:50.071383  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.071395  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:50.071405  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:50.071473  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:50.106657  447486 cri.go:89] found id: ""
	I1030 19:47:50.106698  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.106711  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:50.106719  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:50.106783  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:50.140974  447486 cri.go:89] found id: ""
	I1030 19:47:50.141012  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.141025  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:50.141032  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:50.141105  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:50.177715  447486 cri.go:89] found id: ""
	I1030 19:47:50.177748  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.177756  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:50.177763  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:50.177824  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:50.212234  447486 cri.go:89] found id: ""
	I1030 19:47:50.212263  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.212272  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:50.212278  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:50.212337  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:50.250791  447486 cri.go:89] found id: ""
	I1030 19:47:50.250826  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.250835  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:50.250842  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:50.250908  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:50.288575  447486 cri.go:89] found id: ""
	I1030 19:47:50.288607  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.288615  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:50.288628  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:50.288643  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:50.358015  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:50.358039  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:50.358054  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:50.433194  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:50.433235  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:50.473485  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:50.473519  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:50.523581  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:50.523618  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:49.981614  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:51.982079  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:50.439717  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:52.940170  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:50.333498  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:52.832848  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:54.833689  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:53.038393  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:53.052835  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:53.052910  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:53.088797  447486 cri.go:89] found id: ""
	I1030 19:47:53.088828  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.088837  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:53.088843  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:53.088897  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:53.124627  447486 cri.go:89] found id: ""
	I1030 19:47:53.124659  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.124668  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:53.124674  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:53.124724  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:53.159127  447486 cri.go:89] found id: ""
	I1030 19:47:53.159163  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.159175  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:53.159183  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:53.159244  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:53.191770  447486 cri.go:89] found id: ""
	I1030 19:47:53.191801  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.191810  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:53.191817  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:53.191885  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:53.227727  447486 cri.go:89] found id: ""
	I1030 19:47:53.227761  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.227774  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:53.227781  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:53.227842  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:53.262937  447486 cri.go:89] found id: ""
	I1030 19:47:53.262969  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.262981  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:53.262989  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:53.263060  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:53.296070  447486 cri.go:89] found id: ""
	I1030 19:47:53.296113  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.296124  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:53.296133  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:53.296197  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:53.332628  447486 cri.go:89] found id: ""
	I1030 19:47:53.332663  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.332674  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:53.332687  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:53.332702  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:53.385004  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:53.385046  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:53.400139  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:53.400185  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:53.477792  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:53.477826  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:53.477858  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:53.553145  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:53.553186  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:56.094454  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:56.107827  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:56.107900  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:56.141701  447486 cri.go:89] found id: ""
	I1030 19:47:56.141739  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.141751  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:56.141763  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:56.141831  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:56.179973  447486 cri.go:89] found id: ""
	I1030 19:47:56.180003  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.180016  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:56.180023  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:56.180099  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:56.220456  447486 cri.go:89] found id: ""
	I1030 19:47:56.220486  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.220496  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:56.220503  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:56.220578  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:56.259699  447486 cri.go:89] found id: ""
	I1030 19:47:56.259727  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.259736  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:56.259741  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:56.259791  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:56.302726  447486 cri.go:89] found id: ""
	I1030 19:47:56.302762  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.302775  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:56.302783  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:56.302850  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:56.339791  447486 cri.go:89] found id: ""
	I1030 19:47:56.339819  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.339828  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:56.339834  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:56.339889  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:56.381291  447486 cri.go:89] found id: ""
	I1030 19:47:56.381325  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.381337  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:56.381345  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:56.381401  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:56.417150  447486 cri.go:89] found id: ""
	I1030 19:47:56.417182  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.417194  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:56.417207  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:56.417227  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:56.466963  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:56.467005  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:56.481528  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:56.481557  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:56.554843  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:56.554872  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:56.554887  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:56.635798  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:56.635846  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:54.480601  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:56.481475  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:55.439618  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:57.940438  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:57.333560  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:59.337314  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:59.179829  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:59.193083  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:59.193160  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:59.231253  447486 cri.go:89] found id: ""
	I1030 19:47:59.231288  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.231302  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:59.231311  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:59.231382  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:59.265982  447486 cri.go:89] found id: ""
	I1030 19:47:59.266013  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.266022  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:59.266028  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:59.266090  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:59.303724  447486 cri.go:89] found id: ""
	I1030 19:47:59.303761  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.303773  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:59.303781  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:59.303848  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:59.342137  447486 cri.go:89] found id: ""
	I1030 19:47:59.342163  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.342172  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:59.342180  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:59.342246  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:59.382652  447486 cri.go:89] found id: ""
	I1030 19:47:59.382684  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.382693  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:59.382700  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:59.382761  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:59.422428  447486 cri.go:89] found id: ""
	I1030 19:47:59.422454  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.422463  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:59.422469  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:59.422539  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:59.464047  447486 cri.go:89] found id: ""
	I1030 19:47:59.464079  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.464089  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:59.464095  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:59.464146  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:59.500658  447486 cri.go:89] found id: ""
	I1030 19:47:59.500693  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.500705  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:59.500716  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:59.500732  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:59.554634  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:59.554679  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:59.567956  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:59.567986  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:59.646305  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:59.646332  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:59.646349  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:59.730008  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:59.730052  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:58.486516  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:00.982184  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:00.439220  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:02.439945  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:01.832883  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:03.834027  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:02.274141  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:02.287246  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:02.287320  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:02.322166  447486 cri.go:89] found id: ""
	I1030 19:48:02.322320  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.322336  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:02.322346  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:02.322421  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:02.358101  447486 cri.go:89] found id: ""
	I1030 19:48:02.358131  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.358140  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:02.358146  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:02.358209  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:02.394812  447486 cri.go:89] found id: ""
	I1030 19:48:02.394898  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.394915  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:02.394924  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:02.394990  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:02.429128  447486 cri.go:89] found id: ""
	I1030 19:48:02.429165  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.429177  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:02.429186  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:02.429358  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:02.465878  447486 cri.go:89] found id: ""
	I1030 19:48:02.465907  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.465915  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:02.465921  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:02.465973  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:02.502758  447486 cri.go:89] found id: ""
	I1030 19:48:02.502794  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.502805  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:02.502813  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:02.502879  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:02.540111  447486 cri.go:89] found id: ""
	I1030 19:48:02.540142  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.540152  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:02.540158  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:02.540222  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:02.574728  447486 cri.go:89] found id: ""
	I1030 19:48:02.574762  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.574774  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:02.574787  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:02.574804  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:02.613333  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:02.613374  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:02.664970  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:02.665013  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:02.679594  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:02.679626  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:02.744184  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:02.744208  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:02.744222  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:05.326826  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:05.340166  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:05.340232  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:05.376742  447486 cri.go:89] found id: ""
	I1030 19:48:05.376774  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.376789  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:05.376795  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:05.376865  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:05.413981  447486 cri.go:89] found id: ""
	I1030 19:48:05.414026  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.414039  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:05.414047  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:05.414121  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:05.449811  447486 cri.go:89] found id: ""
	I1030 19:48:05.449842  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.449854  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:05.449862  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:05.449925  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:05.502576  447486 cri.go:89] found id: ""
	I1030 19:48:05.502610  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.502622  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:05.502630  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:05.502721  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:05.536747  447486 cri.go:89] found id: ""
	I1030 19:48:05.536778  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.536787  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:05.536793  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:05.536857  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:05.570308  447486 cri.go:89] found id: ""
	I1030 19:48:05.570335  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.570344  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:05.570353  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:05.570420  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:05.605006  447486 cri.go:89] found id: ""
	I1030 19:48:05.605037  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.605048  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:05.605054  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:05.605109  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:05.638651  447486 cri.go:89] found id: ""
	I1030 19:48:05.638681  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.638693  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:05.638705  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:05.638720  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:05.690734  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:05.690769  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:05.704561  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:05.704588  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:05.779426  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:05.779448  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:05.779471  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:05.866320  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:05.866355  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:03.481614  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:05.482428  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:07.981875  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:04.939485  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:07.438925  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:06.334094  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:08.834525  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:08.409454  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:08.423687  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:08.423767  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:08.463554  447486 cri.go:89] found id: ""
	I1030 19:48:08.463581  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.463591  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:08.463597  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:08.463654  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:08.500159  447486 cri.go:89] found id: ""
	I1030 19:48:08.500186  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.500195  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:08.500200  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:08.500253  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:08.535670  447486 cri.go:89] found id: ""
	I1030 19:48:08.535701  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.535710  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:08.535717  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:08.535785  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:08.572921  447486 cri.go:89] found id: ""
	I1030 19:48:08.572958  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.572968  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:08.572975  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:08.573052  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:08.610873  447486 cri.go:89] found id: ""
	I1030 19:48:08.610908  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.610918  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:08.610924  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:08.610978  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:08.645430  447486 cri.go:89] found id: ""
	I1030 19:48:08.645458  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.645466  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:08.645475  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:08.645528  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:08.681212  447486 cri.go:89] found id: ""
	I1030 19:48:08.681246  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.681258  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:08.681266  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:08.681332  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:08.716619  447486 cri.go:89] found id: ""
	I1030 19:48:08.716651  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.716661  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:08.716671  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:08.716682  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:08.794090  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:08.794134  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:08.833209  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:08.833251  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:08.884781  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:08.884817  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:08.898556  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:08.898586  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:08.967713  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:11.468230  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:11.482593  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:11.482660  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:11.518191  447486 cri.go:89] found id: ""
	I1030 19:48:11.518225  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.518235  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:11.518242  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:11.518295  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:11.557199  447486 cri.go:89] found id: ""
	I1030 19:48:11.557229  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.557237  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:11.557252  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:11.557323  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:11.595605  447486 cri.go:89] found id: ""
	I1030 19:48:11.595638  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.595650  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:11.595664  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:11.595732  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:11.634253  447486 cri.go:89] found id: ""
	I1030 19:48:11.634281  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.634295  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:11.634301  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:11.634358  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:11.671138  447486 cri.go:89] found id: ""
	I1030 19:48:11.671167  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.671176  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:11.671183  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:11.671238  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:11.707202  447486 cri.go:89] found id: ""
	I1030 19:48:11.707228  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.707237  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:11.707243  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:11.707302  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:11.745514  447486 cri.go:89] found id: ""
	I1030 19:48:11.745549  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.745561  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:11.745570  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:11.745640  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:11.781403  447486 cri.go:89] found id: ""
	I1030 19:48:11.781438  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.781449  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:11.781458  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:11.781471  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:10.486349  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:12.980881  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:09.440261  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:11.938439  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:11.332911  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:13.334382  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:11.832934  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:11.832972  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:11.853498  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:11.853545  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:11.949365  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:11.949389  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:11.949405  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:12.033776  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:12.033823  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:14.579536  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:14.593497  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:14.593579  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:14.627853  447486 cri.go:89] found id: ""
	I1030 19:48:14.627886  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.627895  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:14.627902  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:14.627953  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:14.662356  447486 cri.go:89] found id: ""
	I1030 19:48:14.662386  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.662398  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:14.662406  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:14.662481  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:14.699334  447486 cri.go:89] found id: ""
	I1030 19:48:14.699370  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.699382  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:14.699390  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:14.699457  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:14.733884  447486 cri.go:89] found id: ""
	I1030 19:48:14.733924  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.733937  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:14.733946  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:14.734025  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:14.775208  447486 cri.go:89] found id: ""
	I1030 19:48:14.775240  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.775249  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:14.775256  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:14.775315  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:14.809663  447486 cri.go:89] found id: ""
	I1030 19:48:14.809695  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.809704  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:14.809711  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:14.809778  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:14.844963  447486 cri.go:89] found id: ""
	I1030 19:48:14.844996  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.845006  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:14.845014  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:14.845084  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:14.881236  447486 cri.go:89] found id: ""
	I1030 19:48:14.881273  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.881283  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:14.881293  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:14.881305  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:14.933792  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:14.933830  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:14.948038  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:14.948065  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:15.023497  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:15.023519  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:15.023532  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:15.105682  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:15.105741  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:14.980949  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:16.981063  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:13.940399  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:16.438545  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:15.834158  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:18.332452  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:17.646238  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:17.665366  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:17.665455  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:17.707729  447486 cri.go:89] found id: ""
	I1030 19:48:17.707783  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.707796  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:17.707805  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:17.707883  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:17.759922  447486 cri.go:89] found id: ""
	I1030 19:48:17.759959  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.759972  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:17.759980  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:17.760049  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:17.807635  447486 cri.go:89] found id: ""
	I1030 19:48:17.807671  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.807683  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:17.807695  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:17.807770  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:17.844205  447486 cri.go:89] found id: ""
	I1030 19:48:17.844236  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.844247  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:17.844255  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:17.844326  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:17.879079  447486 cri.go:89] found id: ""
	I1030 19:48:17.879113  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.879125  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:17.879134  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:17.879202  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:17.916548  447486 cri.go:89] found id: ""
	I1030 19:48:17.916584  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.916594  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:17.916601  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:17.916654  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:17.950597  447486 cri.go:89] found id: ""
	I1030 19:48:17.950626  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.950635  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:17.950640  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:17.950695  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:17.985924  447486 cri.go:89] found id: ""
	I1030 19:48:17.985957  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.985968  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:17.985980  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:17.985996  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:18.066211  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:18.066250  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:18.107228  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:18.107279  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:18.157508  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:18.157543  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:18.172208  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:18.172243  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:18.248100  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:20.748681  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:20.763369  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:20.763445  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:20.804288  447486 cri.go:89] found id: ""
	I1030 19:48:20.804323  447486 logs.go:282] 0 containers: []
	W1030 19:48:20.804336  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:20.804343  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:20.804410  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:20.838925  447486 cri.go:89] found id: ""
	I1030 19:48:20.838964  447486 logs.go:282] 0 containers: []
	W1030 19:48:20.838973  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:20.838979  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:20.839030  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:20.873560  447486 cri.go:89] found id: ""
	I1030 19:48:20.873596  447486 logs.go:282] 0 containers: []
	W1030 19:48:20.873608  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:20.873617  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:20.873681  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:20.908670  447486 cri.go:89] found id: ""
	I1030 19:48:20.908705  447486 logs.go:282] 0 containers: []
	W1030 19:48:20.908716  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:20.908723  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:20.908791  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:20.945901  447486 cri.go:89] found id: ""
	I1030 19:48:20.945929  447486 logs.go:282] 0 containers: []
	W1030 19:48:20.945937  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:20.945943  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:20.945991  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:20.980184  447486 cri.go:89] found id: ""
	I1030 19:48:20.980216  447486 logs.go:282] 0 containers: []
	W1030 19:48:20.980227  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:20.980236  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:20.980299  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:21.024243  447486 cri.go:89] found id: ""
	I1030 19:48:21.024272  447486 logs.go:282] 0 containers: []
	W1030 19:48:21.024284  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:21.024293  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:21.024366  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:21.063315  447486 cri.go:89] found id: ""
	I1030 19:48:21.063348  447486 logs.go:282] 0 containers: []
	W1030 19:48:21.063358  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:21.063370  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:21.063387  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:21.130434  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:21.130463  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:21.130480  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:21.209067  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:21.209107  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:21.251005  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:21.251035  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:21.303365  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:21.303402  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:18.981952  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:20.982372  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:18.439921  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:20.939869  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:22.940058  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:20.333700  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:22.833845  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:24.834560  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:23.817700  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:23.831060  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:23.831133  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:23.864299  447486 cri.go:89] found id: ""
	I1030 19:48:23.864334  447486 logs.go:282] 0 containers: []
	W1030 19:48:23.864346  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:23.864354  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:23.864420  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:23.900815  447486 cri.go:89] found id: ""
	I1030 19:48:23.900844  447486 logs.go:282] 0 containers: []
	W1030 19:48:23.900854  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:23.900869  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:23.900929  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:23.939888  447486 cri.go:89] found id: ""
	I1030 19:48:23.939917  447486 logs.go:282] 0 containers: []
	W1030 19:48:23.939928  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:23.939936  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:23.939999  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:23.975359  447486 cri.go:89] found id: ""
	I1030 19:48:23.975387  447486 logs.go:282] 0 containers: []
	W1030 19:48:23.975395  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:23.975401  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:23.975452  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:24.012779  447486 cri.go:89] found id: ""
	I1030 19:48:24.012819  447486 logs.go:282] 0 containers: []
	W1030 19:48:24.012832  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:24.012840  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:24.012908  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:24.048853  447486 cri.go:89] found id: ""
	I1030 19:48:24.048890  447486 logs.go:282] 0 containers: []
	W1030 19:48:24.048903  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:24.048912  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:24.048979  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:24.084744  447486 cri.go:89] found id: ""
	I1030 19:48:24.084784  447486 logs.go:282] 0 containers: []
	W1030 19:48:24.084797  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:24.084806  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:24.084860  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:24.121719  447486 cri.go:89] found id: ""
	I1030 19:48:24.121757  447486 logs.go:282] 0 containers: []
	W1030 19:48:24.121767  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:24.121777  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:24.121791  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:24.178691  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:24.178733  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:24.192885  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:24.192916  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:24.268771  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:24.268815  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:24.268832  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:24.349663  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:24.349699  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:23.481516  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:25.481700  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:27.481886  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:24.940106  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:26.940309  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:27.334165  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:29.834162  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:26.887325  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:26.900480  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:26.900558  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:26.936157  447486 cri.go:89] found id: ""
	I1030 19:48:26.936188  447486 logs.go:282] 0 containers: []
	W1030 19:48:26.936200  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:26.936207  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:26.936278  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:26.975580  447486 cri.go:89] found id: ""
	I1030 19:48:26.975615  447486 logs.go:282] 0 containers: []
	W1030 19:48:26.975626  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:26.975633  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:26.975705  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:27.010549  447486 cri.go:89] found id: ""
	I1030 19:48:27.010579  447486 logs.go:282] 0 containers: []
	W1030 19:48:27.010592  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:27.010600  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:27.010659  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:27.047505  447486 cri.go:89] found id: ""
	I1030 19:48:27.047541  447486 logs.go:282] 0 containers: []
	W1030 19:48:27.047553  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:27.047561  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:27.047628  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:27.083379  447486 cri.go:89] found id: ""
	I1030 19:48:27.083409  447486 logs.go:282] 0 containers: []
	W1030 19:48:27.083420  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:27.083429  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:27.083492  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:27.117912  447486 cri.go:89] found id: ""
	I1030 19:48:27.117954  447486 logs.go:282] 0 containers: []
	W1030 19:48:27.117967  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:27.117976  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:27.118049  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:27.151721  447486 cri.go:89] found id: ""
	I1030 19:48:27.151749  447486 logs.go:282] 0 containers: []
	W1030 19:48:27.151758  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:27.151765  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:27.151817  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:27.188940  447486 cri.go:89] found id: ""
	I1030 19:48:27.188981  447486 logs.go:282] 0 containers: []
	W1030 19:48:27.188989  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:27.188999  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:27.189011  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:27.243926  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:27.243960  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:27.258702  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:27.258731  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:27.326983  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:27.327023  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:27.327041  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:27.410761  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:27.410808  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:29.953219  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:29.967972  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:29.968078  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:30.003975  447486 cri.go:89] found id: ""
	I1030 19:48:30.004004  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.004014  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:30.004023  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:30.004097  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:30.041732  447486 cri.go:89] found id: ""
	I1030 19:48:30.041768  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.041780  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:30.041787  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:30.041863  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:30.078262  447486 cri.go:89] found id: ""
	I1030 19:48:30.078297  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.078308  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:30.078315  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:30.078379  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:30.116100  447486 cri.go:89] found id: ""
	I1030 19:48:30.116137  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.116149  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:30.116157  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:30.116229  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:30.150925  447486 cri.go:89] found id: ""
	I1030 19:48:30.150953  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.150964  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:30.150972  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:30.151041  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:30.192188  447486 cri.go:89] found id: ""
	I1030 19:48:30.192219  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.192230  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:30.192237  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:30.192314  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:30.231144  447486 cri.go:89] found id: ""
	I1030 19:48:30.231180  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.231192  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:30.231200  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:30.231277  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:30.271198  447486 cri.go:89] found id: ""
	I1030 19:48:30.271228  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.271242  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:30.271265  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:30.271277  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:30.322750  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:30.322792  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:30.337745  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:30.337774  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:30.417198  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:30.417224  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:30.417240  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:30.503327  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:30.503364  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:29.982893  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:32.482051  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:29.440509  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:31.939517  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:32.333571  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:34.833482  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:33.047719  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:33.062330  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:33.062395  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:33.101049  447486 cri.go:89] found id: ""
	I1030 19:48:33.101088  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.101101  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:33.101108  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:33.101175  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:33.135236  447486 cri.go:89] found id: ""
	I1030 19:48:33.135268  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.135279  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:33.135286  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:33.135357  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:33.169279  447486 cri.go:89] found id: ""
	I1030 19:48:33.169314  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.169325  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:33.169333  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:33.169401  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:33.203336  447486 cri.go:89] found id: ""
	I1030 19:48:33.203380  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.203392  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:33.203401  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:33.203470  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:33.238223  447486 cri.go:89] found id: ""
	I1030 19:48:33.238258  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.238270  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:33.238279  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:33.238345  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:33.272891  447486 cri.go:89] found id: ""
	I1030 19:48:33.272925  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.272937  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:33.272946  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:33.273014  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:33.312452  447486 cri.go:89] found id: ""
	I1030 19:48:33.312480  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.312489  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:33.312496  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:33.312547  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:33.349041  447486 cri.go:89] found id: ""
	I1030 19:48:33.349076  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.349091  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:33.349104  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:33.349130  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:33.430888  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:33.430940  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:33.469414  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:33.469444  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:33.518989  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:33.519022  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:33.532656  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:33.532690  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:33.605896  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:36.106207  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:36.120564  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:36.120646  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:36.156854  447486 cri.go:89] found id: ""
	I1030 19:48:36.156887  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.156900  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:36.156909  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:36.156988  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:36.195027  447486 cri.go:89] found id: ""
	I1030 19:48:36.195059  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.195072  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:36.195080  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:36.195150  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:36.235639  447486 cri.go:89] found id: ""
	I1030 19:48:36.235672  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.235683  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:36.235692  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:36.235758  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:36.281659  447486 cri.go:89] found id: ""
	I1030 19:48:36.281693  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.281702  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:36.281709  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:36.281762  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:36.315427  447486 cri.go:89] found id: ""
	I1030 19:48:36.315454  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.315463  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:36.315469  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:36.315531  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:36.353084  447486 cri.go:89] found id: ""
	I1030 19:48:36.353110  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.353120  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:36.353126  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:36.353197  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:36.388497  447486 cri.go:89] found id: ""
	I1030 19:48:36.388533  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.388545  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:36.388553  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:36.388616  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:36.423625  447486 cri.go:89] found id: ""
	I1030 19:48:36.423658  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.423667  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:36.423676  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:36.423691  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:36.476722  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:36.476757  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:36.490669  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:36.490700  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:36.558587  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:36.558621  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:36.558639  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:36.635606  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:36.635654  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:34.482414  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:36.981552  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:34.439796  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:36.938335  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:37.333231  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:39.333707  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:39.174007  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:39.187709  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:39.187786  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:39.226131  447486 cri.go:89] found id: ""
	I1030 19:48:39.226165  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.226177  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:39.226185  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:39.226265  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:39.265963  447486 cri.go:89] found id: ""
	I1030 19:48:39.266003  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.266016  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:39.266024  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:39.266092  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:39.302586  447486 cri.go:89] found id: ""
	I1030 19:48:39.302624  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.302637  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:39.302645  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:39.302710  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:39.347869  447486 cri.go:89] found id: ""
	I1030 19:48:39.347903  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.347916  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:39.347924  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:39.347994  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:39.384252  447486 cri.go:89] found id: ""
	I1030 19:48:39.384280  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.384288  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:39.384294  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:39.384347  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:39.418847  447486 cri.go:89] found id: ""
	I1030 19:48:39.418876  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.418885  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:39.418891  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:39.418950  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:39.458408  447486 cri.go:89] found id: ""
	I1030 19:48:39.458454  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.458467  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:39.458480  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:39.458567  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:39.493889  447486 cri.go:89] found id: ""
	I1030 19:48:39.493923  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.493934  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:39.493946  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:39.493959  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:39.548692  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:39.548746  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:39.562083  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:39.562110  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:39.633822  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:39.633845  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:39.633857  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:39.711765  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:39.711814  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:39.482010  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:41.981380  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:38.939254  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:40.940318  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:41.832456  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:43.832780  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:42.254337  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:42.268137  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:42.268202  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:42.303383  447486 cri.go:89] found id: ""
	I1030 19:48:42.303418  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.303428  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:42.303434  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:42.303501  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:42.349405  447486 cri.go:89] found id: ""
	I1030 19:48:42.349437  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.349447  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:42.349453  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:42.349504  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:42.384317  447486 cri.go:89] found id: ""
	I1030 19:48:42.384353  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.384363  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:42.384369  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:42.384424  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:42.418712  447486 cri.go:89] found id: ""
	I1030 19:48:42.418759  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.418768  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:42.418775  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:42.418833  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:42.454234  447486 cri.go:89] found id: ""
	I1030 19:48:42.454270  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.454280  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:42.454288  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:42.454362  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:42.488813  447486 cri.go:89] found id: ""
	I1030 19:48:42.488845  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.488855  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:42.488863  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:42.488929  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:42.525883  447486 cri.go:89] found id: ""
	I1030 19:48:42.525917  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.525929  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:42.525938  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:42.526006  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:42.561197  447486 cri.go:89] found id: ""
	I1030 19:48:42.561233  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.561246  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:42.561259  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:42.561275  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:42.599818  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:42.599854  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:42.654341  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:42.654382  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:42.668163  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:42.668188  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:42.739630  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:42.739659  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:42.739671  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:45.316154  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:45.330372  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:45.330454  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:45.369093  447486 cri.go:89] found id: ""
	I1030 19:48:45.369125  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.369135  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:45.369141  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:45.369192  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:45.407681  447486 cri.go:89] found id: ""
	I1030 19:48:45.407715  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.407726  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:45.407732  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:45.407787  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:45.444445  447486 cri.go:89] found id: ""
	I1030 19:48:45.444474  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.444482  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:45.444488  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:45.444539  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:45.481538  447486 cri.go:89] found id: ""
	I1030 19:48:45.481570  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.481583  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:45.481591  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:45.481654  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:45.515088  447486 cri.go:89] found id: ""
	I1030 19:48:45.515123  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.515132  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:45.515139  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:45.515195  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:45.550085  447486 cri.go:89] found id: ""
	I1030 19:48:45.550133  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.550145  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:45.550152  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:45.550214  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:45.583950  447486 cri.go:89] found id: ""
	I1030 19:48:45.583985  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.583999  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:45.584008  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:45.584082  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:45.617320  447486 cri.go:89] found id: ""
	I1030 19:48:45.617349  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.617358  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:45.617369  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:45.617389  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:45.668792  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:45.668833  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:45.683144  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:45.683178  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:45.758707  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:45.758732  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:45.758744  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:45.833807  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:45.833837  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:43.982806  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:46.480452  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:43.440702  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:45.938267  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:47.938396  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:45.833319  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:48.332420  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:48.374096  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:48.387812  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:48.387903  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:48.426958  447486 cri.go:89] found id: ""
	I1030 19:48:48.426987  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.426996  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:48.427002  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:48.427051  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:48.462216  447486 cri.go:89] found id: ""
	I1030 19:48:48.462249  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.462260  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:48.462268  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:48.462336  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:48.495666  447486 cri.go:89] found id: ""
	I1030 19:48:48.495699  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.495709  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:48.495716  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:48.495798  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:48.530653  447486 cri.go:89] found id: ""
	I1030 19:48:48.530686  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.530698  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:48.530709  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:48.530777  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:48.564788  447486 cri.go:89] found id: ""
	I1030 19:48:48.564826  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.564838  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:48.564846  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:48.564921  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:48.600735  447486 cri.go:89] found id: ""
	I1030 19:48:48.600772  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.600784  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:48.600793  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:48.600863  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:48.637063  447486 cri.go:89] found id: ""
	I1030 19:48:48.637095  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.637107  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:48.637115  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:48.637182  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:48.673279  447486 cri.go:89] found id: ""
	I1030 19:48:48.673314  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.673334  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:48.673347  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:48.673362  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:48.724239  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:48.724280  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:48.738390  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:48.738425  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:48.812130  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:48.812155  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:48.812171  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:48.896253  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:48.896298  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:51.441155  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:51.454675  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:51.454751  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:51.490464  447486 cri.go:89] found id: ""
	I1030 19:48:51.490511  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.490523  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:51.490532  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:51.490600  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:51.525364  447486 cri.go:89] found id: ""
	I1030 19:48:51.525399  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.525411  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:51.525419  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:51.525485  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:51.559028  447486 cri.go:89] found id: ""
	I1030 19:48:51.559062  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.559071  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:51.559078  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:51.559139  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:51.595188  447486 cri.go:89] found id: ""
	I1030 19:48:51.595217  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.595225  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:51.595231  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:51.595300  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:51.628987  447486 cri.go:89] found id: ""
	I1030 19:48:51.629023  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.629039  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:51.629047  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:51.629119  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:51.663257  447486 cri.go:89] found id: ""
	I1030 19:48:51.663286  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.663295  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:51.663303  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:51.663368  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:51.712562  447486 cri.go:89] found id: ""
	I1030 19:48:51.712600  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.712613  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:51.712622  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:51.712684  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:51.761730  447486 cri.go:89] found id: ""
	I1030 19:48:51.761760  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.761769  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:51.761779  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:51.761794  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:51.775595  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:51.775624  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:48:48.481851  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:50.980723  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:52.982177  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:49.939273  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:51.939972  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:50.333451  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:52.333773  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:54.835087  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	W1030 19:48:51.849120  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:51.849144  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:51.849157  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:51.931364  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:51.931403  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:51.971195  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:51.971229  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:54.525136  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:54.539137  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:54.539227  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:54.574281  447486 cri.go:89] found id: ""
	I1030 19:48:54.574316  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.574339  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:54.574348  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:54.574420  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:54.611109  447486 cri.go:89] found id: ""
	I1030 19:48:54.611149  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.611161  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:54.611170  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:54.611230  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:54.648396  447486 cri.go:89] found id: ""
	I1030 19:48:54.648428  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.648439  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:54.648447  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:54.648510  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:54.683834  447486 cri.go:89] found id: ""
	I1030 19:48:54.683871  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.683884  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:54.683892  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:54.683954  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:54.717391  447486 cri.go:89] found id: ""
	I1030 19:48:54.717421  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.717430  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:54.717436  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:54.717495  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:54.753783  447486 cri.go:89] found id: ""
	I1030 19:48:54.753812  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.753821  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:54.753827  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:54.753878  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:54.788231  447486 cri.go:89] found id: ""
	I1030 19:48:54.788270  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.788282  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:54.788291  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:54.788359  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:54.823949  447486 cri.go:89] found id: ""
	I1030 19:48:54.823989  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.824001  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:54.824014  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:54.824052  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:54.838936  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:54.838967  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:54.911785  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:54.911812  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:54.911825  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:54.993268  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:54.993302  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:55.032557  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:55.032588  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:55.481330  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:57.482183  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:53.940343  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:56.439870  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:57.333262  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:59.333560  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:57.588726  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:57.603010  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:57.603085  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:57.636499  447486 cri.go:89] found id: ""
	I1030 19:48:57.636531  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.636542  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:57.636551  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:57.636624  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:57.671698  447486 cri.go:89] found id: ""
	I1030 19:48:57.671728  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.671739  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:57.671748  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:57.671815  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:57.707387  447486 cri.go:89] found id: ""
	I1030 19:48:57.707414  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.707422  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:57.707431  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:57.707482  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:57.745404  447486 cri.go:89] found id: ""
	I1030 19:48:57.745432  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.745440  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:57.745447  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:57.745507  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:57.784874  447486 cri.go:89] found id: ""
	I1030 19:48:57.784903  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.784912  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:57.784919  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:57.784984  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:57.824663  447486 cri.go:89] found id: ""
	I1030 19:48:57.824697  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.824707  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:57.824713  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:57.824773  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:57.862542  447486 cri.go:89] found id: ""
	I1030 19:48:57.862581  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.862593  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:57.862601  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:57.862669  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:57.897901  447486 cri.go:89] found id: ""
	I1030 19:48:57.897935  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.897947  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:57.897959  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:57.897974  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:57.951898  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:57.951936  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:57.966282  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:57.966327  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:58.035515  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:58.035546  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:58.035562  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:58.114825  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:58.114876  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:00.705537  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:00.719589  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:00.719672  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:00.762299  447486 cri.go:89] found id: ""
	I1030 19:49:00.762330  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.762338  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:00.762356  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:00.762438  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:00.802228  447486 cri.go:89] found id: ""
	I1030 19:49:00.802259  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.802268  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:00.802275  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:00.802345  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:00.836531  447486 cri.go:89] found id: ""
	I1030 19:49:00.836557  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.836565  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:00.836572  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:00.836630  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:00.869332  447486 cri.go:89] found id: ""
	I1030 19:49:00.869360  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.869369  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:00.869375  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:00.869437  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:00.904643  447486 cri.go:89] found id: ""
	I1030 19:49:00.904675  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.904684  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:00.904691  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:00.904768  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:00.939020  447486 cri.go:89] found id: ""
	I1030 19:49:00.939050  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.939061  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:00.939068  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:00.939142  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:00.974586  447486 cri.go:89] found id: ""
	I1030 19:49:00.974625  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.974638  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:00.974646  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:00.974707  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:01.009337  447486 cri.go:89] found id: ""
	I1030 19:49:01.009375  447486 logs.go:282] 0 containers: []
	W1030 19:49:01.009386  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:01.009399  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:01.009416  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:01.067087  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:01.067125  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:01.081681  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:01.081713  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:01.153057  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:01.153082  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:01.153096  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:01.236113  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:01.236153  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:59.981252  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:01.981799  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:58.938430  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:00.940905  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:01.333854  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:03.334325  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:03.774056  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:03.788395  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:03.788482  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:03.823847  447486 cri.go:89] found id: ""
	I1030 19:49:03.823880  447486 logs.go:282] 0 containers: []
	W1030 19:49:03.823892  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:03.823900  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:03.823973  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:03.864776  447486 cri.go:89] found id: ""
	I1030 19:49:03.864807  447486 logs.go:282] 0 containers: []
	W1030 19:49:03.864819  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:03.864827  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:03.864890  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:03.912516  447486 cri.go:89] found id: ""
	I1030 19:49:03.912572  447486 logs.go:282] 0 containers: []
	W1030 19:49:03.912585  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:03.912593  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:03.912660  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:03.962459  447486 cri.go:89] found id: ""
	I1030 19:49:03.962509  447486 logs.go:282] 0 containers: []
	W1030 19:49:03.962521  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:03.962530  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:03.962602  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:04.019107  447486 cri.go:89] found id: ""
	I1030 19:49:04.019143  447486 logs.go:282] 0 containers: []
	W1030 19:49:04.019152  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:04.019159  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:04.019217  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:04.054016  447486 cri.go:89] found id: ""
	I1030 19:49:04.054047  447486 logs.go:282] 0 containers: []
	W1030 19:49:04.054056  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:04.054063  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:04.054140  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:04.089907  447486 cri.go:89] found id: ""
	I1030 19:49:04.089938  447486 logs.go:282] 0 containers: []
	W1030 19:49:04.089948  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:04.089955  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:04.090007  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:04.128081  447486 cri.go:89] found id: ""
	I1030 19:49:04.128110  447486 logs.go:282] 0 containers: []
	W1030 19:49:04.128118  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:04.128128  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:04.128142  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:04.182419  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:04.182462  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:04.196909  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:04.196941  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:04.267267  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:04.267298  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:04.267317  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:04.346826  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:04.346876  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:03.984259  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:06.481362  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:03.438786  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:05.938707  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:07.939642  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:05.334541  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:07.834233  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:06.887266  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:06.902462  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:06.902554  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:06.938850  447486 cri.go:89] found id: ""
	I1030 19:49:06.938880  447486 logs.go:282] 0 containers: []
	W1030 19:49:06.938891  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:06.938899  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:06.938961  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:06.983284  447486 cri.go:89] found id: ""
	I1030 19:49:06.983315  447486 logs.go:282] 0 containers: []
	W1030 19:49:06.983330  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:06.983339  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:06.983406  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:07.016332  447486 cri.go:89] found id: ""
	I1030 19:49:07.016359  447486 logs.go:282] 0 containers: []
	W1030 19:49:07.016369  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:07.016375  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:07.016428  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:07.051425  447486 cri.go:89] found id: ""
	I1030 19:49:07.051459  447486 logs.go:282] 0 containers: []
	W1030 19:49:07.051471  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:07.051480  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:07.051550  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:07.083396  447486 cri.go:89] found id: ""
	I1030 19:49:07.083429  447486 logs.go:282] 0 containers: []
	W1030 19:49:07.083437  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:07.083444  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:07.083507  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:07.116616  447486 cri.go:89] found id: ""
	I1030 19:49:07.116646  447486 logs.go:282] 0 containers: []
	W1030 19:49:07.116654  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:07.116661  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:07.116728  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:07.149219  447486 cri.go:89] found id: ""
	I1030 19:49:07.149251  447486 logs.go:282] 0 containers: []
	W1030 19:49:07.149259  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:07.149265  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:07.149318  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:07.188404  447486 cri.go:89] found id: ""
	I1030 19:49:07.188435  447486 logs.go:282] 0 containers: []
	W1030 19:49:07.188444  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:07.188454  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:07.188468  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:07.247600  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:07.247640  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:07.262196  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:07.262231  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:07.332998  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:07.333031  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:07.333048  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:07.415322  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:07.415367  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:09.958278  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:09.972983  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:09.973068  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:10.016768  447486 cri.go:89] found id: ""
	I1030 19:49:10.016801  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.016810  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:10.016818  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:10.016885  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:10.052958  447486 cri.go:89] found id: ""
	I1030 19:49:10.052992  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.053002  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:10.053009  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:10.053063  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:10.089062  447486 cri.go:89] found id: ""
	I1030 19:49:10.089094  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.089105  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:10.089120  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:10.089196  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:10.126084  447486 cri.go:89] found id: ""
	I1030 19:49:10.126114  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.126123  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:10.126130  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:10.126182  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:10.171670  447486 cri.go:89] found id: ""
	I1030 19:49:10.171702  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.171712  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:10.171720  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:10.171785  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:10.210243  447486 cri.go:89] found id: ""
	I1030 19:49:10.210285  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.210293  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:10.210300  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:10.210366  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:10.253012  447486 cri.go:89] found id: ""
	I1030 19:49:10.253056  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.253069  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:10.253078  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:10.253155  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:10.287948  447486 cri.go:89] found id: ""
	I1030 19:49:10.287999  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.288009  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:10.288021  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:10.288036  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:10.341362  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:10.341405  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:10.355769  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:10.355798  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:10.429469  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:10.429500  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:10.429518  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:10.509812  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:10.509851  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:08.488059  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:10.981606  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:12.982128  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:10.438903  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:12.939592  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:10.334087  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:12.336238  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:14.833365  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:13.053064  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:13.069063  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:13.069136  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:13.108457  447486 cri.go:89] found id: ""
	I1030 19:49:13.108492  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.108505  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:13.108513  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:13.108582  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:13.146481  447486 cri.go:89] found id: ""
	I1030 19:49:13.146523  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.146534  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:13.146542  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:13.146595  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:13.187088  447486 cri.go:89] found id: ""
	I1030 19:49:13.187118  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.187129  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:13.187137  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:13.187200  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:13.226913  447486 cri.go:89] found id: ""
	I1030 19:49:13.226948  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.226960  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:13.226968  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:13.227038  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:13.262632  447486 cri.go:89] found id: ""
	I1030 19:49:13.262661  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.262669  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:13.262676  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:13.262726  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:13.296877  447486 cri.go:89] found id: ""
	I1030 19:49:13.296906  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.296915  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:13.296922  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:13.296983  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:13.334907  447486 cri.go:89] found id: ""
	I1030 19:49:13.334939  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.334949  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:13.334956  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:13.335021  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:13.369386  447486 cri.go:89] found id: ""
	I1030 19:49:13.369430  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.369443  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:13.369456  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:13.369472  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:13.423095  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:13.423130  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:13.437039  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:13.437067  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:13.512619  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:13.512648  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:13.512663  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:13.596982  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:13.597023  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:16.135623  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:16.150407  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:16.150502  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:16.188771  447486 cri.go:89] found id: ""
	I1030 19:49:16.188811  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.188823  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:16.188832  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:16.188907  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:16.221554  447486 cri.go:89] found id: ""
	I1030 19:49:16.221589  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.221598  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:16.221604  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:16.221655  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:16.255567  447486 cri.go:89] found id: ""
	I1030 19:49:16.255595  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.255609  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:16.255616  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:16.255667  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:16.289820  447486 cri.go:89] found id: ""
	I1030 19:49:16.289855  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.289866  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:16.289874  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:16.289935  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:16.324415  447486 cri.go:89] found id: ""
	I1030 19:49:16.324449  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.324464  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:16.324471  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:16.324533  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:16.360789  447486 cri.go:89] found id: ""
	I1030 19:49:16.360825  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.360848  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:16.360856  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:16.360922  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:16.395066  447486 cri.go:89] found id: ""
	I1030 19:49:16.395093  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.395101  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:16.395107  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:16.395158  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:16.429220  447486 cri.go:89] found id: ""
	I1030 19:49:16.429261  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.429273  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:16.429286  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:16.429305  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:16.481209  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:16.481250  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:16.495353  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:16.495383  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:16.563979  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:16.564006  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:16.564022  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:16.645166  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:16.645205  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:15.481438  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:17.482846  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:15.440389  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:17.938724  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:16.833433  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:19.335773  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:19.185478  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:19.199270  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:19.199337  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:19.242426  447486 cri.go:89] found id: ""
	I1030 19:49:19.242455  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.242464  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:19.242474  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:19.242556  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:19.284061  447486 cri.go:89] found id: ""
	I1030 19:49:19.284092  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.284102  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:19.284108  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:19.284178  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:19.317373  447486 cri.go:89] found id: ""
	I1030 19:49:19.317407  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.317420  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:19.317428  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:19.317491  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:19.354222  447486 cri.go:89] found id: ""
	I1030 19:49:19.354250  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.354259  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:19.354267  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:19.354329  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:19.392948  447486 cri.go:89] found id: ""
	I1030 19:49:19.392980  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.392989  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:19.392996  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:19.393053  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:19.438023  447486 cri.go:89] found id: ""
	I1030 19:49:19.438055  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.438066  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:19.438074  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:19.438144  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:19.472179  447486 cri.go:89] found id: ""
	I1030 19:49:19.472208  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.472218  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:19.472226  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:19.472283  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:19.507164  447486 cri.go:89] found id: ""
	I1030 19:49:19.507195  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.507203  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:19.507213  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:19.507226  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:19.520898  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:19.520935  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:19.592204  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:19.592234  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:19.592263  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:19.668994  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:19.669045  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:19.707208  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:19.707240  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:19.981085  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:21.981344  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:19.939994  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:22.439696  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:21.833592  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:24.333379  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:22.263035  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:22.276999  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:22.277089  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:22.310969  447486 cri.go:89] found id: ""
	I1030 19:49:22.311006  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.311017  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:22.311026  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:22.311097  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:22.346282  447486 cri.go:89] found id: ""
	I1030 19:49:22.346311  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.346324  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:22.346332  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:22.346401  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:22.384324  447486 cri.go:89] found id: ""
	I1030 19:49:22.384354  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.384372  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:22.384381  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:22.384441  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:22.419465  447486 cri.go:89] found id: ""
	I1030 19:49:22.419498  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.419509  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:22.419518  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:22.419582  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:22.456161  447486 cri.go:89] found id: ""
	I1030 19:49:22.456196  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.456204  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:22.456211  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:22.456280  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:22.489075  447486 cri.go:89] found id: ""
	I1030 19:49:22.489102  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.489110  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:22.489119  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:22.489181  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:22.521752  447486 cri.go:89] found id: ""
	I1030 19:49:22.521780  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.521789  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:22.521796  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:22.521847  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:22.554946  447486 cri.go:89] found id: ""
	I1030 19:49:22.554985  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.554997  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:22.555010  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:22.555025  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:22.567877  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:22.567909  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:22.640062  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:22.640094  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:22.640110  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:22.714946  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:22.714985  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:22.755560  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:22.755595  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:25.306379  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:25.320883  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:25.320963  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:25.356737  447486 cri.go:89] found id: ""
	I1030 19:49:25.356771  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.356782  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:25.356791  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:25.356856  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:25.393371  447486 cri.go:89] found id: ""
	I1030 19:49:25.393409  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.393420  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:25.393429  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:25.393500  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:25.428379  447486 cri.go:89] found id: ""
	I1030 19:49:25.428411  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.428425  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:25.428433  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:25.428505  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:25.473516  447486 cri.go:89] found id: ""
	I1030 19:49:25.473551  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.473562  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:25.473572  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:25.473649  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:25.512508  447486 cri.go:89] found id: ""
	I1030 19:49:25.512535  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.512544  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:25.512550  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:25.512611  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:25.547646  447486 cri.go:89] found id: ""
	I1030 19:49:25.547691  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.547705  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:25.547713  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:25.547782  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:25.582314  447486 cri.go:89] found id: ""
	I1030 19:49:25.582347  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.582356  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:25.582364  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:25.582415  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:25.617305  447486 cri.go:89] found id: ""
	I1030 19:49:25.617343  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.617354  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:25.617367  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:25.617383  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:25.658245  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:25.658283  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:25.710559  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:25.710598  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:25.724961  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:25.724995  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:25.796252  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:25.796283  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:25.796300  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:23.984899  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:25.985999  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:24.939599  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:27.440032  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:26.334407  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:28.334588  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:28.374633  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:28.389468  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:28.389549  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:28.425747  447486 cri.go:89] found id: ""
	I1030 19:49:28.425780  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.425792  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:28.425800  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:28.425956  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:28.465221  447486 cri.go:89] found id: ""
	I1030 19:49:28.465258  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.465291  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:28.465303  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:28.465371  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:28.504184  447486 cri.go:89] found id: ""
	I1030 19:49:28.504217  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.504230  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:28.504240  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:28.504295  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:28.536198  447486 cri.go:89] found id: ""
	I1030 19:49:28.536234  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.536247  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:28.536255  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:28.536340  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:28.572194  447486 cri.go:89] found id: ""
	I1030 19:49:28.572228  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.572240  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:28.572248  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:28.572312  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:28.608794  447486 cri.go:89] found id: ""
	I1030 19:49:28.608826  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.608838  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:28.608846  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:28.608914  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:28.641664  447486 cri.go:89] found id: ""
	I1030 19:49:28.641698  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.641706  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:28.641714  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:28.641768  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:28.675756  447486 cri.go:89] found id: ""
	I1030 19:49:28.675790  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.675800  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:28.675812  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:28.675829  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:28.690203  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:28.690237  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:28.755647  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:28.755674  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:28.755690  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:28.837116  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:28.837149  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:28.877195  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:28.877232  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:31.428091  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:31.442537  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:31.442619  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:31.479911  447486 cri.go:89] found id: ""
	I1030 19:49:31.479942  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.479953  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:31.479961  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:31.480029  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:31.517015  447486 cri.go:89] found id: ""
	I1030 19:49:31.517042  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.517050  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:31.517056  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:31.517107  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:31.549858  447486 cri.go:89] found id: ""
	I1030 19:49:31.549891  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.549900  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:31.549907  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:31.549971  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:31.583490  447486 cri.go:89] found id: ""
	I1030 19:49:31.583524  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.583536  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:31.583551  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:31.583618  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:31.618270  447486 cri.go:89] found id: ""
	I1030 19:49:31.618308  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.618320  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:31.618328  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:31.618397  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:31.655416  447486 cri.go:89] found id: ""
	I1030 19:49:31.655448  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.655460  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:31.655468  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:31.655530  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:31.689708  447486 cri.go:89] found id: ""
	I1030 19:49:31.689740  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.689751  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:31.689759  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:31.689823  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:31.724179  447486 cri.go:89] found id: ""
	I1030 19:49:31.724208  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.724219  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:31.724233  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:31.724249  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:31.774900  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:31.774939  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:31.788606  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:31.788635  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:49:28.481673  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:30.980999  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:32.982429  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:29.938506  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:31.940276  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:30.834322  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:33.333091  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	W1030 19:49:31.861360  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:31.861385  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:31.861398  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:31.935856  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:31.935896  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:34.477313  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:34.491530  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:34.491597  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:34.525105  447486 cri.go:89] found id: ""
	I1030 19:49:34.525136  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.525145  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:34.525153  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:34.525215  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:34.560449  447486 cri.go:89] found id: ""
	I1030 19:49:34.560483  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.560495  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:34.560503  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:34.560558  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:34.595278  447486 cri.go:89] found id: ""
	I1030 19:49:34.595325  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.595335  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:34.595342  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:34.595395  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:34.628486  447486 cri.go:89] found id: ""
	I1030 19:49:34.628521  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.628533  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:34.628542  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:34.628614  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:34.663410  447486 cri.go:89] found id: ""
	I1030 19:49:34.663438  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.663448  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:34.663456  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:34.663520  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:34.697053  447486 cri.go:89] found id: ""
	I1030 19:49:34.697086  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.697099  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:34.697107  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:34.697178  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:34.730910  447486 cri.go:89] found id: ""
	I1030 19:49:34.730943  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.730955  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:34.730963  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:34.731034  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:34.765725  447486 cri.go:89] found id: ""
	I1030 19:49:34.765762  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.765774  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:34.765786  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:34.765807  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:34.802750  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:34.802786  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:34.853576  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:34.853614  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:34.868102  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:34.868139  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:34.939985  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:34.940015  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:34.940027  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:35.480658  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:37.481068  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:34.442576  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:36.940088  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:35.333400  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:37.334425  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:39.833330  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:37.516479  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:37.529386  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:37.529453  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:37.565889  447486 cri.go:89] found id: ""
	I1030 19:49:37.565923  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.565936  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:37.565945  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:37.566007  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:37.598771  447486 cri.go:89] found id: ""
	I1030 19:49:37.598801  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.598811  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:37.598817  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:37.598869  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:37.632678  447486 cri.go:89] found id: ""
	I1030 19:49:37.632705  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.632714  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:37.632735  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:37.632795  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:37.666642  447486 cri.go:89] found id: ""
	I1030 19:49:37.666673  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.666682  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:37.666688  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:37.666748  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:37.701203  447486 cri.go:89] found id: ""
	I1030 19:49:37.701233  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.701242  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:37.701249  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:37.701324  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:37.735614  447486 cri.go:89] found id: ""
	I1030 19:49:37.735649  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.735661  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:37.735669  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:37.735738  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:37.771381  447486 cri.go:89] found id: ""
	I1030 19:49:37.771418  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.771430  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:37.771439  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:37.771501  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:37.807870  447486 cri.go:89] found id: ""
	I1030 19:49:37.807908  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.807922  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:37.807935  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:37.807952  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:37.860334  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:37.860367  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:37.874340  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:37.874371  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:37.952874  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:37.952903  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:37.952916  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:38.045318  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:38.045356  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:40.591278  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:40.604970  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:40.605050  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:40.639839  447486 cri.go:89] found id: ""
	I1030 19:49:40.639869  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.639880  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:40.639889  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:40.639952  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:40.674046  447486 cri.go:89] found id: ""
	I1030 19:49:40.674077  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.674087  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:40.674093  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:40.674164  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:40.710759  447486 cri.go:89] found id: ""
	I1030 19:49:40.710794  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.710806  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:40.710815  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:40.710880  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:40.752439  447486 cri.go:89] found id: ""
	I1030 19:49:40.752471  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.752484  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:40.752493  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:40.752548  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:40.787985  447486 cri.go:89] found id: ""
	I1030 19:49:40.788021  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.788034  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:40.788042  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:40.788102  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:40.829282  447486 cri.go:89] found id: ""
	I1030 19:49:40.829320  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.829333  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:40.829341  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:40.829409  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:40.863911  447486 cri.go:89] found id: ""
	I1030 19:49:40.863944  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.863953  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:40.863959  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:40.864026  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:40.901239  447486 cri.go:89] found id: ""
	I1030 19:49:40.901275  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.901287  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:40.901300  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:40.901321  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:40.955283  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:40.955323  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:40.968733  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:40.968766  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:41.040213  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:41.040242  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:41.040256  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:41.125992  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:41.126035  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:39.481593  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:41.483403  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:39.441009  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:41.939182  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:41.834082  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:44.332428  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:43.667949  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:43.681633  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:43.681705  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:43.725038  447486 cri.go:89] found id: ""
	I1030 19:49:43.725076  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.725085  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:43.725091  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:43.725149  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:43.761438  447486 cri.go:89] found id: ""
	I1030 19:49:43.761473  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.761486  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:43.761494  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:43.761566  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:43.795299  447486 cri.go:89] found id: ""
	I1030 19:49:43.795335  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.795347  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:43.795355  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:43.795431  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:43.830545  447486 cri.go:89] found id: ""
	I1030 19:49:43.830582  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.830594  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:43.830601  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:43.830670  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:43.867632  447486 cri.go:89] found id: ""
	I1030 19:49:43.867664  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.867676  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:43.867684  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:43.867753  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:43.901315  447486 cri.go:89] found id: ""
	I1030 19:49:43.901346  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.901355  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:43.901361  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:43.901412  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:43.934928  447486 cri.go:89] found id: ""
	I1030 19:49:43.934963  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.934975  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:43.934983  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:43.935048  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:43.975407  447486 cri.go:89] found id: ""
	I1030 19:49:43.975441  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.975451  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:43.975472  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:43.975497  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:44.019281  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:44.019310  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:44.072363  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:44.072402  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:44.085508  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:44.085538  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:44.159634  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:44.159666  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:44.159682  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:46.739662  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:46.753190  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:46.753252  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:46.790167  447486 cri.go:89] found id: ""
	I1030 19:49:46.790202  447486 logs.go:282] 0 containers: []
	W1030 19:49:46.790211  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:46.790217  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:46.790272  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:43.988689  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:46.481139  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:43.939246  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:46.438847  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:46.333066  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:48.335463  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:46.828187  447486 cri.go:89] found id: ""
	I1030 19:49:46.828221  447486 logs.go:282] 0 containers: []
	W1030 19:49:46.828230  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:46.828237  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:46.828305  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:46.865499  447486 cri.go:89] found id: ""
	I1030 19:49:46.865539  447486 logs.go:282] 0 containers: []
	W1030 19:49:46.865551  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:46.865559  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:46.865612  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:46.899591  447486 cri.go:89] found id: ""
	I1030 19:49:46.899616  447486 logs.go:282] 0 containers: []
	W1030 19:49:46.899625  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:46.899632  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:46.899681  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:46.934818  447486 cri.go:89] found id: ""
	I1030 19:49:46.934850  447486 logs.go:282] 0 containers: []
	W1030 19:49:46.934860  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:46.934868  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:46.934933  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:46.971298  447486 cri.go:89] found id: ""
	I1030 19:49:46.971328  447486 logs.go:282] 0 containers: []
	W1030 19:49:46.971340  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:46.971349  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:46.971418  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:47.010783  447486 cri.go:89] found id: ""
	I1030 19:49:47.010814  447486 logs.go:282] 0 containers: []
	W1030 19:49:47.010825  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:47.010832  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:47.010896  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:47.044343  447486 cri.go:89] found id: ""
	I1030 19:49:47.044380  447486 logs.go:282] 0 containers: []
	W1030 19:49:47.044392  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:47.044405  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:47.044421  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:47.094425  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:47.094459  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:47.110339  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:47.110368  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:47.183262  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:47.183290  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:47.183305  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:47.262611  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:47.262651  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:49.808195  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:49.821889  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:49.821963  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:49.857296  447486 cri.go:89] found id: ""
	I1030 19:49:49.857339  447486 logs.go:282] 0 containers: []
	W1030 19:49:49.857351  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:49.857359  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:49.857413  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:49.892614  447486 cri.go:89] found id: ""
	I1030 19:49:49.892648  447486 logs.go:282] 0 containers: []
	W1030 19:49:49.892660  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:49.892668  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:49.892732  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:49.929835  447486 cri.go:89] found id: ""
	I1030 19:49:49.929862  447486 logs.go:282] 0 containers: []
	W1030 19:49:49.929871  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:49.929878  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:49.929940  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:49.965341  447486 cri.go:89] found id: ""
	I1030 19:49:49.965371  447486 logs.go:282] 0 containers: []
	W1030 19:49:49.965379  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:49.965392  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:49.965449  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:50.000134  447486 cri.go:89] found id: ""
	I1030 19:49:50.000165  447486 logs.go:282] 0 containers: []
	W1030 19:49:50.000177  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:50.000188  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:50.000259  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:50.033848  447486 cri.go:89] found id: ""
	I1030 19:49:50.033876  447486 logs.go:282] 0 containers: []
	W1030 19:49:50.033885  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:50.033891  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:50.033943  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:50.073315  447486 cri.go:89] found id: ""
	I1030 19:49:50.073344  447486 logs.go:282] 0 containers: []
	W1030 19:49:50.073354  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:50.073360  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:50.073421  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:50.114232  447486 cri.go:89] found id: ""
	I1030 19:49:50.114266  447486 logs.go:282] 0 containers: []
	W1030 19:49:50.114277  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:50.114290  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:50.114311  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:50.185407  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:50.185434  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:50.185448  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:50.270447  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:50.270494  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:50.308825  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:50.308855  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:50.363376  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:50.363417  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:48.982027  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:51.482972  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:48.439801  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:50.939120  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:50.833062  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:52.833132  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:54.834352  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:52.878475  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:52.892013  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:52.892088  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:52.928085  447486 cri.go:89] found id: ""
	I1030 19:49:52.928117  447486 logs.go:282] 0 containers: []
	W1030 19:49:52.928126  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:52.928132  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:52.928185  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:52.963377  447486 cri.go:89] found id: ""
	I1030 19:49:52.963413  447486 logs.go:282] 0 containers: []
	W1030 19:49:52.963426  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:52.963434  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:52.963493  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:53.000799  447486 cri.go:89] found id: ""
	I1030 19:49:53.000825  447486 logs.go:282] 0 containers: []
	W1030 19:49:53.000834  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:53.000840  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:53.000912  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:53.037429  447486 cri.go:89] found id: ""
	I1030 19:49:53.037463  447486 logs.go:282] 0 containers: []
	W1030 19:49:53.037472  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:53.037478  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:53.037534  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:53.072392  447486 cri.go:89] found id: ""
	I1030 19:49:53.072425  447486 logs.go:282] 0 containers: []
	W1030 19:49:53.072433  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:53.072446  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:53.072520  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:53.108925  447486 cri.go:89] found id: ""
	I1030 19:49:53.108957  447486 logs.go:282] 0 containers: []
	W1030 19:49:53.108970  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:53.108978  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:53.109050  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:53.145409  447486 cri.go:89] found id: ""
	I1030 19:49:53.145445  447486 logs.go:282] 0 containers: []
	W1030 19:49:53.145457  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:53.145466  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:53.145536  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:53.180756  447486 cri.go:89] found id: ""
	I1030 19:49:53.180784  447486 logs.go:282] 0 containers: []
	W1030 19:49:53.180793  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:53.180803  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:53.180817  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:53.234960  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:53.235010  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:53.249224  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:53.249255  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:53.313223  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:53.313245  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:53.313264  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:53.399715  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:53.399758  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:55.944332  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:55.961546  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:55.961616  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:56.020603  447486 cri.go:89] found id: ""
	I1030 19:49:56.020634  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.020647  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:56.020654  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:56.020725  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:56.065134  447486 cri.go:89] found id: ""
	I1030 19:49:56.065162  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.065170  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:56.065176  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:56.065239  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:56.101358  447486 cri.go:89] found id: ""
	I1030 19:49:56.101386  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.101396  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:56.101405  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:56.101473  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:56.135762  447486 cri.go:89] found id: ""
	I1030 19:49:56.135795  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.135805  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:56.135811  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:56.135863  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:56.171336  447486 cri.go:89] found id: ""
	I1030 19:49:56.171371  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.171383  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:56.171391  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:56.171461  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:56.205643  447486 cri.go:89] found id: ""
	I1030 19:49:56.205674  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.205685  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:56.205693  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:56.205759  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:56.240853  447486 cri.go:89] found id: ""
	I1030 19:49:56.240885  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.240894  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:56.240901  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:56.240973  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:56.276577  447486 cri.go:89] found id: ""
	I1030 19:49:56.276612  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.276623  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:56.276636  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:56.276651  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:56.328180  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:56.328220  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:56.341895  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:56.341923  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:56.414492  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:56.414523  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:56.414540  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:56.498439  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:56.498498  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:53.980916  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:55.983077  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:53.439070  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:54.940107  446887 pod_ready.go:82] duration metric: took 4m0.007533629s for pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace to be "Ready" ...
	E1030 19:49:54.940137  446887 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1030 19:49:54.940149  446887 pod_ready.go:39] duration metric: took 4m6.552777198s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:49:54.940170  446887 api_server.go:52] waiting for apiserver process to appear ...
	I1030 19:49:54.940206  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:54.940264  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:54.992682  446887 cri.go:89] found id: "549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f"
	I1030 19:49:54.992715  446887 cri.go:89] found id: ""
	I1030 19:49:54.992727  446887 logs.go:282] 1 containers: [549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f]
	I1030 19:49:54.992790  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:54.997251  446887 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:54.997313  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:55.034504  446887 cri.go:89] found id: "a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5"
	I1030 19:49:55.034542  446887 cri.go:89] found id: ""
	I1030 19:49:55.034552  446887 logs.go:282] 1 containers: [a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5]
	I1030 19:49:55.034616  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:55.039551  446887 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:55.039624  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:55.083294  446887 cri.go:89] found id: "87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2"
	I1030 19:49:55.083326  446887 cri.go:89] found id: ""
	I1030 19:49:55.083336  446887 logs.go:282] 1 containers: [87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2]
	I1030 19:49:55.083407  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:55.087866  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:55.087932  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:55.125250  446887 cri.go:89] found id: "0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf"
	I1030 19:49:55.125353  446887 cri.go:89] found id: ""
	I1030 19:49:55.125372  446887 logs.go:282] 1 containers: [0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf]
	I1030 19:49:55.125446  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:55.130688  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:55.130747  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:55.168792  446887 cri.go:89] found id: "2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6"
	I1030 19:49:55.168814  446887 cri.go:89] found id: ""
	I1030 19:49:55.168822  446887 logs.go:282] 1 containers: [2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6]
	I1030 19:49:55.168877  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:55.173360  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:55.173424  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:55.209566  446887 cri.go:89] found id: "ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34"
	I1030 19:49:55.209590  446887 cri.go:89] found id: ""
	I1030 19:49:55.209599  446887 logs.go:282] 1 containers: [ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34]
	I1030 19:49:55.209659  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:55.214190  446887 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:55.214263  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:55.257056  446887 cri.go:89] found id: ""
	I1030 19:49:55.257091  446887 logs.go:282] 0 containers: []
	W1030 19:49:55.257103  446887 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:55.257111  446887 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1030 19:49:55.257165  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1030 19:49:55.300194  446887 cri.go:89] found id: "60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034"
	I1030 19:49:55.300224  446887 cri.go:89] found id: "8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6"
	I1030 19:49:55.300229  446887 cri.go:89] found id: ""
	I1030 19:49:55.300238  446887 logs.go:282] 2 containers: [60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034 8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6]
	I1030 19:49:55.300290  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:55.304750  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:55.309249  446887 logs.go:123] Gathering logs for etcd [a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5] ...
	I1030 19:49:55.309276  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5"
	I1030 19:49:55.363959  446887 logs.go:123] Gathering logs for coredns [87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2] ...
	I1030 19:49:55.363994  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2"
	I1030 19:49:55.412667  446887 logs.go:123] Gathering logs for kube-scheduler [0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf] ...
	I1030 19:49:55.412703  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf"
	I1030 19:49:55.455381  446887 logs.go:123] Gathering logs for kube-proxy [2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6] ...
	I1030 19:49:55.455420  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6"
	I1030 19:49:55.494657  446887 logs.go:123] Gathering logs for kube-controller-manager [ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34] ...
	I1030 19:49:55.494689  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34"
	I1030 19:49:55.552740  446887 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:55.552773  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:55.627724  446887 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:55.627765  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:55.642263  446887 logs.go:123] Gathering logs for kube-apiserver [549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f] ...
	I1030 19:49:55.642300  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f"
	I1030 19:49:55.691079  446887 logs.go:123] Gathering logs for storage-provisioner [60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034] ...
	I1030 19:49:55.691111  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034"
	I1030 19:49:55.730111  446887 logs.go:123] Gathering logs for container status ...
	I1030 19:49:55.730151  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:55.785155  446887 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:55.785189  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:49:55.924592  446887 logs.go:123] Gathering logs for storage-provisioner [8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6] ...
	I1030 19:49:55.924633  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6"
	I1030 19:49:55.970229  446887 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:55.970267  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:57.333378  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:59.334394  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:59.039071  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:59.053648  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:59.053722  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:59.097620  447486 cri.go:89] found id: ""
	I1030 19:49:59.097650  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.097661  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:59.097669  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:59.097738  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:59.139136  447486 cri.go:89] found id: ""
	I1030 19:49:59.139176  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.139188  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:59.139199  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:59.139270  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:59.180322  447486 cri.go:89] found id: ""
	I1030 19:49:59.180361  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.180371  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:59.180384  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:59.180453  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:59.217374  447486 cri.go:89] found id: ""
	I1030 19:49:59.217422  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.217434  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:59.217443  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:59.217498  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:59.257857  447486 cri.go:89] found id: ""
	I1030 19:49:59.257884  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.257894  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:59.257901  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:59.257968  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:59.297679  447486 cri.go:89] found id: ""
	I1030 19:49:59.297713  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.297724  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:59.297733  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:59.297795  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:59.341469  447486 cri.go:89] found id: ""
	I1030 19:49:59.341499  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.341509  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:59.341517  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:59.341587  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:59.381677  447486 cri.go:89] found id: ""
	I1030 19:49:59.381704  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.381713  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:59.381723  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:59.381735  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:59.441396  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:59.441428  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:59.457105  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:59.457139  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:59.532023  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:59.532051  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:59.532064  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:59.621685  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:59.621720  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:58.481425  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:00.481912  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:02.482130  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:59.010542  446887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:59.027463  446887 api_server.go:72] duration metric: took 4m17.923507495s to wait for apiserver process to appear ...
	I1030 19:49:59.027488  446887 api_server.go:88] waiting for apiserver healthz status ...
	I1030 19:49:59.027524  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:59.027571  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:59.066364  446887 cri.go:89] found id: "549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f"
	I1030 19:49:59.066391  446887 cri.go:89] found id: ""
	I1030 19:49:59.066401  446887 logs.go:282] 1 containers: [549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f]
	I1030 19:49:59.066463  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.072454  446887 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:59.072535  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:59.118043  446887 cri.go:89] found id: "a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5"
	I1030 19:49:59.118072  446887 cri.go:89] found id: ""
	I1030 19:49:59.118081  446887 logs.go:282] 1 containers: [a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5]
	I1030 19:49:59.118142  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.122806  446887 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:59.122883  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:59.167475  446887 cri.go:89] found id: "87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2"
	I1030 19:49:59.167500  446887 cri.go:89] found id: ""
	I1030 19:49:59.167511  446887 logs.go:282] 1 containers: [87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2]
	I1030 19:49:59.167577  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.172181  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:59.172255  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:59.210384  446887 cri.go:89] found id: "0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf"
	I1030 19:49:59.210411  446887 cri.go:89] found id: ""
	I1030 19:49:59.210419  446887 logs.go:282] 1 containers: [0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf]
	I1030 19:49:59.210473  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.216032  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:59.216114  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:59.269770  446887 cri.go:89] found id: "2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6"
	I1030 19:49:59.269791  446887 cri.go:89] found id: ""
	I1030 19:49:59.269799  446887 logs.go:282] 1 containers: [2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6]
	I1030 19:49:59.269851  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.274161  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:59.274239  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:59.313907  446887 cri.go:89] found id: "ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34"
	I1030 19:49:59.313936  446887 cri.go:89] found id: ""
	I1030 19:49:59.313946  446887 logs.go:282] 1 containers: [ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34]
	I1030 19:49:59.314019  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.320687  446887 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:59.320766  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:59.367710  446887 cri.go:89] found id: ""
	I1030 19:49:59.367740  446887 logs.go:282] 0 containers: []
	W1030 19:49:59.367752  446887 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:59.367759  446887 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1030 19:49:59.367826  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1030 19:49:59.422716  446887 cri.go:89] found id: "60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034"
	I1030 19:49:59.422744  446887 cri.go:89] found id: "8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6"
	I1030 19:49:59.422750  446887 cri.go:89] found id: ""
	I1030 19:49:59.422763  446887 logs.go:282] 2 containers: [60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034 8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6]
	I1030 19:49:59.422827  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.428399  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.432404  446887 logs.go:123] Gathering logs for storage-provisioner [8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6] ...
	I1030 19:49:59.432429  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6"
	I1030 19:49:59.475798  446887 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:59.475839  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:59.548960  446887 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:59.548998  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:59.566839  446887 logs.go:123] Gathering logs for kube-proxy [2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6] ...
	I1030 19:49:59.566870  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6"
	I1030 19:49:59.606181  446887 logs.go:123] Gathering logs for kube-controller-manager [ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34] ...
	I1030 19:49:59.606210  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34"
	I1030 19:49:59.670134  446887 logs.go:123] Gathering logs for storage-provisioner [60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034] ...
	I1030 19:49:59.670170  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034"
	I1030 19:49:59.709224  446887 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:59.709253  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:00.132147  446887 logs.go:123] Gathering logs for container status ...
	I1030 19:50:00.132194  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:50:00.181124  446887 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:00.181171  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:50:00.306545  446887 logs.go:123] Gathering logs for kube-apiserver [549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f] ...
	I1030 19:50:00.306585  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f"
	I1030 19:50:00.352129  446887 logs.go:123] Gathering logs for etcd [a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5] ...
	I1030 19:50:00.352169  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5"
	I1030 19:50:00.398083  446887 logs.go:123] Gathering logs for coredns [87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2] ...
	I1030 19:50:00.398119  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2"
	I1030 19:50:00.439813  446887 logs.go:123] Gathering logs for kube-scheduler [0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf] ...
	I1030 19:50:00.439851  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf"
	I1030 19:50:02.978477  446887 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8444/healthz ...
	I1030 19:50:02.983776  446887 api_server.go:279] https://192.168.39.92:8444/healthz returned 200:
	ok
	I1030 19:50:02.984791  446887 api_server.go:141] control plane version: v1.31.2
	I1030 19:50:02.984814  446887 api_server.go:131] duration metric: took 3.957319689s to wait for apiserver health ...
	I1030 19:50:02.984822  446887 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 19:50:02.984844  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:50:02.984902  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:50:03.024715  446887 cri.go:89] found id: "549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f"
	I1030 19:50:03.024745  446887 cri.go:89] found id: ""
	I1030 19:50:03.024754  446887 logs.go:282] 1 containers: [549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f]
	I1030 19:50:03.024820  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.029121  446887 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:50:03.029188  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:50:03.064462  446887 cri.go:89] found id: "a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5"
	I1030 19:50:03.064489  446887 cri.go:89] found id: ""
	I1030 19:50:03.064500  446887 logs.go:282] 1 containers: [a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5]
	I1030 19:50:03.064564  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.068587  446887 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:50:03.068665  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:50:03.106880  446887 cri.go:89] found id: "87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2"
	I1030 19:50:03.106902  446887 cri.go:89] found id: ""
	I1030 19:50:03.106910  446887 logs.go:282] 1 containers: [87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2]
	I1030 19:50:03.106978  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.111313  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:50:03.111388  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:50:03.155761  446887 cri.go:89] found id: "0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf"
	I1030 19:50:03.155791  446887 cri.go:89] found id: ""
	I1030 19:50:03.155801  446887 logs.go:282] 1 containers: [0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf]
	I1030 19:50:03.155864  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.160616  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:50:03.160686  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:50:03.199028  446887 cri.go:89] found id: "2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6"
	I1030 19:50:03.199063  446887 cri.go:89] found id: ""
	I1030 19:50:03.199074  446887 logs.go:282] 1 containers: [2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6]
	I1030 19:50:03.199149  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.203348  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:50:03.203414  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:50:03.257739  446887 cri.go:89] found id: "ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34"
	I1030 19:50:03.257769  446887 cri.go:89] found id: ""
	I1030 19:50:03.257780  446887 logs.go:282] 1 containers: [ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34]
	I1030 19:50:03.257845  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.263357  446887 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:50:03.263417  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:50:03.309752  446887 cri.go:89] found id: ""
	I1030 19:50:03.309779  446887 logs.go:282] 0 containers: []
	W1030 19:50:03.309787  446887 logs.go:284] No container was found matching "kindnet"
	I1030 19:50:03.309793  446887 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1030 19:50:03.309843  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1030 19:50:03.351570  446887 cri.go:89] found id: "60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034"
	I1030 19:50:03.351593  446887 cri.go:89] found id: "8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6"
	I1030 19:50:03.351597  446887 cri.go:89] found id: ""
	I1030 19:50:03.351605  446887 logs.go:282] 2 containers: [60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034 8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6]
	I1030 19:50:03.351656  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.364414  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.369070  446887 logs.go:123] Gathering logs for dmesg ...
	I1030 19:50:03.369097  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:50:03.385129  446887 logs.go:123] Gathering logs for kube-apiserver [549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f] ...
	I1030 19:50:03.385161  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f"
	I1030 19:50:01.833117  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:04.334645  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:02.170623  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:50:02.184885  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:50:02.184975  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:50:02.223811  447486 cri.go:89] found id: ""
	I1030 19:50:02.223841  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.223849  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:50:02.223856  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:50:02.223908  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:50:02.260454  447486 cri.go:89] found id: ""
	I1030 19:50:02.260481  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.260491  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:50:02.260497  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:50:02.260554  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:50:02.296542  447486 cri.go:89] found id: ""
	I1030 19:50:02.296569  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.296577  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:50:02.296583  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:50:02.296631  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:50:02.332168  447486 cri.go:89] found id: ""
	I1030 19:50:02.332199  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.332211  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:50:02.332219  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:50:02.332287  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:50:02.366539  447486 cri.go:89] found id: ""
	I1030 19:50:02.366575  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.366586  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:50:02.366595  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:50:02.366659  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:50:02.401859  447486 cri.go:89] found id: ""
	I1030 19:50:02.401894  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.401915  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:50:02.401923  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:50:02.401991  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:50:02.446061  447486 cri.go:89] found id: ""
	I1030 19:50:02.446097  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.446108  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:50:02.446116  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:50:02.446181  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:50:02.488233  447486 cri.go:89] found id: ""
	I1030 19:50:02.488257  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.488265  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:50:02.488274  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:50:02.488294  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:50:02.544517  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:50:02.544554  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:50:02.558143  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:02.558179  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:50:02.628679  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:50:02.628706  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:50:02.628723  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:02.710246  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:50:02.710293  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:50:05.254846  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:50:05.269536  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:50:05.269599  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:50:05.303724  447486 cri.go:89] found id: ""
	I1030 19:50:05.303753  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.303761  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:50:05.303767  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:50:05.303819  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:50:05.339268  447486 cri.go:89] found id: ""
	I1030 19:50:05.339301  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.339322  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:50:05.339330  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:50:05.339405  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:50:05.375892  447486 cri.go:89] found id: ""
	I1030 19:50:05.375923  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.375930  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:50:05.375936  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:50:05.375988  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:50:05.413197  447486 cri.go:89] found id: ""
	I1030 19:50:05.413232  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.413243  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:50:05.413252  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:50:05.413329  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:50:05.452095  447486 cri.go:89] found id: ""
	I1030 19:50:05.452122  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.452130  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:50:05.452137  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:50:05.452193  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:50:05.490694  447486 cri.go:89] found id: ""
	I1030 19:50:05.490731  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.490744  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:50:05.490753  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:50:05.490808  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:50:05.523961  447486 cri.go:89] found id: ""
	I1030 19:50:05.523992  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.524001  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:50:05.524008  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:50:05.524060  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:50:05.558631  447486 cri.go:89] found id: ""
	I1030 19:50:05.558664  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.558673  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:50:05.558684  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:50:05.558699  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:50:05.596929  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:50:05.596958  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:50:05.647294  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:50:05.647332  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:50:05.661349  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:05.661377  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:50:05.730268  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:50:05.730299  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:50:05.730323  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:03.434675  446887 logs.go:123] Gathering logs for coredns [87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2] ...
	I1030 19:50:03.434708  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2"
	I1030 19:50:03.474767  446887 logs.go:123] Gathering logs for storage-provisioner [8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6] ...
	I1030 19:50:03.474803  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6"
	I1030 19:50:03.510301  446887 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:50:03.510331  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:03.887871  446887 logs.go:123] Gathering logs for storage-provisioner [60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034] ...
	I1030 19:50:03.887912  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034"
	I1030 19:50:03.930529  446887 logs.go:123] Gathering logs for container status ...
	I1030 19:50:03.930563  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:50:03.971064  446887 logs.go:123] Gathering logs for kubelet ...
	I1030 19:50:03.971102  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:50:04.040593  446887 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:04.040632  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:50:04.157377  446887 logs.go:123] Gathering logs for etcd [a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5] ...
	I1030 19:50:04.157418  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5"
	I1030 19:50:04.205779  446887 logs.go:123] Gathering logs for kube-scheduler [0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf] ...
	I1030 19:50:04.205816  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf"
	I1030 19:50:04.251434  446887 logs.go:123] Gathering logs for kube-proxy [2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6] ...
	I1030 19:50:04.251470  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6"
	I1030 19:50:04.288713  446887 logs.go:123] Gathering logs for kube-controller-manager [ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34] ...
	I1030 19:50:04.288747  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34"
	I1030 19:50:06.849298  446887 system_pods.go:59] 8 kube-system pods found
	I1030 19:50:06.849329  446887 system_pods.go:61] "coredns-7c65d6cfc9-9w8m8" [d285a845-e17e-4b87-837b-167dbd29f090] Running
	I1030 19:50:06.849334  446887 system_pods.go:61] "etcd-default-k8s-diff-port-768989" [400eedbb-ea1d-47a7-b342-ad29c8851552] Running
	I1030 19:50:06.849340  446887 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-768989" [40389878-99e8-4ae1-899b-a61fc3b7100b] Running
	I1030 19:50:06.849352  446887 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-768989" [4d4ca5b4-d36d-4f69-9712-b7544c4c9a43] Running
	I1030 19:50:06.849358  446887 system_pods.go:61] "kube-proxy-tsr5q" [60ad5830-1638-4209-a2b3-ef78b1df3d34] Running
	I1030 19:50:06.849367  446887 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-768989" [af4c921c-9ea2-4bf3-84c2-8748c3c210bb] Running
	I1030 19:50:06.849373  446887 system_pods.go:61] "metrics-server-6867b74b74-t85rd" [8e162c99-2a94-4340-abe9-f1b312980444] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:50:06.849377  446887 system_pods.go:61] "storage-provisioner" [76805df9-1fbf-468d-a909-3b7c4f889e11] Running
	I1030 19:50:06.849384  446887 system_pods.go:74] duration metric: took 3.864557334s to wait for pod list to return data ...
	I1030 19:50:06.849394  446887 default_sa.go:34] waiting for default service account to be created ...
	I1030 19:50:06.852015  446887 default_sa.go:45] found service account: "default"
	I1030 19:50:06.852037  446887 default_sa.go:55] duration metric: took 2.63686ms for default service account to be created ...
	I1030 19:50:06.852046  446887 system_pods.go:116] waiting for k8s-apps to be running ...
	I1030 19:50:06.856920  446887 system_pods.go:86] 8 kube-system pods found
	I1030 19:50:06.856945  446887 system_pods.go:89] "coredns-7c65d6cfc9-9w8m8" [d285a845-e17e-4b87-837b-167dbd29f090] Running
	I1030 19:50:06.856953  446887 system_pods.go:89] "etcd-default-k8s-diff-port-768989" [400eedbb-ea1d-47a7-b342-ad29c8851552] Running
	I1030 19:50:06.856959  446887 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-768989" [40389878-99e8-4ae1-899b-a61fc3b7100b] Running
	I1030 19:50:06.856966  446887 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-768989" [4d4ca5b4-d36d-4f69-9712-b7544c4c9a43] Running
	I1030 19:50:06.856972  446887 system_pods.go:89] "kube-proxy-tsr5q" [60ad5830-1638-4209-a2b3-ef78b1df3d34] Running
	I1030 19:50:06.856979  446887 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-768989" [af4c921c-9ea2-4bf3-84c2-8748c3c210bb] Running
	I1030 19:50:06.856996  446887 system_pods.go:89] "metrics-server-6867b74b74-t85rd" [8e162c99-2a94-4340-abe9-f1b312980444] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:50:06.857005  446887 system_pods.go:89] "storage-provisioner" [76805df9-1fbf-468d-a909-3b7c4f889e11] Running
	I1030 19:50:06.857015  446887 system_pods.go:126] duration metric: took 4.962745ms to wait for k8s-apps to be running ...
	I1030 19:50:06.857025  446887 system_svc.go:44] waiting for kubelet service to be running ....
	I1030 19:50:06.857086  446887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 19:50:06.874176  446887 system_svc.go:56] duration metric: took 17.144628ms WaitForService to wait for kubelet
	I1030 19:50:06.874206  446887 kubeadm.go:582] duration metric: took 4m25.770253397s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 19:50:06.874230  446887 node_conditions.go:102] verifying NodePressure condition ...
	I1030 19:50:06.876962  446887 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 19:50:06.876987  446887 node_conditions.go:123] node cpu capacity is 2
	I1030 19:50:06.877004  446887 node_conditions.go:105] duration metric: took 2.768174ms to run NodePressure ...
	I1030 19:50:06.877025  446887 start.go:241] waiting for startup goroutines ...
	I1030 19:50:06.877034  446887 start.go:246] waiting for cluster config update ...
	I1030 19:50:06.877070  446887 start.go:255] writing updated cluster config ...
	I1030 19:50:06.877355  446887 ssh_runner.go:195] Run: rm -f paused
	I1030 19:50:06.927147  446887 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1030 19:50:06.929103  446887 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-768989" cluster and "default" namespace by default
	I1030 19:50:04.981923  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:06.982630  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:06.834029  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:08.834616  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:08.312167  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:50:08.327121  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:50:08.327206  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:50:08.364871  447486 cri.go:89] found id: ""
	I1030 19:50:08.364905  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.364916  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:50:08.364924  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:50:08.364982  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:50:08.399179  447486 cri.go:89] found id: ""
	I1030 19:50:08.399215  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.399225  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:50:08.399231  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:50:08.399286  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:50:08.434308  447486 cri.go:89] found id: ""
	I1030 19:50:08.434340  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.434350  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:50:08.434356  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:50:08.434409  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:50:08.477152  447486 cri.go:89] found id: ""
	I1030 19:50:08.477184  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.477193  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:50:08.477204  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:50:08.477274  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:50:08.513678  447486 cri.go:89] found id: ""
	I1030 19:50:08.513706  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.513716  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:50:08.513725  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:50:08.513789  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:50:08.551427  447486 cri.go:89] found id: ""
	I1030 19:50:08.551459  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.551478  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:50:08.551485  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:50:08.551550  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:50:08.584224  447486 cri.go:89] found id: ""
	I1030 19:50:08.584260  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.584272  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:50:08.584282  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:50:08.584351  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:50:08.617603  447486 cri.go:89] found id: ""
	I1030 19:50:08.617638  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.617649  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:50:08.617660  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:08.617674  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:50:08.694201  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:50:08.694229  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:50:08.694247  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:08.775457  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:50:08.775500  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:50:08.816452  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:50:08.816496  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:50:08.868077  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:50:08.868114  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:50:11.383130  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:50:11.397672  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:50:11.397758  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:50:11.431923  447486 cri.go:89] found id: ""
	I1030 19:50:11.431959  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.431971  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:50:11.431980  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:50:11.432050  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:50:11.466959  447486 cri.go:89] found id: ""
	I1030 19:50:11.466996  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.467009  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:50:11.467018  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:50:11.467093  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:50:11.506399  447486 cri.go:89] found id: ""
	I1030 19:50:11.506425  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.506437  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:50:11.506444  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:50:11.506529  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:50:11.538606  447486 cri.go:89] found id: ""
	I1030 19:50:11.538635  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.538643  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:50:11.538649  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:50:11.538700  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:50:11.573265  447486 cri.go:89] found id: ""
	I1030 19:50:11.573296  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.573304  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:50:11.573310  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:50:11.573364  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:50:11.608522  447486 cri.go:89] found id: ""
	I1030 19:50:11.608549  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.608558  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:50:11.608569  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:50:11.608629  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:50:11.639758  447486 cri.go:89] found id: ""
	I1030 19:50:11.639784  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.639792  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:50:11.639797  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:50:11.639846  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:50:11.673381  447486 cri.go:89] found id: ""
	I1030 19:50:11.673414  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.673426  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:50:11.673439  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:50:11.673454  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:50:11.727368  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:50:11.727414  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:50:11.741267  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:11.741301  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:50:09.481159  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:11.483339  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:11.334468  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:13.832615  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	W1030 19:50:11.808126  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:50:11.808158  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:50:11.808174  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:11.888676  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:50:11.888713  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:50:14.431637  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:50:14.445315  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:50:14.445392  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:50:14.482059  447486 cri.go:89] found id: ""
	I1030 19:50:14.482097  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.482110  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:50:14.482118  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:50:14.482186  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:50:14.520802  447486 cri.go:89] found id: ""
	I1030 19:50:14.520834  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.520843  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:50:14.520849  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:50:14.520900  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:50:14.559965  447486 cri.go:89] found id: ""
	I1030 19:50:14.559996  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.560006  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:50:14.560012  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:50:14.560062  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:50:14.601831  447486 cri.go:89] found id: ""
	I1030 19:50:14.601865  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.601875  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:50:14.601881  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:50:14.601932  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:50:14.635307  447486 cri.go:89] found id: ""
	I1030 19:50:14.635339  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.635348  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:50:14.635355  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:50:14.635418  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:50:14.668618  447486 cri.go:89] found id: ""
	I1030 19:50:14.668648  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.668657  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:50:14.668664  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:50:14.668726  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:50:14.702597  447486 cri.go:89] found id: ""
	I1030 19:50:14.702633  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.702644  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:50:14.702653  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:50:14.702715  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:50:14.736860  447486 cri.go:89] found id: ""
	I1030 19:50:14.736899  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.736911  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:50:14.736925  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:50:14.736942  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:14.822015  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:50:14.822060  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:50:14.860153  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:50:14.860195  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:50:14.912230  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:50:14.912269  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:50:14.927032  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:14.927067  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:50:14.994401  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:50:13.975124  446965 pod_ready.go:82] duration metric: took 4m0.000158179s for pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace to be "Ready" ...
	E1030 19:50:13.975173  446965 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace to be "Ready" (will not retry!)
	I1030 19:50:13.975201  446965 pod_ready.go:39] duration metric: took 4m14.686087419s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:50:13.975238  446965 kubeadm.go:597] duration metric: took 4m22.157012059s to restartPrimaryControlPlane
	W1030 19:50:13.975313  446965 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1030 19:50:13.975366  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1030 19:50:15.833986  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:17.835468  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:17.494865  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:50:17.509934  447486 kubeadm.go:597] duration metric: took 4m3.074434895s to restartPrimaryControlPlane
	W1030 19:50:17.510016  447486 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1030 19:50:17.510051  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1030 19:50:18.496415  447486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 19:50:18.512328  447486 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 19:50:18.522293  447486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:50:18.532752  447486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:50:18.532772  447486 kubeadm.go:157] found existing configuration files:
	
	I1030 19:50:18.532823  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 19:50:18.542501  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:50:18.542560  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:50:18.552660  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 19:50:18.562585  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:50:18.562649  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:50:18.572321  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 19:50:18.581633  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:50:18.581689  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:50:18.592770  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 19:50:18.602414  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:50:18.602477  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1030 19:50:18.612334  447486 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1030 19:50:18.844753  447486 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1030 19:50:20.333715  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:22.832817  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:24.833349  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:27.332723  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:29.335009  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:31.832584  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:33.834506  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:36.333902  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:38.833159  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:40.157555  446965 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.182163055s)
	I1030 19:50:40.157637  446965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 19:50:40.174413  446965 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 19:50:40.184817  446965 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:50:40.195446  446965 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:50:40.195475  446965 kubeadm.go:157] found existing configuration files:
	
	I1030 19:50:40.195527  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 19:50:40.205509  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:50:40.205575  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:50:40.217343  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 19:50:40.227666  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:50:40.227729  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:50:40.237594  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 19:50:40.247151  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:50:40.247209  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:50:40.256854  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 19:50:40.266306  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:50:40.266379  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1030 19:50:40.276409  446965 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1030 19:50:40.322080  446965 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1030 19:50:40.322174  446965 kubeadm.go:310] [preflight] Running pre-flight checks
	I1030 19:50:40.433056  446965 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1030 19:50:40.433251  446965 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1030 19:50:40.433390  446965 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1030 19:50:40.445085  446965 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1030 19:50:40.447192  446965 out.go:235]   - Generating certificates and keys ...
	I1030 19:50:40.447301  446965 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1030 19:50:40.447395  446965 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1030 19:50:40.447512  446965 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1030 19:50:40.447600  446965 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1030 19:50:40.447735  446965 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1030 19:50:40.447825  446965 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1030 19:50:40.447912  446965 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1030 19:50:40.447999  446965 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1030 19:50:40.448108  446965 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1030 19:50:40.448208  446965 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1030 19:50:40.448266  446965 kubeadm.go:310] [certs] Using the existing "sa" key
	I1030 19:50:40.448345  446965 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1030 19:50:40.590735  446965 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1030 19:50:40.714139  446965 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1030 19:50:40.808334  446965 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1030 19:50:40.940687  446965 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1030 19:50:41.085266  446965 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1030 19:50:41.085840  446965 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1030 19:50:41.088415  446965 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1030 19:50:41.090229  446965 out.go:235]   - Booting up control plane ...
	I1030 19:50:41.090349  446965 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1030 19:50:41.090466  446965 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1030 19:50:41.090573  446965 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1030 19:50:41.112262  446965 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1030 19:50:41.118809  446965 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1030 19:50:41.118919  446965 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1030 19:50:41.243915  446965 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1030 19:50:41.244093  446965 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1030 19:50:41.745362  446965 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.630697ms
	I1030 19:50:41.745513  446965 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1030 19:50:40.834005  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:42.834286  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:46.748431  446965 kubeadm.go:310] [api-check] The API server is healthy after 5.001587935s
	I1030 19:50:46.762271  446965 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1030 19:50:46.781785  446965 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1030 19:50:46.806338  446965 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1030 19:50:46.806613  446965 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-042402 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1030 19:50:46.819762  446965 kubeadm.go:310] [bootstrap-token] Using token: k711fn.1we2gia9o31jm3ip
	I1030 19:50:46.821026  446965 out.go:235]   - Configuring RBAC rules ...
	I1030 19:50:46.821137  446965 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1030 19:50:46.827537  446965 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1030 19:50:46.836653  446965 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1030 19:50:46.844891  446965 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1030 19:50:46.848423  446965 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1030 19:50:46.851674  446965 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1030 19:50:47.157946  446965 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1030 19:50:47.615774  446965 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1030 19:50:48.154429  446965 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1030 19:50:48.159547  446965 kubeadm.go:310] 
	I1030 19:50:48.159636  446965 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1030 19:50:48.159648  446965 kubeadm.go:310] 
	I1030 19:50:48.159762  446965 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1030 19:50:48.159776  446965 kubeadm.go:310] 
	I1030 19:50:48.159806  446965 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1030 19:50:48.159880  446965 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1030 19:50:48.159934  446965 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1030 19:50:48.159944  446965 kubeadm.go:310] 
	I1030 19:50:48.160029  446965 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1030 19:50:48.160040  446965 kubeadm.go:310] 
	I1030 19:50:48.160123  446965 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1030 19:50:48.160154  446965 kubeadm.go:310] 
	I1030 19:50:48.160242  446965 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1030 19:50:48.160351  446965 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1030 19:50:48.160440  446965 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1030 19:50:48.160450  446965 kubeadm.go:310] 
	I1030 19:50:48.160570  446965 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1030 19:50:48.160652  446965 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1030 19:50:48.160660  446965 kubeadm.go:310] 
	I1030 19:50:48.160729  446965 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token k711fn.1we2gia9o31jm3ip \
	I1030 19:50:48.160818  446965 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b2143b9018fc79be6f35b8981f64b262448bd09f7227405c8dafa29481e81491 \
	I1030 19:50:48.160838  446965 kubeadm.go:310] 	--control-plane 
	I1030 19:50:48.160846  446965 kubeadm.go:310] 
	I1030 19:50:48.160943  446965 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1030 19:50:48.160955  446965 kubeadm.go:310] 
	I1030 19:50:48.161065  446965 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token k711fn.1we2gia9o31jm3ip \
	I1030 19:50:48.161205  446965 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b2143b9018fc79be6f35b8981f64b262448bd09f7227405c8dafa29481e81491 
	I1030 19:50:48.162302  446965 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1030 19:50:48.162390  446965 cni.go:84] Creating CNI manager for ""
	I1030 19:50:48.162408  446965 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:50:48.164041  446965 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1030 19:50:45.333255  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:47.334686  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:49.832993  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:48.165318  446965 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1030 19:50:48.176702  446965 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1030 19:50:48.199681  446965 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1030 19:50:48.199776  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:48.199840  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-042402 minikube.k8s.io/updated_at=2024_10_30T19_50_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0 minikube.k8s.io/name=embed-certs-042402 minikube.k8s.io/primary=true
	I1030 19:50:48.226617  446965 ops.go:34] apiserver oom_adj: -16
	I1030 19:50:48.404620  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:48.905366  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:49.405663  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:49.904925  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:50.405082  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:50.905099  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:51.404860  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:51.905534  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:52.405432  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:52.905289  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:53.010770  446965 kubeadm.go:1113] duration metric: took 4.811061462s to wait for elevateKubeSystemPrivileges
	I1030 19:50:53.010818  446965 kubeadm.go:394] duration metric: took 5m1.251362756s to StartCluster
	I1030 19:50:53.010849  446965 settings.go:142] acquiring lock: {Name:mk3778f95b8c1775512fd8244c173f81fde2f8a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:50:53.010948  446965 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 19:50:53.012997  446965 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/kubeconfig: {Name:mkad9ac9beb9742fc1a420e6d4412db8b65962e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:50:53.013284  446965 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.235 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 19:50:53.013411  446965 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1030 19:50:53.013518  446965 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-042402"
	I1030 19:50:53.013539  446965 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-042402"
	I1030 19:50:53.013539  446965 config.go:182] Loaded profile config "embed-certs-042402": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	W1030 19:50:53.013550  446965 addons.go:243] addon storage-provisioner should already be in state true
	I1030 19:50:53.013600  446965 host.go:66] Checking if "embed-certs-042402" exists ...
	I1030 19:50:53.013546  446965 addons.go:69] Setting default-storageclass=true in profile "embed-certs-042402"
	I1030 19:50:53.013605  446965 addons.go:69] Setting metrics-server=true in profile "embed-certs-042402"
	I1030 19:50:53.013635  446965 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-042402"
	I1030 19:50:53.013642  446965 addons.go:234] Setting addon metrics-server=true in "embed-certs-042402"
	W1030 19:50:53.013650  446965 addons.go:243] addon metrics-server should already be in state true
	I1030 19:50:53.013675  446965 host.go:66] Checking if "embed-certs-042402" exists ...
	I1030 19:50:53.013947  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:50:53.014005  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:50:53.014010  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:50:53.014022  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:50:53.014058  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:50:53.014112  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:50:53.015033  446965 out.go:177] * Verifying Kubernetes components...
	I1030 19:50:53.016527  446965 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:50:53.030033  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33163
	I1030 19:50:53.030290  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40505
	I1030 19:50:53.030618  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:50:53.030733  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:50:53.031192  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:50:53.031209  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:50:53.031342  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:50:53.031356  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:50:53.031577  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:50:53.031773  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetState
	I1030 19:50:53.031801  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:50:53.032289  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43131
	I1030 19:50:53.032910  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:50:53.032953  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:50:53.033170  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:50:53.033684  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:50:53.033699  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:50:53.035082  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:50:53.035104  446965 addons.go:234] Setting addon default-storageclass=true in "embed-certs-042402"
	W1030 19:50:53.035124  446965 addons.go:243] addon default-storageclass should already be in state true
	I1030 19:50:53.035158  446965 host.go:66] Checking if "embed-certs-042402" exists ...
	I1030 19:50:53.035461  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:50:53.035492  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:50:53.036666  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:50:53.036697  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:50:53.054685  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41111
	I1030 19:50:53.055271  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:50:53.055621  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33133
	I1030 19:50:53.055762  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:50:53.055779  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:50:53.056073  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:50:53.056192  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:50:53.056410  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetState
	I1030 19:50:53.056665  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:50:53.056688  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:50:53.057099  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:50:53.057693  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:50:53.057741  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:50:53.058427  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:50:53.058756  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38161
	I1030 19:50:53.059684  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:50:53.060230  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:50:53.060253  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:50:53.060597  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:50:53.060806  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetState
	I1030 19:50:53.060880  446965 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:50:53.062367  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:50:53.062469  446965 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 19:50:53.062506  446965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1030 19:50:53.062526  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:50:53.063955  446965 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1030 19:50:53.065131  446965 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1030 19:50:53.065153  446965 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1030 19:50:53.065173  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:50:53.065987  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:50:53.066607  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:50:53.066640  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:50:53.066723  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:50:53.066956  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:50:53.067102  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:50:53.067254  446965 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa Username:docker}
	I1030 19:50:53.068475  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:50:53.068916  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:50:53.068939  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:50:53.069098  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:50:53.069288  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:50:53.069457  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:50:53.069625  446965 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa Username:docker}
	I1030 19:50:53.075920  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33291
	I1030 19:50:53.076341  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:50:53.076758  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:50:53.076779  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:50:53.077042  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:50:53.077238  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetState
	I1030 19:50:53.078809  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:50:53.079065  446965 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1030 19:50:53.079088  446965 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1030 19:50:53.079105  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:50:53.081873  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:50:53.082309  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:50:53.082339  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:50:53.082515  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:50:53.082705  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:50:53.082863  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:50:53.083061  446965 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa Username:docker}
	I1030 19:50:53.274313  446965 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:50:53.305281  446965 node_ready.go:35] waiting up to 6m0s for node "embed-certs-042402" to be "Ready" ...
	I1030 19:50:53.313184  446965 node_ready.go:49] node "embed-certs-042402" has status "Ready":"True"
	I1030 19:50:53.313217  446965 node_ready.go:38] duration metric: took 7.892097ms for node "embed-certs-042402" to be "Ready" ...
	I1030 19:50:53.313230  446965 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:50:53.321668  446965 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hvg4g" in "kube-system" namespace to be "Ready" ...
	I1030 19:50:53.406960  446965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 19:50:53.427287  446965 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1030 19:50:53.427324  446965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1030 19:50:53.475089  446965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1030 19:50:53.485983  446965 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1030 19:50:53.486013  446965 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1030 19:50:53.570871  446965 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1030 19:50:53.570904  446965 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1030 19:50:53.670898  446965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1030 19:50:54.545328  446965 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.138329529s)
	I1030 19:50:54.545384  446965 main.go:141] libmachine: Making call to close driver server
	I1030 19:50:54.545383  446965 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.070259573s)
	I1030 19:50:54.545399  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Close
	I1030 19:50:54.545426  446965 main.go:141] libmachine: Making call to close driver server
	I1030 19:50:54.545445  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Close
	I1030 19:50:54.545720  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Closing plugin on server side
	I1030 19:50:54.545732  446965 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:50:54.545748  446965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:50:54.545757  446965 main.go:141] libmachine: Making call to close driver server
	I1030 19:50:54.545761  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Closing plugin on server side
	I1030 19:50:54.545765  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Close
	I1030 19:50:54.545787  446965 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:50:54.545794  446965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:50:54.545802  446965 main.go:141] libmachine: Making call to close driver server
	I1030 19:50:54.545808  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Close
	I1030 19:50:54.546139  446965 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:50:54.546162  446965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:50:54.546465  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Closing plugin on server side
	I1030 19:50:54.546468  446965 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:50:54.546507  446965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:50:54.576380  446965 main.go:141] libmachine: Making call to close driver server
	I1030 19:50:54.576408  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Close
	I1030 19:50:54.576738  446965 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:50:54.576787  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Closing plugin on server side
	I1030 19:50:54.576804  446965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:50:54.703670  446965 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.032714873s)
	I1030 19:50:54.703724  446965 main.go:141] libmachine: Making call to close driver server
	I1030 19:50:54.703736  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Close
	I1030 19:50:54.704025  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Closing plugin on server side
	I1030 19:50:54.704059  446965 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:50:54.704076  446965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:50:54.704085  446965 main.go:141] libmachine: Making call to close driver server
	I1030 19:50:54.704104  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Close
	I1030 19:50:54.704350  446965 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:50:54.704362  446965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:50:54.704374  446965 addons.go:475] Verifying addon metrics-server=true in "embed-certs-042402"
	I1030 19:50:54.706330  446965 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1030 19:50:51.833654  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:54.333879  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:54.707723  446965 addons.go:510] duration metric: took 1.694322523s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1030 19:50:55.328470  446965 pod_ready.go:103] pod "coredns-7c65d6cfc9-hvg4g" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:57.828224  446965 pod_ready.go:103] pod "coredns-7c65d6cfc9-hvg4g" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:56.832967  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:58.833284  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:59.828636  446965 pod_ready.go:103] pod "coredns-7c65d6cfc9-hvg4g" in "kube-system" namespace has status "Ready":"False"
	I1030 19:51:01.828151  446965 pod_ready.go:93] pod "coredns-7c65d6cfc9-hvg4g" in "kube-system" namespace has status "Ready":"True"
	I1030 19:51:01.828178  446965 pod_ready.go:82] duration metric: took 8.506481998s for pod "coredns-7c65d6cfc9-hvg4g" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:01.828187  446965 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-pzbpd" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:01.833094  446965 pod_ready.go:93] pod "coredns-7c65d6cfc9-pzbpd" in "kube-system" namespace has status "Ready":"True"
	I1030 19:51:01.833121  446965 pod_ready.go:82] duration metric: took 4.926401ms for pod "coredns-7c65d6cfc9-pzbpd" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:01.833133  446965 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:01.837391  446965 pod_ready.go:93] pod "etcd-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:51:01.837410  446965 pod_ready.go:82] duration metric: took 4.27047ms for pod "etcd-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:01.837419  446965 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:02.344200  446965 pod_ready.go:93] pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:51:02.344224  446965 pod_ready.go:82] duration metric: took 506.798667ms for pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:02.344233  446965 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:02.349020  446965 pod_ready.go:93] pod "kube-controller-manager-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:51:02.349042  446965 pod_ready.go:82] duration metric: took 4.801739ms for pod "kube-controller-manager-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:02.349055  446965 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-m9zwz" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:02.626109  446965 pod_ready.go:93] pod "kube-proxy-m9zwz" in "kube-system" namespace has status "Ready":"True"
	I1030 19:51:02.626137  446965 pod_ready.go:82] duration metric: took 277.074567ms for pod "kube-proxy-m9zwz" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:02.626146  446965 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:03.027456  446965 pod_ready.go:93] pod "kube-scheduler-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:51:03.027482  446965 pod_ready.go:82] duration metric: took 401.329277ms for pod "kube-scheduler-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:03.027493  446965 pod_ready.go:39] duration metric: took 9.714247169s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:51:03.027513  446965 api_server.go:52] waiting for apiserver process to appear ...
	I1030 19:51:03.027579  446965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:51:03.043403  446965 api_server.go:72] duration metric: took 10.030078869s to wait for apiserver process to appear ...
	I1030 19:51:03.043431  446965 api_server.go:88] waiting for apiserver healthz status ...
	I1030 19:51:03.043456  446965 api_server.go:253] Checking apiserver healthz at https://192.168.61.235:8443/healthz ...
	I1030 19:51:03.048722  446965 api_server.go:279] https://192.168.61.235:8443/healthz returned 200:
	ok
	I1030 19:51:03.049572  446965 api_server.go:141] control plane version: v1.31.2
	I1030 19:51:03.049595  446965 api_server.go:131] duration metric: took 6.156928ms to wait for apiserver health ...
	I1030 19:51:03.049603  446965 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 19:51:03.233170  446965 system_pods.go:59] 9 kube-system pods found
	I1030 19:51:03.233205  446965 system_pods.go:61] "coredns-7c65d6cfc9-hvg4g" [f9e7e143-3e12-4c1a-9fb0-6f58a37f8a55] Running
	I1030 19:51:03.233212  446965 system_pods.go:61] "coredns-7c65d6cfc9-pzbpd" [8f486ff4-c665-42ec-9b98-15ea94e0ded8] Running
	I1030 19:51:03.233217  446965 system_pods.go:61] "etcd-embed-certs-042402" [5149c262-698b-46be-9b15-c7b5e93fd3cd] Running
	I1030 19:51:03.233222  446965 system_pods.go:61] "kube-apiserver-embed-certs-042402" [9a6c3f9a-4b89-4e8d-abb2-445399eea3eb] Running
	I1030 19:51:03.233227  446965 system_pods.go:61] "kube-controller-manager-embed-certs-042402" [cce89309-3c5f-4d17-8426-fdb8a02acdb0] Running
	I1030 19:51:03.233231  446965 system_pods.go:61] "kube-proxy-m9zwz" [e7b6fb8b-2287-47c0-b9c8-a3b1c3020894] Running
	I1030 19:51:03.233236  446965 system_pods.go:61] "kube-scheduler-embed-certs-042402" [e408e85c-ac6c-4afb-8391-935b7c579b4f] Running
	I1030 19:51:03.233247  446965 system_pods.go:61] "metrics-server-6867b74b74-6hrq4" [a5bb1778-0a28-4649-a2ac-a5f0e1b810de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:51:03.233255  446965 system_pods.go:61] "storage-provisioner" [729733b2-e703-4e9b-9d05-a2f0fb632149] Running
	I1030 19:51:03.233272  446965 system_pods.go:74] duration metric: took 183.660307ms to wait for pod list to return data ...
	I1030 19:51:03.233287  446965 default_sa.go:34] waiting for default service account to be created ...
	I1030 19:51:03.427520  446965 default_sa.go:45] found service account: "default"
	I1030 19:51:03.427550  446965 default_sa.go:55] duration metric: took 194.254547ms for default service account to be created ...
	I1030 19:51:03.427562  446965 system_pods.go:116] waiting for k8s-apps to be running ...
	I1030 19:51:03.629316  446965 system_pods.go:86] 9 kube-system pods found
	I1030 19:51:03.629351  446965 system_pods.go:89] "coredns-7c65d6cfc9-hvg4g" [f9e7e143-3e12-4c1a-9fb0-6f58a37f8a55] Running
	I1030 19:51:03.629364  446965 system_pods.go:89] "coredns-7c65d6cfc9-pzbpd" [8f486ff4-c665-42ec-9b98-15ea94e0ded8] Running
	I1030 19:51:03.629370  446965 system_pods.go:89] "etcd-embed-certs-042402" [5149c262-698b-46be-9b15-c7b5e93fd3cd] Running
	I1030 19:51:03.629377  446965 system_pods.go:89] "kube-apiserver-embed-certs-042402" [9a6c3f9a-4b89-4e8d-abb2-445399eea3eb] Running
	I1030 19:51:03.629381  446965 system_pods.go:89] "kube-controller-manager-embed-certs-042402" [cce89309-3c5f-4d17-8426-fdb8a02acdb0] Running
	I1030 19:51:03.629386  446965 system_pods.go:89] "kube-proxy-m9zwz" [e7b6fb8b-2287-47c0-b9c8-a3b1c3020894] Running
	I1030 19:51:03.629391  446965 system_pods.go:89] "kube-scheduler-embed-certs-042402" [e408e85c-ac6c-4afb-8391-935b7c579b4f] Running
	I1030 19:51:03.629399  446965 system_pods.go:89] "metrics-server-6867b74b74-6hrq4" [a5bb1778-0a28-4649-a2ac-a5f0e1b810de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:51:03.629405  446965 system_pods.go:89] "storage-provisioner" [729733b2-e703-4e9b-9d05-a2f0fb632149] Running
	I1030 19:51:03.629418  446965 system_pods.go:126] duration metric: took 201.847233ms to wait for k8s-apps to be running ...
	I1030 19:51:03.629432  446965 system_svc.go:44] waiting for kubelet service to be running ....
	I1030 19:51:03.629486  446965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 19:51:03.649120  446965 system_svc.go:56] duration metric: took 19.675022ms WaitForService to wait for kubelet
	I1030 19:51:03.649166  446965 kubeadm.go:582] duration metric: took 10.635844977s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 19:51:03.649192  446965 node_conditions.go:102] verifying NodePressure condition ...
	I1030 19:51:03.826763  446965 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 19:51:03.826790  446965 node_conditions.go:123] node cpu capacity is 2
	I1030 19:51:03.826803  446965 node_conditions.go:105] duration metric: took 177.604616ms to run NodePressure ...
	I1030 19:51:03.826819  446965 start.go:241] waiting for startup goroutines ...
	I1030 19:51:03.826827  446965 start.go:246] waiting for cluster config update ...
	I1030 19:51:03.826841  446965 start.go:255] writing updated cluster config ...
	I1030 19:51:03.827126  446965 ssh_runner.go:195] Run: rm -f paused
	I1030 19:51:03.877974  446965 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1030 19:51:03.880121  446965 out.go:177] * Done! kubectl is now configured to use "embed-certs-042402" cluster and "default" namespace by default
	I1030 19:51:00.833673  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:51:03.333042  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:51:05.333431  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:51:07.833229  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:51:09.833772  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:51:10.833131  446736 pod_ready.go:82] duration metric: took 4m0.006526983s for pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace to be "Ready" ...
	E1030 19:51:10.833166  446736 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1030 19:51:10.833178  446736 pod_ready.go:39] duration metric: took 4m7.416690025s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:51:10.833200  446736 api_server.go:52] waiting for apiserver process to appear ...
	I1030 19:51:10.833239  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:51:10.833300  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:51:10.884016  446736 cri.go:89] found id: "990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4"
	I1030 19:51:10.884046  446736 cri.go:89] found id: ""
	I1030 19:51:10.884055  446736 logs.go:282] 1 containers: [990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4]
	I1030 19:51:10.884108  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:10.888789  446736 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:51:10.888857  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:51:10.931994  446736 cri.go:89] found id: "ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67"
	I1030 19:51:10.932037  446736 cri.go:89] found id: ""
	I1030 19:51:10.932047  446736 logs.go:282] 1 containers: [ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67]
	I1030 19:51:10.932097  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:10.937113  446736 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:51:10.937181  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:51:10.977951  446736 cri.go:89] found id: "1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd"
	I1030 19:51:10.977982  446736 cri.go:89] found id: ""
	I1030 19:51:10.977993  446736 logs.go:282] 1 containers: [1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd]
	I1030 19:51:10.978050  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:10.982791  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:51:10.982863  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:51:11.021741  446736 cri.go:89] found id: "2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889"
	I1030 19:51:11.021770  446736 cri.go:89] found id: ""
	I1030 19:51:11.021780  446736 logs.go:282] 1 containers: [2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889]
	I1030 19:51:11.021837  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:11.026590  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:51:11.026653  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:51:11.068839  446736 cri.go:89] found id: "0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9"
	I1030 19:51:11.068873  446736 cri.go:89] found id: ""
	I1030 19:51:11.068885  446736 logs.go:282] 1 containers: [0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9]
	I1030 19:51:11.068946  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:11.073103  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:51:11.073171  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:51:11.108404  446736 cri.go:89] found id: "cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c"
	I1030 19:51:11.108432  446736 cri.go:89] found id: ""
	I1030 19:51:11.108443  446736 logs.go:282] 1 containers: [cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c]
	I1030 19:51:11.108506  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:11.112903  446736 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:51:11.112974  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:51:11.153767  446736 cri.go:89] found id: ""
	I1030 19:51:11.153800  446736 logs.go:282] 0 containers: []
	W1030 19:51:11.153812  446736 logs.go:284] No container was found matching "kindnet"
	I1030 19:51:11.153821  446736 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1030 19:51:11.153892  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1030 19:51:11.194649  446736 cri.go:89] found id: "822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4"
	I1030 19:51:11.194681  446736 cri.go:89] found id: "de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef"
	I1030 19:51:11.194687  446736 cri.go:89] found id: ""
	I1030 19:51:11.194697  446736 logs.go:282] 2 containers: [822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4 de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef]
	I1030 19:51:11.194770  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:11.199037  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:11.202957  446736 logs.go:123] Gathering logs for etcd [ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67] ...
	I1030 19:51:11.202984  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67"
	I1030 19:51:11.246187  446736 logs.go:123] Gathering logs for kube-proxy [0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9] ...
	I1030 19:51:11.246220  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9"
	I1030 19:51:11.286608  446736 logs.go:123] Gathering logs for kube-controller-manager [cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c] ...
	I1030 19:51:11.286643  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c"
	I1030 19:51:11.339119  446736 logs.go:123] Gathering logs for storage-provisioner [822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4] ...
	I1030 19:51:11.339157  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4"
	I1030 19:51:11.376624  446736 logs.go:123] Gathering logs for storage-provisioner [de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef] ...
	I1030 19:51:11.376653  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef"
	I1030 19:51:11.411401  446736 logs.go:123] Gathering logs for kubelet ...
	I1030 19:51:11.411431  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:51:11.481668  446736 logs.go:123] Gathering logs for dmesg ...
	I1030 19:51:11.481710  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:51:11.497767  446736 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:51:11.497799  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:51:11.612001  446736 logs.go:123] Gathering logs for kube-apiserver [990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4] ...
	I1030 19:51:11.612034  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4"
	I1030 19:51:11.656553  446736 logs.go:123] Gathering logs for coredns [1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd] ...
	I1030 19:51:11.656589  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd"
	I1030 19:51:11.695387  446736 logs.go:123] Gathering logs for kube-scheduler [2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889] ...
	I1030 19:51:11.695428  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889"
	I1030 19:51:11.732386  446736 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:51:11.732419  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:51:12.217007  446736 logs.go:123] Gathering logs for container status ...
	I1030 19:51:12.217056  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:51:14.769155  446736 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:51:14.787096  446736 api_server.go:72] duration metric: took 4m17.097569041s to wait for apiserver process to appear ...
	I1030 19:51:14.787128  446736 api_server.go:88] waiting for apiserver healthz status ...
	I1030 19:51:14.787176  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:51:14.787235  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:51:14.823506  446736 cri.go:89] found id: "990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4"
	I1030 19:51:14.823533  446736 cri.go:89] found id: ""
	I1030 19:51:14.823541  446736 logs.go:282] 1 containers: [990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4]
	I1030 19:51:14.823595  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:14.828125  446736 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:51:14.828214  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:51:14.867890  446736 cri.go:89] found id: "ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67"
	I1030 19:51:14.867914  446736 cri.go:89] found id: ""
	I1030 19:51:14.867922  446736 logs.go:282] 1 containers: [ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67]
	I1030 19:51:14.867970  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:14.873213  446736 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:51:14.873283  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:51:14.913068  446736 cri.go:89] found id: "1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd"
	I1030 19:51:14.913103  446736 cri.go:89] found id: ""
	I1030 19:51:14.913114  446736 logs.go:282] 1 containers: [1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd]
	I1030 19:51:14.913179  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:14.918380  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:51:14.918459  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:51:14.956150  446736 cri.go:89] found id: "2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889"
	I1030 19:51:14.956177  446736 cri.go:89] found id: ""
	I1030 19:51:14.956187  446736 logs.go:282] 1 containers: [2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889]
	I1030 19:51:14.956294  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:14.960781  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:51:14.960836  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:51:15.001804  446736 cri.go:89] found id: "0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9"
	I1030 19:51:15.001833  446736 cri.go:89] found id: ""
	I1030 19:51:15.001844  446736 logs.go:282] 1 containers: [0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9]
	I1030 19:51:15.001893  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:15.006341  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:51:15.006401  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:51:15.045202  446736 cri.go:89] found id: "cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c"
	I1030 19:51:15.045236  446736 cri.go:89] found id: ""
	I1030 19:51:15.045247  446736 logs.go:282] 1 containers: [cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c]
	I1030 19:51:15.045326  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:15.051967  446736 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:51:15.052031  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:51:15.091569  446736 cri.go:89] found id: ""
	I1030 19:51:15.091596  446736 logs.go:282] 0 containers: []
	W1030 19:51:15.091604  446736 logs.go:284] No container was found matching "kindnet"
	I1030 19:51:15.091611  446736 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1030 19:51:15.091668  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1030 19:51:15.135521  446736 cri.go:89] found id: "822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4"
	I1030 19:51:15.135551  446736 cri.go:89] found id: "de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef"
	I1030 19:51:15.135557  446736 cri.go:89] found id: ""
	I1030 19:51:15.135567  446736 logs.go:282] 2 containers: [822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4 de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef]
	I1030 19:51:15.135633  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:15.140215  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:15.145490  446736 logs.go:123] Gathering logs for etcd [ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67] ...
	I1030 19:51:15.145514  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67"
	I1030 19:51:15.205939  446736 logs.go:123] Gathering logs for coredns [1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd] ...
	I1030 19:51:15.205972  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd"
	I1030 19:51:15.240157  446736 logs.go:123] Gathering logs for kube-proxy [0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9] ...
	I1030 19:51:15.240194  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9"
	I1030 19:51:15.277168  446736 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:51:15.277200  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:51:15.708451  446736 logs.go:123] Gathering logs for container status ...
	I1030 19:51:15.708499  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:51:15.750544  446736 logs.go:123] Gathering logs for kubelet ...
	I1030 19:51:15.750577  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:51:15.820071  446736 logs.go:123] Gathering logs for kube-apiserver [990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4] ...
	I1030 19:51:15.820113  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4"
	I1030 19:51:15.870259  446736 logs.go:123] Gathering logs for kube-scheduler [2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889] ...
	I1030 19:51:15.870293  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889"
	I1030 19:51:15.919968  446736 logs.go:123] Gathering logs for kube-controller-manager [cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c] ...
	I1030 19:51:15.919998  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c"
	I1030 19:51:15.976948  446736 logs.go:123] Gathering logs for storage-provisioner [822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4] ...
	I1030 19:51:15.976992  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4"
	I1030 19:51:16.014451  446736 logs.go:123] Gathering logs for storage-provisioner [de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef] ...
	I1030 19:51:16.014498  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef"
	I1030 19:51:16.047766  446736 logs.go:123] Gathering logs for dmesg ...
	I1030 19:51:16.047806  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:51:16.070539  446736 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:51:16.070567  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:51:18.677834  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:51:18.682862  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 200:
	ok
	I1030 19:51:18.684023  446736 api_server.go:141] control plane version: v1.31.2
	I1030 19:51:18.684046  446736 api_server.go:131] duration metric: took 3.896911154s to wait for apiserver health ...
	I1030 19:51:18.684055  446736 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 19:51:18.684083  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:51:18.684130  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:51:18.724815  446736 cri.go:89] found id: "990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4"
	I1030 19:51:18.724848  446736 cri.go:89] found id: ""
	I1030 19:51:18.724860  446736 logs.go:282] 1 containers: [990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4]
	I1030 19:51:18.724928  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:18.729332  446736 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:51:18.729391  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:51:18.767614  446736 cri.go:89] found id: "ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67"
	I1030 19:51:18.767642  446736 cri.go:89] found id: ""
	I1030 19:51:18.767651  446736 logs.go:282] 1 containers: [ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67]
	I1030 19:51:18.767705  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:18.772420  446736 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:51:18.772525  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:51:18.811459  446736 cri.go:89] found id: "1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd"
	I1030 19:51:18.811489  446736 cri.go:89] found id: ""
	I1030 19:51:18.811501  446736 logs.go:282] 1 containers: [1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd]
	I1030 19:51:18.811563  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:18.816844  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:51:18.816906  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:51:18.853273  446736 cri.go:89] found id: "2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889"
	I1030 19:51:18.853299  446736 cri.go:89] found id: ""
	I1030 19:51:18.853308  446736 logs.go:282] 1 containers: [2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889]
	I1030 19:51:18.853362  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:18.857867  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:51:18.857946  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:51:18.907021  446736 cri.go:89] found id: "0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9"
	I1030 19:51:18.907052  446736 cri.go:89] found id: ""
	I1030 19:51:18.907063  446736 logs.go:282] 1 containers: [0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9]
	I1030 19:51:18.907126  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:18.913432  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:51:18.913506  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:51:18.978047  446736 cri.go:89] found id: "cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c"
	I1030 19:51:18.978072  446736 cri.go:89] found id: ""
	I1030 19:51:18.978083  446736 logs.go:282] 1 containers: [cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c]
	I1030 19:51:18.978150  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:18.983158  446736 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:51:18.983241  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:51:19.018992  446736 cri.go:89] found id: ""
	I1030 19:51:19.019018  446736 logs.go:282] 0 containers: []
	W1030 19:51:19.019026  446736 logs.go:284] No container was found matching "kindnet"
	I1030 19:51:19.019035  446736 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1030 19:51:19.019094  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1030 19:51:19.053821  446736 cri.go:89] found id: "822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4"
	I1030 19:51:19.053850  446736 cri.go:89] found id: "de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef"
	I1030 19:51:19.053855  446736 cri.go:89] found id: ""
	I1030 19:51:19.053862  446736 logs.go:282] 2 containers: [822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4 de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef]
	I1030 19:51:19.053922  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:19.063575  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:19.069254  446736 logs.go:123] Gathering logs for kubelet ...
	I1030 19:51:19.069283  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:51:19.139641  446736 logs.go:123] Gathering logs for kube-apiserver [990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4] ...
	I1030 19:51:19.139700  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4"
	I1030 19:51:19.198020  446736 logs.go:123] Gathering logs for kube-scheduler [2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889] ...
	I1030 19:51:19.198059  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889"
	I1030 19:51:19.239685  446736 logs.go:123] Gathering logs for kube-proxy [0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9] ...
	I1030 19:51:19.239727  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9"
	I1030 19:51:19.281510  446736 logs.go:123] Gathering logs for storage-provisioner [de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef] ...
	I1030 19:51:19.281545  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef"
	I1030 19:51:19.317842  446736 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:51:19.317872  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:51:19.659645  446736 logs.go:123] Gathering logs for dmesg ...
	I1030 19:51:19.659697  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:51:19.678087  446736 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:51:19.678121  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:51:19.778504  446736 logs.go:123] Gathering logs for etcd [ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67] ...
	I1030 19:51:19.778540  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67"
	I1030 19:51:19.826520  446736 logs.go:123] Gathering logs for coredns [1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd] ...
	I1030 19:51:19.826552  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd"
	I1030 19:51:19.863959  446736 logs.go:123] Gathering logs for kube-controller-manager [cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c] ...
	I1030 19:51:19.864011  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c"
	I1030 19:51:19.915777  446736 logs.go:123] Gathering logs for storage-provisioner [822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4] ...
	I1030 19:51:19.915814  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4"
	I1030 19:51:19.953036  446736 logs.go:123] Gathering logs for container status ...
	I1030 19:51:19.953069  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:51:22.502129  446736 system_pods.go:59] 8 kube-system pods found
	I1030 19:51:22.502162  446736 system_pods.go:61] "coredns-7c65d6cfc9-6cdl4" [95a0a01a-b0ea-4e0c-ac44-b7f7a486e4e0] Running
	I1030 19:51:22.502167  446736 system_pods.go:61] "etcd-no-preload-960512" [481cc4bc-fe2e-48fd-a486-574daf879096] Running
	I1030 19:51:22.502172  446736 system_pods.go:61] "kube-apiserver-no-preload-960512" [3662715f-dd2b-4522-aed3-3857e1104b8c] Running
	I1030 19:51:22.502175  446736 system_pods.go:61] "kube-controller-manager-no-preload-960512" [cb6045a8-1703-4693-90bf-81c7c990cb38] Running
	I1030 19:51:22.502179  446736 system_pods.go:61] "kube-proxy-fxqqc" [58db3fab-21e3-41b7-99f9-46ba3081db97] Running
	I1030 19:51:22.502182  446736 system_pods.go:61] "kube-scheduler-no-preload-960512" [6e293c25-ba96-4213-b107-236a1b828918] Running
	I1030 19:51:22.502188  446736 system_pods.go:61] "metrics-server-6867b74b74-72bb5" [7734d879-b974-42fd-9610-7e81ee6cbc13] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:51:22.502193  446736 system_pods.go:61] "storage-provisioner" [d4637a77-26ab-4013-a705-08317c00dd3b] Running
	I1030 19:51:22.502201  446736 system_pods.go:74] duration metric: took 3.818141259s to wait for pod list to return data ...
	I1030 19:51:22.502209  446736 default_sa.go:34] waiting for default service account to be created ...
	I1030 19:51:22.504541  446736 default_sa.go:45] found service account: "default"
	I1030 19:51:22.504562  446736 default_sa.go:55] duration metric: took 2.346763ms for default service account to be created ...
	I1030 19:51:22.504570  446736 system_pods.go:116] waiting for k8s-apps to be running ...
	I1030 19:51:22.509016  446736 system_pods.go:86] 8 kube-system pods found
	I1030 19:51:22.509039  446736 system_pods.go:89] "coredns-7c65d6cfc9-6cdl4" [95a0a01a-b0ea-4e0c-ac44-b7f7a486e4e0] Running
	I1030 19:51:22.509044  446736 system_pods.go:89] "etcd-no-preload-960512" [481cc4bc-fe2e-48fd-a486-574daf879096] Running
	I1030 19:51:22.509048  446736 system_pods.go:89] "kube-apiserver-no-preload-960512" [3662715f-dd2b-4522-aed3-3857e1104b8c] Running
	I1030 19:51:22.509052  446736 system_pods.go:89] "kube-controller-manager-no-preload-960512" [cb6045a8-1703-4693-90bf-81c7c990cb38] Running
	I1030 19:51:22.509055  446736 system_pods.go:89] "kube-proxy-fxqqc" [58db3fab-21e3-41b7-99f9-46ba3081db97] Running
	I1030 19:51:22.509058  446736 system_pods.go:89] "kube-scheduler-no-preload-960512" [6e293c25-ba96-4213-b107-236a1b828918] Running
	I1030 19:51:22.509101  446736 system_pods.go:89] "metrics-server-6867b74b74-72bb5" [7734d879-b974-42fd-9610-7e81ee6cbc13] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:51:22.509112  446736 system_pods.go:89] "storage-provisioner" [d4637a77-26ab-4013-a705-08317c00dd3b] Running
	I1030 19:51:22.509119  446736 system_pods.go:126] duration metric: took 4.544102ms to wait for k8s-apps to be running ...
	I1030 19:51:22.509125  446736 system_svc.go:44] waiting for kubelet service to be running ....
	I1030 19:51:22.509172  446736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 19:51:22.524883  446736 system_svc.go:56] duration metric: took 15.747977ms WaitForService to wait for kubelet
	I1030 19:51:22.524906  446736 kubeadm.go:582] duration metric: took 4m24.835384605s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 19:51:22.524929  446736 node_conditions.go:102] verifying NodePressure condition ...
	I1030 19:51:22.528315  446736 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 19:51:22.528334  446736 node_conditions.go:123] node cpu capacity is 2
	I1030 19:51:22.528345  446736 node_conditions.go:105] duration metric: took 3.411421ms to run NodePressure ...
	I1030 19:51:22.528357  446736 start.go:241] waiting for startup goroutines ...
	I1030 19:51:22.528364  446736 start.go:246] waiting for cluster config update ...
	I1030 19:51:22.528374  446736 start.go:255] writing updated cluster config ...
	I1030 19:51:22.528621  446736 ssh_runner.go:195] Run: rm -f paused
	I1030 19:51:22.577143  446736 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1030 19:51:22.580061  446736 out.go:177] * Done! kubectl is now configured to use "no-preload-960512" cluster and "default" namespace by default
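	(A quick sanity check at this point, not part of the captured run and assuming the kubeconfig context name matches the profile name as minikube normally configures it, would be: 'kubectl --context no-preload-960512 get pods -n kube-system'. It should list the same eight kube-system pods enumerated above, with metrics-server-6867b74b74-72bb5 still Pending.)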
	I1030 19:52:15.582907  447486 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1030 19:52:15.583009  447486 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1030 19:52:15.584345  447486 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1030 19:52:15.584419  447486 kubeadm.go:310] [preflight] Running pre-flight checks
	I1030 19:52:15.584522  447486 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1030 19:52:15.584659  447486 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1030 19:52:15.584763  447486 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1030 19:52:15.584827  447486 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1030 19:52:15.586931  447486 out.go:235]   - Generating certificates and keys ...
	I1030 19:52:15.587016  447486 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1030 19:52:15.587074  447486 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1030 19:52:15.587145  447486 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1030 19:52:15.587198  447486 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1030 19:52:15.587271  447486 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1030 19:52:15.587339  447486 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1030 19:52:15.587402  447486 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1030 19:52:15.587455  447486 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1030 19:52:15.587517  447486 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1030 19:52:15.587577  447486 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1030 19:52:15.587608  447486 kubeadm.go:310] [certs] Using the existing "sa" key
	I1030 19:52:15.587682  447486 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1030 19:52:15.587759  447486 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1030 19:52:15.587846  447486 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1030 19:52:15.587924  447486 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1030 19:52:15.587988  447486 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1030 19:52:15.588076  447486 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1030 19:52:15.588148  447486 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1030 19:52:15.588180  447486 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1030 19:52:15.588267  447486 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1030 19:52:15.589722  447486 out.go:235]   - Booting up control plane ...
	I1030 19:52:15.589834  447486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1030 19:52:15.589932  447486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1030 19:52:15.590014  447486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1030 19:52:15.590128  447486 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1030 19:52:15.590285  447486 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1030 19:52:15.590336  447486 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1030 19:52:15.590388  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:52:15.590560  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:52:15.590642  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:52:15.590842  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:52:15.590946  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:52:15.591155  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:52:15.591253  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:52:15.591513  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:52:15.591609  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:52:15.591841  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:52:15.591855  447486 kubeadm.go:310] 
	I1030 19:52:15.591900  447486 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1030 19:52:15.591956  447486 kubeadm.go:310] 		timed out waiting for the condition
	I1030 19:52:15.591966  447486 kubeadm.go:310] 
	I1030 19:52:15.592008  447486 kubeadm.go:310] 	This error is likely caused by:
	I1030 19:52:15.592051  447486 kubeadm.go:310] 		- The kubelet is not running
	I1030 19:52:15.592192  447486 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1030 19:52:15.592204  447486 kubeadm.go:310] 
	I1030 19:52:15.592318  447486 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1030 19:52:15.592360  447486 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1030 19:52:15.592391  447486 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1030 19:52:15.592397  447486 kubeadm.go:310] 
	I1030 19:52:15.592511  447486 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1030 19:52:15.592592  447486 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1030 19:52:15.592600  447486 kubeadm.go:310] 
	I1030 19:52:15.592733  447486 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1030 19:52:15.592850  447486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1030 19:52:15.592959  447486 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1030 19:52:15.593059  447486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1030 19:52:15.593138  447486 kubeadm.go:310] 
	W1030 19:52:15.593236  447486 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1030 19:52:15.593289  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1030 19:52:16.049810  447486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 19:52:16.065820  447486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:52:16.076166  447486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:52:16.076192  447486 kubeadm.go:157] found existing configuration files:
	
	I1030 19:52:16.076241  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 19:52:16.085309  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:52:16.085380  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:52:16.094868  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 19:52:16.104343  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:52:16.104395  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:52:16.113939  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 19:52:16.122836  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:52:16.122885  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:52:16.132083  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 19:52:16.141441  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:52:16.141487  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1030 19:52:16.150710  447486 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1030 19:52:16.222070  447486 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1030 19:52:16.222183  447486 kubeadm.go:310] [preflight] Running pre-flight checks
	I1030 19:52:16.366061  447486 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1030 19:52:16.366194  447486 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1030 19:52:16.366352  447486 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1030 19:52:16.541086  447486 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1030 19:52:16.543200  447486 out.go:235]   - Generating certificates and keys ...
	I1030 19:52:16.543303  447486 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1030 19:52:16.543398  447486 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1030 19:52:16.543523  447486 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1030 19:52:16.543625  447486 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1030 19:52:16.543749  447486 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1030 19:52:16.543848  447486 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1030 19:52:16.543942  447486 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1030 19:52:16.544020  447486 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1030 19:52:16.544096  447486 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1030 19:52:16.544193  447486 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1030 19:52:16.544252  447486 kubeadm.go:310] [certs] Using the existing "sa" key
	I1030 19:52:16.544343  447486 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1030 19:52:16.637454  447486 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1030 19:52:16.829430  447486 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1030 19:52:16.985259  447486 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1030 19:52:17.072312  447486 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1030 19:52:17.092511  447486 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1030 19:52:17.093595  447486 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1030 19:52:17.093654  447486 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1030 19:52:17.228039  447486 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1030 19:52:17.229647  447486 out.go:235]   - Booting up control plane ...
	I1030 19:52:17.229766  447486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1030 19:52:17.237333  447486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1030 19:52:17.239644  447486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1030 19:52:17.239774  447486 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1030 19:52:17.241037  447486 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1030 19:52:57.243167  447486 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1030 19:52:57.243769  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:52:57.244072  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:53:02.244240  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:53:02.244563  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:53:12.244991  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:53:12.245293  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:53:32.246428  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:53:32.246697  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:54:12.247834  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:54:12.248150  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:54:12.248173  447486 kubeadm.go:310] 
	I1030 19:54:12.248226  447486 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1030 19:54:12.248308  447486 kubeadm.go:310] 		timed out waiting for the condition
	I1030 19:54:12.248336  447486 kubeadm.go:310] 
	I1030 19:54:12.248386  447486 kubeadm.go:310] 	This error is likely caused by:
	I1030 19:54:12.248449  447486 kubeadm.go:310] 		- The kubelet is not running
	I1030 19:54:12.248598  447486 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1030 19:54:12.248609  447486 kubeadm.go:310] 
	I1030 19:54:12.248747  447486 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1030 19:54:12.248811  447486 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1030 19:54:12.248867  447486 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1030 19:54:12.248876  447486 kubeadm.go:310] 
	I1030 19:54:12.249013  447486 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1030 19:54:12.249111  447486 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1030 19:54:12.249129  447486 kubeadm.go:310] 
	I1030 19:54:12.249280  447486 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1030 19:54:12.249447  447486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1030 19:54:12.249564  447486 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1030 19:54:12.249662  447486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1030 19:54:12.249708  447486 kubeadm.go:310] 
	I1030 19:54:12.249878  447486 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1030 19:54:12.250015  447486 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1030 19:54:12.250208  447486 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1030 19:54:12.250221  447486 kubeadm.go:394] duration metric: took 7m57.874179721s to StartCluster
	I1030 19:54:12.250311  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:54:12.250399  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:54:12.292692  447486 cri.go:89] found id: ""
	I1030 19:54:12.292749  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.292760  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:54:12.292770  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:54:12.292840  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:54:12.329792  447486 cri.go:89] found id: ""
	I1030 19:54:12.329825  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.329835  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:54:12.329843  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:54:12.329905  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:54:12.364661  447486 cri.go:89] found id: ""
	I1030 19:54:12.364693  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.364702  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:54:12.364709  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:54:12.364764  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:54:12.400842  447486 cri.go:89] found id: ""
	I1030 19:54:12.400870  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.400878  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:54:12.400885  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:54:12.400943  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:54:12.440135  447486 cri.go:89] found id: ""
	I1030 19:54:12.440164  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.440172  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:54:12.440178  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:54:12.440228  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:54:12.476365  447486 cri.go:89] found id: ""
	I1030 19:54:12.476403  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.476416  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:54:12.476425  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:54:12.476503  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:54:12.519669  447486 cri.go:89] found id: ""
	I1030 19:54:12.519702  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.519715  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:54:12.519724  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:54:12.519791  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:54:12.554180  447486 cri.go:89] found id: ""
	I1030 19:54:12.554218  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.554230  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:54:12.554244  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:54:12.554261  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:54:12.669617  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:54:12.669660  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:54:12.708361  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:54:12.708392  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:54:12.763103  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:54:12.763145  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:54:12.778676  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:54:12.778712  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:54:12.865694  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1030 19:54:12.865732  447486 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1030 19:54:12.865797  447486 out.go:270] * 
	W1030 19:54:12.865908  447486 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1030 19:54:12.865929  447486 out.go:270] * 
	W1030 19:54:12.867124  447486 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 19:54:12.871111  447486 out.go:201] 
	W1030 19:54:12.872534  447486 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1030 19:54:12.872591  447486 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1030 19:54:12.872616  447486 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1030 19:54:12.874145  447486 out.go:201] 
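	A hypothetical follow-up applying the suggestion above (not part of the captured run; the profile name and Kubernetes version are taken from this log, and the flag value is exactly the one quoted in the suggestion) would be:
	
		minikube start -p old-k8s-version-516975 --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd
	
	Whether that resolves the failure depends on the node's actual cgroup configuration; the captured log only shows the kubelet health endpoint on 127.0.0.1:10248 refusing connections.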
	
	
	==> CRI-O <==
	Oct 30 20:03:18 old-k8s-version-516975 crio[630]: time="2024-10-30 20:03:18.311514783Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318598311475311,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f2016082-8c94-48bd-a52f-ea68713ff487 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:03:18 old-k8s-version-516975 crio[630]: time="2024-10-30 20:03:18.312182018Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eb5b3afb-7414-4aae-91e6-d596bde325cd name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:03:18 old-k8s-version-516975 crio[630]: time="2024-10-30 20:03:18.312234349Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eb5b3afb-7414-4aae-91e6-d596bde325cd name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:03:18 old-k8s-version-516975 crio[630]: time="2024-10-30 20:03:18.312266157Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=eb5b3afb-7414-4aae-91e6-d596bde325cd name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:03:18 old-k8s-version-516975 crio[630]: time="2024-10-30 20:03:18.347005213Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f6e456dc-a5e3-4dc2-a4a6-6feb59ecbd55 name=/runtime.v1.RuntimeService/Version
	Oct 30 20:03:18 old-k8s-version-516975 crio[630]: time="2024-10-30 20:03:18.347092388Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f6e456dc-a5e3-4dc2-a4a6-6feb59ecbd55 name=/runtime.v1.RuntimeService/Version
	Oct 30 20:03:18 old-k8s-version-516975 crio[630]: time="2024-10-30 20:03:18.348200438Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fbd32063-c2df-4f2d-9ec8-53d665ef7aff name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:03:18 old-k8s-version-516975 crio[630]: time="2024-10-30 20:03:18.348700985Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318598348678200,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fbd32063-c2df-4f2d-9ec8-53d665ef7aff name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:03:18 old-k8s-version-516975 crio[630]: time="2024-10-30 20:03:18.349383422Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a2bfc91b-e5bc-4f75-a3b4-53b73f4d12c0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:03:18 old-k8s-version-516975 crio[630]: time="2024-10-30 20:03:18.349441044Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a2bfc91b-e5bc-4f75-a3b4-53b73f4d12c0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:03:18 old-k8s-version-516975 crio[630]: time="2024-10-30 20:03:18.349473365Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a2bfc91b-e5bc-4f75-a3b4-53b73f4d12c0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:03:18 old-k8s-version-516975 crio[630]: time="2024-10-30 20:03:18.385393092Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=309e4047-19ce-44a6-a185-48cdb2e6fec9 name=/runtime.v1.RuntimeService/Version
	Oct 30 20:03:18 old-k8s-version-516975 crio[630]: time="2024-10-30 20:03:18.385477718Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=309e4047-19ce-44a6-a185-48cdb2e6fec9 name=/runtime.v1.RuntimeService/Version
	Oct 30 20:03:18 old-k8s-version-516975 crio[630]: time="2024-10-30 20:03:18.386843753Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0ac66959-93b3-4c40-9549-f340ab78300f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:03:18 old-k8s-version-516975 crio[630]: time="2024-10-30 20:03:18.387224207Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318598387195906,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0ac66959-93b3-4c40-9549-f340ab78300f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:03:18 old-k8s-version-516975 crio[630]: time="2024-10-30 20:03:18.387737552Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=60cff07d-cb57-42ea-a0cb-bdbd0f255b0f name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:03:18 old-k8s-version-516975 crio[630]: time="2024-10-30 20:03:18.387832554Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=60cff07d-cb57-42ea-a0cb-bdbd0f255b0f name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:03:18 old-k8s-version-516975 crio[630]: time="2024-10-30 20:03:18.387867274Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=60cff07d-cb57-42ea-a0cb-bdbd0f255b0f name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:03:18 old-k8s-version-516975 crio[630]: time="2024-10-30 20:03:18.420660680Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=59520143-4200-4eb5-88c3-12e8ce141cb4 name=/runtime.v1.RuntimeService/Version
	Oct 30 20:03:18 old-k8s-version-516975 crio[630]: time="2024-10-30 20:03:18.420860712Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=59520143-4200-4eb5-88c3-12e8ce141cb4 name=/runtime.v1.RuntimeService/Version
	Oct 30 20:03:18 old-k8s-version-516975 crio[630]: time="2024-10-30 20:03:18.422357702Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d6d50ffd-b061-4830-bdc6-a099152e1b17 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:03:18 old-k8s-version-516975 crio[630]: time="2024-10-30 20:03:18.422715660Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318598422697962,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d6d50ffd-b061-4830-bdc6-a099152e1b17 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:03:18 old-k8s-version-516975 crio[630]: time="2024-10-30 20:03:18.423358228Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8e56c174-5122-4ede-9208-b4e3ca4d5fd3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:03:18 old-k8s-version-516975 crio[630]: time="2024-10-30 20:03:18.423406709Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8e56c174-5122-4ede-9208-b4e3ca4d5fd3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:03:18 old-k8s-version-516975 crio[630]: time="2024-10-30 20:03:18.423448573Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8e56c174-5122-4ede-9208-b4e3ca4d5fd3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct30 19:45] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055573] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039872] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.137495] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.588302] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.607660] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct30 19:46] systemd-fstab-generator[558]: Ignoring "noauto" option for root device
	[  +0.060505] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061237] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.181319] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.145340] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.258638] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +7.609500] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.068837] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.029529] systemd-fstab-generator[1006]: Ignoring "noauto" option for root device
	[ +12.374948] kauditd_printk_skb: 46 callbacks suppressed
	[Oct30 19:50] systemd-fstab-generator[5076]: Ignoring "noauto" option for root device
	[Oct30 19:52] systemd-fstab-generator[5354]: Ignoring "noauto" option for root device
	[  +0.064946] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 20:03:18 up 17 min,  0 users,  load average: 0.06, 0.07, 0.03
	Linux old-k8s-version-516975 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Oct 30 20:03:12 old-k8s-version-516975 kubelet[6544]: net.(*Dialer).DialContext(0xc0001cc180, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000bf22a0, 0x24, 0x0, 0x0, 0x0, ...)
	Oct 30 20:03:12 old-k8s-version-516975 kubelet[6544]:         /usr/local/go/src/net/dial.go:425 +0x6e5
	Oct 30 20:03:12 old-k8s-version-516975 kubelet[6544]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000957100, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000bf22a0, 0x24, 0x60, 0x7f743d2b8de8, 0x118, ...)
	Oct 30 20:03:12 old-k8s-version-516975 kubelet[6544]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Oct 30 20:03:12 old-k8s-version-516975 kubelet[6544]: net/http.(*Transport).dial(0xc00068fb80, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000bf22a0, 0x24, 0x0, 0x0, 0x0, ...)
	Oct 30 20:03:12 old-k8s-version-516975 kubelet[6544]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Oct 30 20:03:12 old-k8s-version-516975 kubelet[6544]: net/http.(*Transport).dialConn(0xc00068fb80, 0x4f7fe00, 0xc000120018, 0x0, 0xc000bd8300, 0x5, 0xc000bf22a0, 0x24, 0x0, 0xc000a078c0, ...)
	Oct 30 20:03:12 old-k8s-version-516975 kubelet[6544]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Oct 30 20:03:12 old-k8s-version-516975 kubelet[6544]: net/http.(*Transport).dialConnFor(0xc00068fb80, 0xc000b43a20)
	Oct 30 20:03:12 old-k8s-version-516975 kubelet[6544]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Oct 30 20:03:12 old-k8s-version-516975 kubelet[6544]: created by net/http.(*Transport).queueForDial
	Oct 30 20:03:12 old-k8s-version-516975 kubelet[6544]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Oct 30 20:03:12 old-k8s-version-516975 kubelet[6544]: goroutine 159 [select]:
	Oct 30 20:03:12 old-k8s-version-516975 kubelet[6544]: net.(*netFD).connect.func2(0x4f7fe40, 0xc00031bb60, 0xc000a08c00, 0xc000bfc540, 0xc000bfc4e0)
	Oct 30 20:03:12 old-k8s-version-516975 kubelet[6544]:         /usr/local/go/src/net/fd_unix.go:118 +0xc5
	Oct 30 20:03:12 old-k8s-version-516975 kubelet[6544]: created by net.(*netFD).connect
	Oct 30 20:03:12 old-k8s-version-516975 kubelet[6544]:         /usr/local/go/src/net/fd_unix.go:117 +0x234
	Oct 30 20:03:13 old-k8s-version-516975 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Oct 30 20:03:13 old-k8s-version-516975 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 30 20:03:13 old-k8s-version-516975 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 30 20:03:13 old-k8s-version-516975 kubelet[6552]: I1030 20:03:13.687530    6552 server.go:416] Version: v1.20.0
	Oct 30 20:03:13 old-k8s-version-516975 kubelet[6552]: I1030 20:03:13.687749    6552 server.go:837] Client rotation is on, will bootstrap in background
	Oct 30 20:03:13 old-k8s-version-516975 kubelet[6552]: I1030 20:03:13.689610    6552 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Oct 30 20:03:13 old-k8s-version-516975 kubelet[6552]: I1030 20:03:13.690578    6552 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Oct 30 20:03:13 old-k8s-version-516975 kubelet[6552]: W1030 20:03:13.690734    6552 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-516975 -n old-k8s-version-516975
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-516975 -n old-k8s-version-516975: exit status 2 (225.660007ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-516975" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.51s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (466.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-768989 -n default-k8s-diff-port-768989
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-10-30 20:06:55.755573091 +0000 UTC m=+6372.502756653
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-768989 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-768989 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.082µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-768989 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-768989 -n default-k8s-diff-port-768989
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-768989 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-768989 logs -n 25: (1.301171111s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p no-preload-960512             | no-preload-960512            | jenkins | v1.34.0 | 30 Oct 24 19:37 UTC | 30 Oct 24 19:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-960512                                   | no-preload-960512            | jenkins | v1.34.0 | 30 Oct 24 19:37 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-768989  | default-k8s-diff-port-768989 | jenkins | v1.34.0 | 30 Oct 24 19:38 UTC | 30 Oct 24 19:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-768989 | jenkins | v1.34.0 | 30 Oct 24 19:38 UTC |                     |
	|         | default-k8s-diff-port-768989                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-042402            | embed-certs-042402           | jenkins | v1.34.0 | 30 Oct 24 19:38 UTC | 30 Oct 24 19:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-042402                                  | embed-certs-042402           | jenkins | v1.34.0 | 30 Oct 24 19:38 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-516975        | old-k8s-version-516975       | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-960512                  | no-preload-960512            | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-768989       | default-k8s-diff-port-768989 | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-960512                                   | no-preload-960512            | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC | 30 Oct 24 19:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-042402                 | embed-certs-042402           | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-768989 | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC | 30 Oct 24 19:50 UTC |
	|         | default-k8s-diff-port-768989                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-042402                                  | embed-certs-042402           | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC | 30 Oct 24 19:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-516975                              | old-k8s-version-516975       | jenkins | v1.34.0 | 30 Oct 24 19:42 UTC | 30 Oct 24 19:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-516975             | old-k8s-version-516975       | jenkins | v1.34.0 | 30 Oct 24 19:42 UTC | 30 Oct 24 19:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-516975                              | old-k8s-version-516975       | jenkins | v1.34.0 | 30 Oct 24 19:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-516975                              | old-k8s-version-516975       | jenkins | v1.34.0 | 30 Oct 24 20:05 UTC | 30 Oct 24 20:05 UTC |
	| start   | -p newest-cni-467894 --memory=2200 --alsologtostderr   | newest-cni-467894            | jenkins | v1.34.0 | 30 Oct 24 20:05 UTC | 30 Oct 24 20:06 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-960512                                   | no-preload-960512            | jenkins | v1.34.0 | 30 Oct 24 20:05 UTC | 30 Oct 24 20:05 UTC |
	| addons  | enable metrics-server -p newest-cni-467894             | newest-cni-467894            | jenkins | v1.34.0 | 30 Oct 24 20:06 UTC | 30 Oct 24 20:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-467894                                   | newest-cni-467894            | jenkins | v1.34.0 | 30 Oct 24 20:06 UTC | 30 Oct 24 20:06 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-467894                  | newest-cni-467894            | jenkins | v1.34.0 | 30 Oct 24 20:06 UTC | 30 Oct 24 20:06 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-467894 --memory=2200 --alsologtostderr   | newest-cni-467894            | jenkins | v1.34.0 | 30 Oct 24 20:06 UTC | 30 Oct 24 20:06 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p embed-certs-042402                                  | embed-certs-042402           | jenkins | v1.34.0 | 30 Oct 24 20:06 UTC | 30 Oct 24 20:06 UTC |
	| image   | newest-cni-467894 image list                           | newest-cni-467894            | jenkins | v1.34.0 | 30 Oct 24 20:06 UTC | 30 Oct 24 20:06 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/30 20:06:18
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1030 20:06:18.837023  454274 out.go:345] Setting OutFile to fd 1 ...
	I1030 20:06:18.837135  454274 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 20:06:18.837144  454274 out.go:358] Setting ErrFile to fd 2...
	I1030 20:06:18.837148  454274 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 20:06:18.837345  454274 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19883-381834/.minikube/bin
	I1030 20:06:18.837870  454274 out.go:352] Setting JSON to false
	I1030 20:06:18.838854  454274 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":13722,"bootTime":1730305057,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1030 20:06:18.838962  454274 start.go:139] virtualization: kvm guest
	I1030 20:06:18.841269  454274 out.go:177] * [newest-cni-467894] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1030 20:06:18.842845  454274 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 20:06:18.842888  454274 notify.go:220] Checking for updates...
	I1030 20:06:18.845550  454274 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 20:06:18.846978  454274 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 20:06:18.848146  454274 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 20:06:18.849324  454274 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1030 20:06:18.850653  454274 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 20:06:18.852300  454274 config.go:182] Loaded profile config "newest-cni-467894": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 20:06:18.852701  454274 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 20:06:18.852799  454274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 20:06:18.869916  454274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37601
	I1030 20:06:18.870346  454274 main.go:141] libmachine: () Calling .GetVersion
	I1030 20:06:18.870936  454274 main.go:141] libmachine: Using API Version  1
	I1030 20:06:18.870955  454274 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 20:06:18.871320  454274 main.go:141] libmachine: () Calling .GetMachineName
	I1030 20:06:18.871558  454274 main.go:141] libmachine: (newest-cni-467894) Calling .DriverName
	I1030 20:06:18.871803  454274 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 20:06:18.872186  454274 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 20:06:18.872224  454274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 20:06:18.888414  454274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37655
	I1030 20:06:18.888867  454274 main.go:141] libmachine: () Calling .GetVersion
	I1030 20:06:18.889379  454274 main.go:141] libmachine: Using API Version  1
	I1030 20:06:18.889409  454274 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 20:06:18.889745  454274 main.go:141] libmachine: () Calling .GetMachineName
	I1030 20:06:18.889985  454274 main.go:141] libmachine: (newest-cni-467894) Calling .DriverName
	I1030 20:06:18.925257  454274 out.go:177] * Using the kvm2 driver based on existing profile
	I1030 20:06:18.926711  454274 start.go:297] selected driver: kvm2
	I1030 20:06:18.926730  454274 start.go:901] validating driver "kvm2" against &{Name:newest-cni-467894 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-467894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.214 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 20:06:18.926848  454274 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 20:06:18.927551  454274 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 20:06:18.927621  454274 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19883-381834/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1030 20:06:18.942533  454274 install.go:137] /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1030 20:06:18.942930  454274 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1030 20:06:18.942965  454274 cni.go:84] Creating CNI manager for ""
	I1030 20:06:18.943014  454274 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 20:06:18.943050  454274 start.go:340] cluster config:
	{Name:newest-cni-467894 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-467894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.214 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 20:06:18.943150  454274 iso.go:125] acquiring lock: {Name:mk4a5115b605a7a5e9069193daa664d721189792 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 20:06:18.945006  454274 out.go:177] * Starting "newest-cni-467894" primary control-plane node in "newest-cni-467894" cluster
	I1030 20:06:18.946411  454274 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 20:06:18.946461  454274 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1030 20:06:18.946468  454274 cache.go:56] Caching tarball of preloaded images
	I1030 20:06:18.946588  454274 preload.go:172] Found /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1030 20:06:18.946603  454274 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1030 20:06:18.946702  454274 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/newest-cni-467894/config.json ...
	I1030 20:06:18.946886  454274 start.go:360] acquireMachinesLock for newest-cni-467894: {Name:mk35e25f53fa8cfadb39ca0ecdccfc2b3fbe845b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 20:06:18.946935  454274 start.go:364] duration metric: took 28.797µs to acquireMachinesLock for "newest-cni-467894"
	I1030 20:06:18.946955  454274 start.go:96] Skipping create...Using existing machine configuration
	I1030 20:06:18.946964  454274 fix.go:54] fixHost starting: 
	I1030 20:06:18.947245  454274 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 20:06:18.947284  454274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 20:06:18.962033  454274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40853
	I1030 20:06:18.962499  454274 main.go:141] libmachine: () Calling .GetVersion
	I1030 20:06:18.962985  454274 main.go:141] libmachine: Using API Version  1
	I1030 20:06:18.963006  454274 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 20:06:18.963335  454274 main.go:141] libmachine: () Calling .GetMachineName
	I1030 20:06:18.963517  454274 main.go:141] libmachine: (newest-cni-467894) Calling .DriverName
	I1030 20:06:18.963658  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetState
	I1030 20:06:18.965172  454274 fix.go:112] recreateIfNeeded on newest-cni-467894: state=Stopped err=<nil>
	I1030 20:06:18.965193  454274 main.go:141] libmachine: (newest-cni-467894) Calling .DriverName
	W1030 20:06:18.965380  454274 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 20:06:18.968240  454274 out.go:177] * Restarting existing kvm2 VM for "newest-cni-467894" ...
	I1030 20:06:18.969573  454274 main.go:141] libmachine: (newest-cni-467894) Calling .Start
	I1030 20:06:18.969725  454274 main.go:141] libmachine: (newest-cni-467894) Ensuring networks are active...
	I1030 20:06:18.970477  454274 main.go:141] libmachine: (newest-cni-467894) Ensuring network default is active
	I1030 20:06:18.970782  454274 main.go:141] libmachine: (newest-cni-467894) Ensuring network mk-newest-cni-467894 is active
	I1030 20:06:18.971167  454274 main.go:141] libmachine: (newest-cni-467894) Getting domain xml...
	I1030 20:06:18.971912  454274 main.go:141] libmachine: (newest-cni-467894) Creating domain...
	I1030 20:06:20.266615  454274 main.go:141] libmachine: (newest-cni-467894) Waiting to get IP...
	I1030 20:06:20.267593  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:20.268014  454274 main.go:141] libmachine: (newest-cni-467894) DBG | unable to find current IP address of domain newest-cni-467894 in network mk-newest-cni-467894
	I1030 20:06:20.268106  454274 main.go:141] libmachine: (newest-cni-467894) DBG | I1030 20:06:20.267988  454309 retry.go:31] will retry after 291.267282ms: waiting for machine to come up
	I1030 20:06:20.560487  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:20.560945  454274 main.go:141] libmachine: (newest-cni-467894) DBG | unable to find current IP address of domain newest-cni-467894 in network mk-newest-cni-467894
	I1030 20:06:20.560975  454274 main.go:141] libmachine: (newest-cni-467894) DBG | I1030 20:06:20.560898  454309 retry.go:31] will retry after 308.691445ms: waiting for machine to come up
	I1030 20:06:20.871480  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:20.872004  454274 main.go:141] libmachine: (newest-cni-467894) DBG | unable to find current IP address of domain newest-cni-467894 in network mk-newest-cni-467894
	I1030 20:06:20.872041  454274 main.go:141] libmachine: (newest-cni-467894) DBG | I1030 20:06:20.871967  454309 retry.go:31] will retry after 463.204508ms: waiting for machine to come up
	I1030 20:06:21.336776  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:21.337282  454274 main.go:141] libmachine: (newest-cni-467894) DBG | unable to find current IP address of domain newest-cni-467894 in network mk-newest-cni-467894
	I1030 20:06:21.337311  454274 main.go:141] libmachine: (newest-cni-467894) DBG | I1030 20:06:21.337243  454309 retry.go:31] will retry after 470.315578ms: waiting for machine to come up
	I1030 20:06:21.808817  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:21.809222  454274 main.go:141] libmachine: (newest-cni-467894) DBG | unable to find current IP address of domain newest-cni-467894 in network mk-newest-cni-467894
	I1030 20:06:21.809251  454274 main.go:141] libmachine: (newest-cni-467894) DBG | I1030 20:06:21.809162  454309 retry.go:31] will retry after 728.784541ms: waiting for machine to come up
	I1030 20:06:22.539275  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:22.539790  454274 main.go:141] libmachine: (newest-cni-467894) DBG | unable to find current IP address of domain newest-cni-467894 in network mk-newest-cni-467894
	I1030 20:06:22.539820  454274 main.go:141] libmachine: (newest-cni-467894) DBG | I1030 20:06:22.539730  454309 retry.go:31] will retry after 598.889303ms: waiting for machine to come up
	I1030 20:06:23.140668  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:23.141288  454274 main.go:141] libmachine: (newest-cni-467894) DBG | unable to find current IP address of domain newest-cni-467894 in network mk-newest-cni-467894
	I1030 20:06:23.141336  454274 main.go:141] libmachine: (newest-cni-467894) DBG | I1030 20:06:23.141225  454309 retry.go:31] will retry after 1.120774657s: waiting for machine to come up
	I1030 20:06:24.263355  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:24.263822  454274 main.go:141] libmachine: (newest-cni-467894) DBG | unable to find current IP address of domain newest-cni-467894 in network mk-newest-cni-467894
	I1030 20:06:24.263843  454274 main.go:141] libmachine: (newest-cni-467894) DBG | I1030 20:06:24.263776  454309 retry.go:31] will retry after 1.10623523s: waiting for machine to come up
	I1030 20:06:25.371739  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:25.372219  454274 main.go:141] libmachine: (newest-cni-467894) DBG | unable to find current IP address of domain newest-cni-467894 in network mk-newest-cni-467894
	I1030 20:06:25.372248  454274 main.go:141] libmachine: (newest-cni-467894) DBG | I1030 20:06:25.372159  454309 retry.go:31] will retry after 1.404013811s: waiting for machine to come up
	I1030 20:06:26.777705  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:26.778214  454274 main.go:141] libmachine: (newest-cni-467894) DBG | unable to find current IP address of domain newest-cni-467894 in network mk-newest-cni-467894
	I1030 20:06:26.778242  454274 main.go:141] libmachine: (newest-cni-467894) DBG | I1030 20:06:26.778157  454309 retry.go:31] will retry after 1.941521789s: waiting for machine to come up
	I1030 20:06:28.722450  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:28.722971  454274 main.go:141] libmachine: (newest-cni-467894) DBG | unable to find current IP address of domain newest-cni-467894 in network mk-newest-cni-467894
	I1030 20:06:28.722999  454274 main.go:141] libmachine: (newest-cni-467894) DBG | I1030 20:06:28.722927  454309 retry.go:31] will retry after 2.171775282s: waiting for machine to come up
	I1030 20:06:30.896707  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:30.897178  454274 main.go:141] libmachine: (newest-cni-467894) DBG | unable to find current IP address of domain newest-cni-467894 in network mk-newest-cni-467894
	I1030 20:06:30.897201  454274 main.go:141] libmachine: (newest-cni-467894) DBG | I1030 20:06:30.897112  454309 retry.go:31] will retry after 3.451516863s: waiting for machine to come up
	I1030 20:06:34.350156  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:34.350473  454274 main.go:141] libmachine: (newest-cni-467894) DBG | unable to find current IP address of domain newest-cni-467894 in network mk-newest-cni-467894
	I1030 20:06:34.350535  454274 main.go:141] libmachine: (newest-cni-467894) DBG | I1030 20:06:34.350429  454309 retry.go:31] will retry after 2.819153837s: waiting for machine to come up
	I1030 20:06:37.171511  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:37.171861  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has current primary IP address 192.168.50.214 and MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:37.171885  454274 main.go:141] libmachine: (newest-cni-467894) Found IP for machine: 192.168.50.214
	I1030 20:06:37.171897  454274 main.go:141] libmachine: (newest-cni-467894) Reserving static IP address...
	I1030 20:06:37.172287  454274 main.go:141] libmachine: (newest-cni-467894) Reserved static IP address: 192.168.50.214
	I1030 20:06:37.172310  454274 main.go:141] libmachine: (newest-cni-467894) DBG | found host DHCP lease matching {name: "newest-cni-467894", mac: "52:54:00:7b:de:75", ip: "192.168.50.214"} in network mk-newest-cni-467894: {Iface:virbr2 ExpiryTime:2024-10-30 21:05:35 +0000 UTC Type:0 Mac:52:54:00:7b:de:75 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:newest-cni-467894 Clientid:01:52:54:00:7b:de:75}
	I1030 20:06:37.172329  454274 main.go:141] libmachine: (newest-cni-467894) Waiting for SSH to be available...
	I1030 20:06:37.172360  454274 main.go:141] libmachine: (newest-cni-467894) DBG | skip adding static IP to network mk-newest-cni-467894 - found existing host DHCP lease matching {name: "newest-cni-467894", mac: "52:54:00:7b:de:75", ip: "192.168.50.214"}
	I1030 20:06:37.172377  454274 main.go:141] libmachine: (newest-cni-467894) DBG | Getting to WaitForSSH function...
	I1030 20:06:37.174208  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:37.174579  454274 main.go:141] libmachine: (newest-cni-467894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:de:75", ip: ""} in network mk-newest-cni-467894: {Iface:virbr2 ExpiryTime:2024-10-30 21:05:35 +0000 UTC Type:0 Mac:52:54:00:7b:de:75 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:newest-cni-467894 Clientid:01:52:54:00:7b:de:75}
	I1030 20:06:37.174614  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined IP address 192.168.50.214 and MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:37.174707  454274 main.go:141] libmachine: (newest-cni-467894) DBG | Using SSH client type: external
	I1030 20:06:37.174752  454274 main.go:141] libmachine: (newest-cni-467894) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/newest-cni-467894/id_rsa (-rw-------)
	I1030 20:06:37.174802  454274 main.go:141] libmachine: (newest-cni-467894) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/newest-cni-467894/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 20:06:37.174821  454274 main.go:141] libmachine: (newest-cni-467894) DBG | About to run SSH command:
	I1030 20:06:37.174842  454274 main.go:141] libmachine: (newest-cni-467894) DBG | exit 0
	I1030 20:06:37.298414  454274 main.go:141] libmachine: (newest-cni-467894) DBG | SSH cmd err, output: <nil>: 
	I1030 20:06:37.298816  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetConfigRaw
	I1030 20:06:37.299486  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetIP
	I1030 20:06:37.301785  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:37.302072  454274 main.go:141] libmachine: (newest-cni-467894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:de:75", ip: ""} in network mk-newest-cni-467894: {Iface:virbr2 ExpiryTime:2024-10-30 21:05:35 +0000 UTC Type:0 Mac:52:54:00:7b:de:75 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:newest-cni-467894 Clientid:01:52:54:00:7b:de:75}
	I1030 20:06:37.302105  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined IP address 192.168.50.214 and MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:37.302340  454274 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/newest-cni-467894/config.json ...
	I1030 20:06:37.302541  454274 machine.go:93] provisionDockerMachine start ...
	I1030 20:06:37.302562  454274 main.go:141] libmachine: (newest-cni-467894) Calling .DriverName
	I1030 20:06:37.302785  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHHostname
	I1030 20:06:37.305169  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:37.305458  454274 main.go:141] libmachine: (newest-cni-467894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:de:75", ip: ""} in network mk-newest-cni-467894: {Iface:virbr2 ExpiryTime:2024-10-30 21:05:35 +0000 UTC Type:0 Mac:52:54:00:7b:de:75 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:newest-cni-467894 Clientid:01:52:54:00:7b:de:75}
	I1030 20:06:37.305486  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined IP address 192.168.50.214 and MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:37.305709  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHPort
	I1030 20:06:37.305899  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHKeyPath
	I1030 20:06:37.306063  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHKeyPath
	I1030 20:06:37.306202  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHUsername
	I1030 20:06:37.306359  454274 main.go:141] libmachine: Using SSH client type: native
	I1030 20:06:37.306603  454274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I1030 20:06:37.306616  454274 main.go:141] libmachine: About to run SSH command:
	hostname
	I1030 20:06:37.406655  454274 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1030 20:06:37.406685  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetMachineName
	I1030 20:06:37.406936  454274 buildroot.go:166] provisioning hostname "newest-cni-467894"
	I1030 20:06:37.406964  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetMachineName
	I1030 20:06:37.407172  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHHostname
	I1030 20:06:37.409634  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:37.410002  454274 main.go:141] libmachine: (newest-cni-467894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:de:75", ip: ""} in network mk-newest-cni-467894: {Iface:virbr2 ExpiryTime:2024-10-30 21:05:35 +0000 UTC Type:0 Mac:52:54:00:7b:de:75 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:newest-cni-467894 Clientid:01:52:54:00:7b:de:75}
	I1030 20:06:37.410037  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined IP address 192.168.50.214 and MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:37.410168  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHPort
	I1030 20:06:37.410329  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHKeyPath
	I1030 20:06:37.410445  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHKeyPath
	I1030 20:06:37.410563  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHUsername
	I1030 20:06:37.410763  454274 main.go:141] libmachine: Using SSH client type: native
	I1030 20:06:37.410943  454274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I1030 20:06:37.410952  454274 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-467894 && echo "newest-cni-467894" | sudo tee /etc/hostname
	I1030 20:06:37.525083  454274 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-467894
	
	I1030 20:06:37.525116  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHHostname
	I1030 20:06:37.527655  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:37.527968  454274 main.go:141] libmachine: (newest-cni-467894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:de:75", ip: ""} in network mk-newest-cni-467894: {Iface:virbr2 ExpiryTime:2024-10-30 21:05:35 +0000 UTC Type:0 Mac:52:54:00:7b:de:75 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:newest-cni-467894 Clientid:01:52:54:00:7b:de:75}
	I1030 20:06:37.527998  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined IP address 192.168.50.214 and MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:37.528126  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHPort
	I1030 20:06:37.528320  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHKeyPath
	I1030 20:06:37.528485  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHKeyPath
	I1030 20:06:37.528657  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHUsername
	I1030 20:06:37.528812  454274 main.go:141] libmachine: Using SSH client type: native
	I1030 20:06:37.528992  454274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I1030 20:06:37.529007  454274 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-467894' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-467894/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-467894' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 20:06:37.639300  454274 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 20:06:37.639330  454274 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 20:06:37.639353  454274 buildroot.go:174] setting up certificates
	I1030 20:06:37.639363  454274 provision.go:84] configureAuth start
	I1030 20:06:37.639372  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetMachineName
	I1030 20:06:37.639688  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetIP
	I1030 20:06:37.642381  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:37.642772  454274 main.go:141] libmachine: (newest-cni-467894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:de:75", ip: ""} in network mk-newest-cni-467894: {Iface:virbr2 ExpiryTime:2024-10-30 21:05:35 +0000 UTC Type:0 Mac:52:54:00:7b:de:75 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:newest-cni-467894 Clientid:01:52:54:00:7b:de:75}
	I1030 20:06:37.642797  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined IP address 192.168.50.214 and MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:37.642996  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHHostname
	I1030 20:06:37.644957  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:37.645259  454274 main.go:141] libmachine: (newest-cni-467894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:de:75", ip: ""} in network mk-newest-cni-467894: {Iface:virbr2 ExpiryTime:2024-10-30 21:05:35 +0000 UTC Type:0 Mac:52:54:00:7b:de:75 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:newest-cni-467894 Clientid:01:52:54:00:7b:de:75}
	I1030 20:06:37.645281  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined IP address 192.168.50.214 and MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:37.645415  454274 provision.go:143] copyHostCerts
	I1030 20:06:37.645510  454274 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem, removing ...
	I1030 20:06:37.645528  454274 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 20:06:37.645601  454274 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 20:06:37.645702  454274 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem, removing ...
	I1030 20:06:37.645713  454274 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 20:06:37.645749  454274 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 20:06:37.645827  454274 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem, removing ...
	I1030 20:06:37.645835  454274 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 20:06:37.645868  454274 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 20:06:37.645932  454274 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.newest-cni-467894 san=[127.0.0.1 192.168.50.214 localhost minikube newest-cni-467894]
	I1030 20:06:37.888522  454274 provision.go:177] copyRemoteCerts
	I1030 20:06:37.888579  454274 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 20:06:37.888610  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHHostname
	I1030 20:06:37.891584  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:37.891861  454274 main.go:141] libmachine: (newest-cni-467894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:de:75", ip: ""} in network mk-newest-cni-467894: {Iface:virbr2 ExpiryTime:2024-10-30 21:05:35 +0000 UTC Type:0 Mac:52:54:00:7b:de:75 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:newest-cni-467894 Clientid:01:52:54:00:7b:de:75}
	I1030 20:06:37.891897  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined IP address 192.168.50.214 and MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:37.892064  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHPort
	I1030 20:06:37.892293  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHKeyPath
	I1030 20:06:37.892476  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHUsername
	I1030 20:06:37.892604  454274 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/newest-cni-467894/id_rsa Username:docker}
	I1030 20:06:37.972302  454274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1030 20:06:37.995267  454274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1030 20:06:38.020076  454274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 20:06:38.042957  454274 provision.go:87] duration metric: took 403.580298ms to configureAuth
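
configureAuth generates a server certificate for the SAN list logged above (127.0.0.1, 192.168.50.214, localhost, minikube, newest-cni-467894) and then copies server.pem, server-key.pem and ca.pem into /etc/docker on the guest. A rough crypto/x509 sketch of producing a certificate with that SAN set (self-signed here to keep it short; the certificate in the log is signed by the ca.pem/ca-key.pem pair it names):

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-467894"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs from the "generating server cert" line above.
    		DNSNames:    []string{"localhost", "minikube", "newest-cni-467894"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.214")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
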
	I1030 20:06:38.042986  454274 buildroot.go:189] setting minikube options for container-runtime
	I1030 20:06:38.043194  454274 config.go:182] Loaded profile config "newest-cni-467894": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 20:06:38.043280  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHHostname
	I1030 20:06:38.045941  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:38.046333  454274 main.go:141] libmachine: (newest-cni-467894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:de:75", ip: ""} in network mk-newest-cni-467894: {Iface:virbr2 ExpiryTime:2024-10-30 21:05:35 +0000 UTC Type:0 Mac:52:54:00:7b:de:75 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:newest-cni-467894 Clientid:01:52:54:00:7b:de:75}
	I1030 20:06:38.046369  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined IP address 192.168.50.214 and MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:38.046552  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHPort
	I1030 20:06:38.046742  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHKeyPath
	I1030 20:06:38.046922  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHKeyPath
	I1030 20:06:38.047037  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHUsername
	I1030 20:06:38.047175  454274 main.go:141] libmachine: Using SSH client type: native
	I1030 20:06:38.047381  454274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I1030 20:06:38.047398  454274 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 20:06:38.257219  454274 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 20:06:38.257257  454274 machine.go:96] duration metric: took 954.699706ms to provisionDockerMachine
	I1030 20:06:38.257273  454274 start.go:293] postStartSetup for "newest-cni-467894" (driver="kvm2")
	I1030 20:06:38.257285  454274 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 20:06:38.257309  454274 main.go:141] libmachine: (newest-cni-467894) Calling .DriverName
	I1030 20:06:38.257648  454274 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 20:06:38.257687  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHHostname
	I1030 20:06:38.260110  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:38.260483  454274 main.go:141] libmachine: (newest-cni-467894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:de:75", ip: ""} in network mk-newest-cni-467894: {Iface:virbr2 ExpiryTime:2024-10-30 21:05:35 +0000 UTC Type:0 Mac:52:54:00:7b:de:75 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:newest-cni-467894 Clientid:01:52:54:00:7b:de:75}
	I1030 20:06:38.260511  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined IP address 192.168.50.214 and MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:38.260683  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHPort
	I1030 20:06:38.260899  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHKeyPath
	I1030 20:06:38.261043  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHUsername
	I1030 20:06:38.261201  454274 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/newest-cni-467894/id_rsa Username:docker}
	I1030 20:06:38.342077  454274 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 20:06:38.346322  454274 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 20:06:38.346346  454274 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 20:06:38.346437  454274 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 20:06:38.346562  454274 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 20:06:38.346674  454274 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 20:06:38.356539  454274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 20:06:38.379229  454274 start.go:296] duration metric: took 121.940479ms for postStartSetup
	I1030 20:06:38.379277  454274 fix.go:56] duration metric: took 19.43231194s for fixHost
	I1030 20:06:38.379302  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHHostname
	I1030 20:06:38.381928  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:38.382294  454274 main.go:141] libmachine: (newest-cni-467894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:de:75", ip: ""} in network mk-newest-cni-467894: {Iface:virbr2 ExpiryTime:2024-10-30 21:05:35 +0000 UTC Type:0 Mac:52:54:00:7b:de:75 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:newest-cni-467894 Clientid:01:52:54:00:7b:de:75}
	I1030 20:06:38.382328  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined IP address 192.168.50.214 and MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:38.382503  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHPort
	I1030 20:06:38.382698  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHKeyPath
	I1030 20:06:38.382850  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHKeyPath
	I1030 20:06:38.382986  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHUsername
	I1030 20:06:38.383123  454274 main.go:141] libmachine: Using SSH client type: native
	I1030 20:06:38.383293  454274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I1030 20:06:38.383306  454274 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 20:06:38.482786  454274 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730318798.456999074
	
	I1030 20:06:38.482811  454274 fix.go:216] guest clock: 1730318798.456999074
	I1030 20:06:38.482819  454274 fix.go:229] Guest: 2024-10-30 20:06:38.456999074 +0000 UTC Remote: 2024-10-30 20:06:38.379282102 +0000 UTC m=+19.581843805 (delta=77.716972ms)
	I1030 20:06:38.482843  454274 fix.go:200] guest clock delta is within tolerance: 77.716972ms
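
The clock check runs date +%s.%N on the guest, parses the result, and compares it against the host's wall clock; here the ~78ms delta is accepted. A small sketch of that parse-and-compare step (the 2s tolerance below is an assumption; the log only states the delta is within tolerance):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock converts "seconds.nanoseconds" output from `date +%s.%N`
    // into a time.Time. %N always prints nine digits, so the fractional part
    // maps directly onto nanoseconds.
    func parseGuestClock(out string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1730318798.456999074") // value from the log above
    	if err != nil {
    		panic(err)
    	}
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	tolerance := 2 * time.Second // assumed tolerance for this sketch
    	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
    }
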
	I1030 20:06:38.482851  454274 start.go:83] releasing machines lock for "newest-cni-467894", held for 19.5359041s
	I1030 20:06:38.482873  454274 main.go:141] libmachine: (newest-cni-467894) Calling .DriverName
	I1030 20:06:38.483148  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetIP
	I1030 20:06:38.485848  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:38.486296  454274 main.go:141] libmachine: (newest-cni-467894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:de:75", ip: ""} in network mk-newest-cni-467894: {Iface:virbr2 ExpiryTime:2024-10-30 21:05:35 +0000 UTC Type:0 Mac:52:54:00:7b:de:75 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:newest-cni-467894 Clientid:01:52:54:00:7b:de:75}
	I1030 20:06:38.486334  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined IP address 192.168.50.214 and MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:38.486534  454274 main.go:141] libmachine: (newest-cni-467894) Calling .DriverName
	I1030 20:06:38.487079  454274 main.go:141] libmachine: (newest-cni-467894) Calling .DriverName
	I1030 20:06:38.487251  454274 main.go:141] libmachine: (newest-cni-467894) Calling .DriverName
	I1030 20:06:38.487332  454274 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 20:06:38.487405  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHHostname
	I1030 20:06:38.487454  454274 ssh_runner.go:195] Run: cat /version.json
	I1030 20:06:38.487479  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHHostname
	I1030 20:06:38.489871  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:38.490115  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:38.490259  454274 main.go:141] libmachine: (newest-cni-467894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:de:75", ip: ""} in network mk-newest-cni-467894: {Iface:virbr2 ExpiryTime:2024-10-30 21:05:35 +0000 UTC Type:0 Mac:52:54:00:7b:de:75 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:newest-cni-467894 Clientid:01:52:54:00:7b:de:75}
	I1030 20:06:38.490282  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined IP address 192.168.50.214 and MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:38.490433  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHPort
	I1030 20:06:38.490521  454274 main.go:141] libmachine: (newest-cni-467894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:de:75", ip: ""} in network mk-newest-cni-467894: {Iface:virbr2 ExpiryTime:2024-10-30 21:05:35 +0000 UTC Type:0 Mac:52:54:00:7b:de:75 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:newest-cni-467894 Clientid:01:52:54:00:7b:de:75}
	I1030 20:06:38.490546  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined IP address 192.168.50.214 and MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:38.490612  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHKeyPath
	I1030 20:06:38.490720  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHPort
	I1030 20:06:38.490797  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHUsername
	I1030 20:06:38.490881  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHKeyPath
	I1030 20:06:38.490943  454274 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/newest-cni-467894/id_rsa Username:docker}
	I1030 20:06:38.491008  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHUsername
	I1030 20:06:38.491150  454274 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/newest-cni-467894/id_rsa Username:docker}
	I1030 20:06:38.590111  454274 ssh_runner.go:195] Run: systemctl --version
	I1030 20:06:38.595705  454274 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 20:06:38.743954  454274 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 20:06:38.749859  454274 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 20:06:38.749938  454274 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 20:06:38.767117  454274 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 20:06:38.767146  454274 start.go:495] detecting cgroup driver to use...
	I1030 20:06:38.767234  454274 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 20:06:38.784162  454274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 20:06:38.798005  454274 docker.go:217] disabling cri-docker service (if available) ...
	I1030 20:06:38.798068  454274 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 20:06:38.811179  454274 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 20:06:38.824600  454274 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 20:06:38.944940  454274 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 20:06:39.091059  454274 docker.go:233] disabling docker service ...
	I1030 20:06:39.091130  454274 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 20:06:39.105180  454274 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 20:06:39.117877  454274 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 20:06:39.244611  454274 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 20:06:39.356282  454274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 20:06:39.370081  454274 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 20:06:39.387755  454274 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1030 20:06:39.387825  454274 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 20:06:39.398010  454274 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 20:06:39.398087  454274 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 20:06:39.408337  454274 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 20:06:39.418480  454274 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 20:06:39.428579  454274 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 20:06:39.438726  454274 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 20:06:39.448848  454274 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 20:06:39.465008  454274 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
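
The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin pause_image to registry.k8s.io/pause:3.10, switch cgroup_manager to cgroupfs, set conmon_cgroup to "pod", and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. The first of those edits, done in Go instead of sed, would look roughly like this (illustrative only):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	// Same substitution as: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
    	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
    	out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
    	if err := os.WriteFile(path, out, 0644); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
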
	I1030 20:06:39.475236  454274 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 20:06:39.484352  454274 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 20:06:39.484386  454274 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 20:06:39.496909  454274 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 20:06:39.506507  454274 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 20:06:39.612081  454274 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1030 20:06:39.707205  454274 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 20:06:39.707300  454274 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 20:06:39.712091  454274 start.go:563] Will wait 60s for crictl version
	I1030 20:06:39.712154  454274 ssh_runner.go:195] Run: which crictl
	I1030 20:06:39.715766  454274 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 20:06:39.750520  454274 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
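
Two waits follow the crio restart: up to 60s for the /var/run/crio/crio.sock socket to exist, and up to 60s for crictl version to answer (it reports cri-o 1.29.1 above). A bare-bones standalone sketch of the first wait:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls until the path exists and is a unix socket, or the
    // timeout elapses.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Println(err)
    		os.Exit(1)
    	}
    	fmt.Println("socket is ready")
    }
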
	I1030 20:06:39.750617  454274 ssh_runner.go:195] Run: crio --version
	I1030 20:06:39.779102  454274 ssh_runner.go:195] Run: crio --version
	I1030 20:06:39.809101  454274 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1030 20:06:39.810393  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetIP
	I1030 20:06:39.813174  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:39.813505  454274 main.go:141] libmachine: (newest-cni-467894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:de:75", ip: ""} in network mk-newest-cni-467894: {Iface:virbr2 ExpiryTime:2024-10-30 21:05:35 +0000 UTC Type:0 Mac:52:54:00:7b:de:75 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:newest-cni-467894 Clientid:01:52:54:00:7b:de:75}
	I1030 20:06:39.813528  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined IP address 192.168.50.214 and MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:39.813751  454274 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1030 20:06:39.817782  454274 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 20:06:39.832640  454274 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1030 20:06:39.833887  454274 kubeadm.go:883] updating cluster {Name:newest-cni-467894 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-467894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.214 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1030 20:06:39.834030  454274 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 20:06:39.834100  454274 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 20:06:39.871024  454274 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1030 20:06:39.871100  454274 ssh_runner.go:195] Run: which lz4
	I1030 20:06:39.875157  454274 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1030 20:06:39.879128  454274 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1030 20:06:39.879160  454274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1030 20:06:41.198079  454274 crio.go:462] duration metric: took 1.322953749s to copy over tarball
	I1030 20:06:41.198160  454274 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1030 20:06:43.347130  454274 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.148938898s)
	I1030 20:06:43.347158  454274 crio.go:469] duration metric: took 2.14904466s to extract the tarball
	I1030 20:06:43.347167  454274 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1030 20:06:43.384607  454274 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 20:06:43.439162  454274 crio.go:514] all images are preloaded for cri-o runtime.
	I1030 20:06:43.439186  454274 cache_images.go:84] Images are preloaded, skipping loading
	I1030 20:06:43.439194  454274 kubeadm.go:934] updating node { 192.168.50.214 8443 v1.31.2 crio true true} ...
	I1030 20:06:43.439306  454274 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-467894 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:newest-cni-467894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1030 20:06:43.439372  454274 ssh_runner.go:195] Run: crio config
	I1030 20:06:43.489986  454274 cni.go:84] Creating CNI manager for ""
	I1030 20:06:43.490012  454274 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 20:06:43.490028  454274 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1030 20:06:43.490060  454274 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.214 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-467894 NodeName:newest-cni-467894 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.214"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1030 20:06:43.490249  454274 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.214
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-467894"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.214"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.214"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
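
The generated kubeadm.yaml above is one file holding four YAML documents: an InitConfiguration and a ClusterConfiguration (kubeadm.k8s.io/v1beta4), a KubeletConfiguration, and a KubeProxyConfiguration. If you wanted to split it back apart, a multi-document decoder does the job; a sketch assuming the gopkg.in/yaml.v3 package:

    package main

    import (
    	"errors"
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	// yaml.Decoder reads one "---"-separated document per Decode call.
    	dec := yaml.NewDecoder(f)
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err != nil {
    			if errors.Is(err, io.EOF) {
    				break
    			}
    			panic(err)
    		}
    		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
    	}
    }
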
	
	I1030 20:06:43.490328  454274 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1030 20:06:43.500133  454274 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 20:06:43.500208  454274 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1030 20:06:43.509541  454274 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I1030 20:06:43.525751  454274 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 20:06:43.541669  454274 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2487 bytes)
	I1030 20:06:43.557669  454274 ssh_runner.go:195] Run: grep 192.168.50.214	control-plane.minikube.internal$ /etc/hosts
	I1030 20:06:43.561532  454274 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.214	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 20:06:43.573167  454274 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 20:06:43.693948  454274 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 20:06:43.712364  454274 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/newest-cni-467894 for IP: 192.168.50.214
	I1030 20:06:43.712394  454274 certs.go:194] generating shared ca certs ...
	I1030 20:06:43.712417  454274 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 20:06:43.712608  454274 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 20:06:43.712663  454274 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 20:06:43.712675  454274 certs.go:256] generating profile certs ...
	I1030 20:06:43.712783  454274 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/newest-cni-467894/client.key
	I1030 20:06:43.712863  454274 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/newest-cni-467894/apiserver.key.c6aaeabf
	I1030 20:06:43.712925  454274 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/newest-cni-467894/proxy-client.key
	I1030 20:06:43.713090  454274 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem (1338 bytes)
	W1030 20:06:43.713138  454274 certs.go:480] ignoring /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144_empty.pem, impossibly tiny 0 bytes
	I1030 20:06:43.713148  454274 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 20:06:43.713188  454274 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 20:06:43.713221  454274 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 20:06:43.713250  454274 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 20:06:43.713303  454274 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 20:06:43.713962  454274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 20:06:43.754697  454274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 20:06:43.780957  454274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 20:06:43.817776  454274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 20:06:43.848884  454274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/newest-cni-467894/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1030 20:06:43.882466  454274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/newest-cni-467894/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1030 20:06:43.905408  454274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/newest-cni-467894/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 20:06:43.929396  454274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/newest-cni-467894/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1030 20:06:43.953184  454274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 20:06:43.975807  454274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem --> /usr/share/ca-certificates/389144.pem (1338 bytes)
	I1030 20:06:43.998640  454274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /usr/share/ca-certificates/3891442.pem (1708 bytes)
	I1030 20:06:44.021118  454274 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1030 20:06:44.036859  454274 ssh_runner.go:195] Run: openssl version
	I1030 20:06:44.042509  454274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/389144.pem && ln -fs /usr/share/ca-certificates/389144.pem /etc/ssl/certs/389144.pem"
	I1030 20:06:44.053340  454274 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/389144.pem
	I1030 20:06:44.057681  454274 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 20:06:44.057740  454274 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/389144.pem
	I1030 20:06:44.063614  454274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/389144.pem /etc/ssl/certs/51391683.0"
	I1030 20:06:44.074437  454274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3891442.pem && ln -fs /usr/share/ca-certificates/3891442.pem /etc/ssl/certs/3891442.pem"
	I1030 20:06:44.085023  454274 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3891442.pem
	I1030 20:06:44.089237  454274 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 20:06:44.089282  454274 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3891442.pem
	I1030 20:06:44.094667  454274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3891442.pem /etc/ssl/certs/3ec20f2e.0"
	I1030 20:06:44.105226  454274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 20:06:44.115714  454274 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 20:06:44.120002  454274 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 20:06:44.120055  454274 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 20:06:44.125540  454274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 20:06:44.136147  454274 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 20:06:44.140364  454274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1030 20:06:44.145963  454274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1030 20:06:44.151561  454274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1030 20:06:44.157183  454274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1030 20:06:44.162587  454274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1030 20:06:44.168067  454274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
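
Each openssl x509 ... -checkend 86400 call above asks whether the certificate expires within the next 24 hours (non-zero exit status if it does), presumably so the restart path can regenerate anything that is about to expire. The same test expressed in Go, as a hedged sketch:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM-encoded certificate at certPath
    // expires within the given window (the Go analogue of `-checkend`).
    func expiresWithin(certPath string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(certPath)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", certPath)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("expires within 24h:", soon)
    }
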
	I1030 20:06:44.173429  454274 kubeadm.go:392] StartCluster: {Name:newest-cni-467894 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-467894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.214 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 20:06:44.173508  454274 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1030 20:06:44.173569  454274 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 20:06:44.208896  454274 cri.go:89] found id: ""
	I1030 20:06:44.209001  454274 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1030 20:06:44.219347  454274 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1030 20:06:44.219376  454274 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1030 20:06:44.219434  454274 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1030 20:06:44.228963  454274 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1030 20:06:44.229541  454274 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-467894" does not appear in /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 20:06:44.229831  454274 kubeconfig.go:62] /home/jenkins/minikube-integration/19883-381834/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-467894" cluster setting kubeconfig missing "newest-cni-467894" context setting]
	I1030 20:06:44.230225  454274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/kubeconfig: {Name:mkad9ac9beb9742fc1a420e6d4412db8b65962e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 20:06:44.231517  454274 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1030 20:06:44.240883  454274 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.214
	I1030 20:06:44.240909  454274 kubeadm.go:1160] stopping kube-system containers ...
	I1030 20:06:44.240921  454274 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1030 20:06:44.240968  454274 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 20:06:44.278604  454274 cri.go:89] found id: ""
	I1030 20:06:44.278677  454274 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1030 20:06:44.295558  454274 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 20:06:44.305139  454274 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 20:06:44.305155  454274 kubeadm.go:157] found existing configuration files:
	
	I1030 20:06:44.305197  454274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 20:06:44.314212  454274 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 20:06:44.314273  454274 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 20:06:44.323406  454274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 20:06:44.332474  454274 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 20:06:44.332530  454274 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 20:06:44.341872  454274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 20:06:44.350752  454274 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 20:06:44.350803  454274 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 20:06:44.359848  454274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 20:06:44.368755  454274 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 20:06:44.368808  454274 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1030 20:06:44.377999  454274 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 20:06:44.387282  454274 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 20:06:44.488761  454274 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 20:06:45.346419  454274 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1030 20:06:45.564071  454274 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 20:06:45.652813  454274 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1030 20:06:45.765929  454274 api_server.go:52] waiting for apiserver process to appear ...
	I1030 20:06:45.766027  454274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 20:06:46.266434  454274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 20:06:46.766869  454274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 20:06:47.267032  454274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 20:06:47.767048  454274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 20:06:47.789816  454274 api_server.go:72] duration metric: took 2.023896975s to wait for apiserver process to appear ...
	I1030 20:06:47.789850  454274 api_server.go:88] waiting for apiserver healthz status ...
	I1030 20:06:47.789882  454274 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I1030 20:06:47.790422  454274 api_server.go:269] stopped: https://192.168.50.214:8443/healthz: Get "https://192.168.50.214:8443/healthz": dial tcp 192.168.50.214:8443: connect: connection refused
	I1030 20:06:48.290214  454274 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I1030 20:06:50.843411  454274 api_server.go:279] https://192.168.50.214:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1030 20:06:50.843455  454274 api_server.go:103] status: https://192.168.50.214:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1030 20:06:50.843470  454274 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I1030 20:06:50.891275  454274 api_server.go:279] https://192.168.50.214:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1030 20:06:50.891305  454274 api_server.go:103] status: https://192.168.50.214:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1030 20:06:51.290803  454274 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I1030 20:06:51.296219  454274 api_server.go:279] https://192.168.50.214:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 20:06:51.296244  454274 api_server.go:103] status: https://192.168.50.214:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 20:06:51.790785  454274 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I1030 20:06:51.797220  454274 api_server.go:279] https://192.168.50.214:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 20:06:51.797252  454274 api_server.go:103] status: https://192.168.50.214:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 20:06:52.290854  454274 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I1030 20:06:52.297074  454274 api_server.go:279] https://192.168.50.214:8443/healthz returned 200:
	ok
	I1030 20:06:52.310453  454274 api_server.go:141] control plane version: v1.31.2
	I1030 20:06:52.310513  454274 api_server.go:131] duration metric: took 4.520653584s to wait for apiserver health ...
	I1030 20:06:52.310527  454274 cni.go:84] Creating CNI manager for ""
	I1030 20:06:52.310538  454274 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 20:06:52.312356  454274 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1030 20:06:52.313757  454274 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1030 20:06:52.329846  454274 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1030 20:06:52.356302  454274 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 20:06:52.379471  454274 system_pods.go:59] 8 kube-system pods found
	I1030 20:06:52.379511  454274 system_pods.go:61] "coredns-7c65d6cfc9-c5dhm" [13346252-3137-4269-a751-43bf15447cf1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1030 20:06:52.379522  454274 system_pods.go:61] "etcd-newest-cni-467894" [02ad7909-d679-424f-a675-9b5e46591f43] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1030 20:06:52.379532  454274 system_pods.go:61] "kube-apiserver-newest-cni-467894" [7be09dc8-0817-48b8-b507-000532f979f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1030 20:06:52.379541  454274 system_pods.go:61] "kube-controller-manager-newest-cni-467894" [a2c43507-d1a7-4675-af56-d38c9587dc65] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1030 20:06:52.379549  454274 system_pods.go:61] "kube-proxy-s7j9r" [398b09fc-5138-44cc-bd9c-6c4030a79d02] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1030 20:06:52.379557  454274 system_pods.go:61] "kube-scheduler-newest-cni-467894" [fbe99ddc-adc4-420f-92cc-c8667f9f9e4f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1030 20:06:52.379569  454274 system_pods.go:61] "metrics-server-6867b74b74-gfb5l" [644ff5c8-63b9-45ab-992c-6a8a26e89d0d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 20:06:52.379578  454274 system_pods.go:61] "storage-provisioner" [368833fe-5b11-4254-8c0b-5afe112cabde] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1030 20:06:52.379590  454274 system_pods.go:74] duration metric: took 23.253791ms to wait for pod list to return data ...
	I1030 20:06:52.379603  454274 node_conditions.go:102] verifying NodePressure condition ...
	I1030 20:06:52.393355  454274 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 20:06:52.393386  454274 node_conditions.go:123] node cpu capacity is 2
	I1030 20:06:52.393400  454274 node_conditions.go:105] duration metric: took 13.791378ms to run NodePressure ...
	I1030 20:06:52.393424  454274 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 20:06:52.729738  454274 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1030 20:06:52.747681  454274 ops.go:34] apiserver oom_adj: -16
	I1030 20:06:52.747710  454274 kubeadm.go:597] duration metric: took 8.52832511s to restartPrimaryControlPlane
	I1030 20:06:52.747723  454274 kubeadm.go:394] duration metric: took 8.57429957s to StartCluster
	I1030 20:06:52.747742  454274 settings.go:142] acquiring lock: {Name:mk3778f95b8c1775512fd8244c173f81fde2f8a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 20:06:52.747828  454274 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 20:06:52.749097  454274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/kubeconfig: {Name:mkad9ac9beb9742fc1a420e6d4412db8b65962e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 20:06:52.749393  454274 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.214 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 20:06:52.749488  454274 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1030 20:06:52.749598  454274 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-467894"
	I1030 20:06:52.749619  454274 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-467894"
	I1030 20:06:52.749625  454274 config.go:182] Loaded profile config "newest-cni-467894": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 20:06:52.749627  454274 addons.go:69] Setting default-storageclass=true in profile "newest-cni-467894"
	I1030 20:06:52.749661  454274 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-467894"
	I1030 20:06:52.749676  454274 addons.go:69] Setting dashboard=true in profile "newest-cni-467894"
	I1030 20:06:52.749634  454274 addons.go:69] Setting metrics-server=true in profile "newest-cni-467894"
	I1030 20:06:52.749749  454274 addons.go:234] Setting addon metrics-server=true in "newest-cni-467894"
	W1030 20:06:52.749767  454274 addons.go:243] addon metrics-server should already be in state true
	I1030 20:06:52.749804  454274 host.go:66] Checking if "newest-cni-467894" exists ...
	W1030 20:06:52.749629  454274 addons.go:243] addon storage-provisioner should already be in state true
	I1030 20:06:52.749862  454274 host.go:66] Checking if "newest-cni-467894" exists ...
	I1030 20:06:52.749716  454274 addons.go:234] Setting addon dashboard=true in "newest-cni-467894"
	W1030 20:06:52.749927  454274 addons.go:243] addon dashboard should already be in state true
	I1030 20:06:52.749964  454274 host.go:66] Checking if "newest-cni-467894" exists ...
	I1030 20:06:52.750104  454274 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 20:06:52.750157  454274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 20:06:52.750155  454274 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 20:06:52.750272  454274 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 20:06:52.750321  454274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 20:06:52.750355  454274 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 20:06:52.750274  454274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 20:06:52.750386  454274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 20:06:52.751139  454274 out.go:177] * Verifying Kubernetes components...
	I1030 20:06:52.752679  454274 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 20:06:52.766584  454274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40917
	I1030 20:06:52.766595  454274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40509
	I1030 20:06:52.767142  454274 main.go:141] libmachine: () Calling .GetVersion
	I1030 20:06:52.767261  454274 main.go:141] libmachine: () Calling .GetVersion
	I1030 20:06:52.767712  454274 main.go:141] libmachine: Using API Version  1
	I1030 20:06:52.767739  454274 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 20:06:52.767810  454274 main.go:141] libmachine: Using API Version  1
	I1030 20:06:52.767832  454274 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 20:06:52.768161  454274 main.go:141] libmachine: () Calling .GetMachineName
	I1030 20:06:52.768166  454274 main.go:141] libmachine: () Calling .GetMachineName
	I1030 20:06:52.768193  454274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38163
	I1030 20:06:52.768539  454274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33059
	I1030 20:06:52.768657  454274 main.go:141] libmachine: () Calling .GetVersion
	I1030 20:06:52.768859  454274 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 20:06:52.768908  454274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 20:06:52.768922  454274 main.go:141] libmachine: () Calling .GetVersion
	I1030 20:06:52.768859  454274 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 20:06:52.769094  454274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 20:06:52.769071  454274 main.go:141] libmachine: Using API Version  1
	I1030 20:06:52.769129  454274 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 20:06:52.769366  454274 main.go:141] libmachine: Using API Version  1
	I1030 20:06:52.769384  454274 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 20:06:52.769765  454274 main.go:141] libmachine: () Calling .GetMachineName
	I1030 20:06:52.769779  454274 main.go:141] libmachine: () Calling .GetMachineName
	I1030 20:06:52.770009  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetState
	I1030 20:06:52.770291  454274 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 20:06:52.770338  454274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 20:06:52.773016  454274 addons.go:234] Setting addon default-storageclass=true in "newest-cni-467894"
	W1030 20:06:52.773040  454274 addons.go:243] addon default-storageclass should already be in state true
	I1030 20:06:52.773078  454274 host.go:66] Checking if "newest-cni-467894" exists ...
	I1030 20:06:52.773394  454274 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 20:06:52.773435  454274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 20:06:52.786966  454274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41719
	I1030 20:06:52.787604  454274 main.go:141] libmachine: () Calling .GetVersion
	I1030 20:06:52.788589  454274 main.go:141] libmachine: Using API Version  1
	I1030 20:06:52.788620  454274 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 20:06:52.788680  454274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38587
	I1030 20:06:52.789146  454274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41365
	I1030 20:06:52.789325  454274 main.go:141] libmachine: () Calling .GetMachineName
	I1030 20:06:52.789334  454274 main.go:141] libmachine: () Calling .GetVersion
	I1030 20:06:52.789406  454274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39945
	I1030 20:06:52.789890  454274 main.go:141] libmachine: () Calling .GetVersion
	I1030 20:06:52.789901  454274 main.go:141] libmachine: () Calling .GetVersion
	I1030 20:06:52.789966  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetState
	I1030 20:06:52.789974  454274 main.go:141] libmachine: Using API Version  1
	I1030 20:06:52.789988  454274 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 20:06:52.790436  454274 main.go:141] libmachine: () Calling .GetMachineName
	I1030 20:06:52.790464  454274 main.go:141] libmachine: Using API Version  1
	I1030 20:06:52.790465  454274 main.go:141] libmachine: Using API Version  1
	I1030 20:06:52.790482  454274 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 20:06:52.790510  454274 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 20:06:52.790891  454274 main.go:141] libmachine: () Calling .GetMachineName
	I1030 20:06:52.790931  454274 main.go:141] libmachine: () Calling .GetMachineName
	I1030 20:06:52.791308  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetState
	I1030 20:06:52.791319  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetState
	I1030 20:06:52.791750  454274 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 20:06:52.791795  454274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 20:06:52.792070  454274 main.go:141] libmachine: (newest-cni-467894) Calling .DriverName
	I1030 20:06:52.793129  454274 main.go:141] libmachine: (newest-cni-467894) Calling .DriverName
	I1030 20:06:52.793497  454274 main.go:141] libmachine: (newest-cni-467894) Calling .DriverName
	I1030 20:06:52.793898  454274 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1030 20:06:52.795119  454274 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 20:06:52.795128  454274 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1030 20:06:52.796501  454274 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1030 20:06:52.796520  454274 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1030 20:06:52.796536  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHHostname
	I1030 20:06:52.796502  454274 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1030 20:06:52.796610  454274 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 20:06:52.796625  454274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1030 20:06:52.796643  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHHostname
	I1030 20:06:52.797951  454274 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1030 20:06:52.797968  454274 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1030 20:06:52.797988  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHHostname
	I1030 20:06:52.799963  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:52.800459  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:52.800609  454274 main.go:141] libmachine: (newest-cni-467894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:de:75", ip: ""} in network mk-newest-cni-467894: {Iface:virbr2 ExpiryTime:2024-10-30 21:05:35 +0000 UTC Type:0 Mac:52:54:00:7b:de:75 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:newest-cni-467894 Clientid:01:52:54:00:7b:de:75}
	I1030 20:06:52.800636  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined IP address 192.168.50.214 and MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:52.800877  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHPort
	I1030 20:06:52.801054  454274 main.go:141] libmachine: (newest-cni-467894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:de:75", ip: ""} in network mk-newest-cni-467894: {Iface:virbr2 ExpiryTime:2024-10-30 21:05:35 +0000 UTC Type:0 Mac:52:54:00:7b:de:75 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:newest-cni-467894 Clientid:01:52:54:00:7b:de:75}
	I1030 20:06:52.801080  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined IP address 192.168.50.214 and MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:52.801142  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHKeyPath
	I1030 20:06:52.801206  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHPort
	I1030 20:06:52.801339  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHUsername
	I1030 20:06:52.801383  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHKeyPath
	I1030 20:06:52.801514  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHUsername
	I1030 20:06:52.801508  454274 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/newest-cni-467894/id_rsa Username:docker}
	I1030 20:06:52.801635  454274 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/newest-cni-467894/id_rsa Username:docker}
	I1030 20:06:52.801901  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:52.802303  454274 main.go:141] libmachine: (newest-cni-467894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:de:75", ip: ""} in network mk-newest-cni-467894: {Iface:virbr2 ExpiryTime:2024-10-30 21:05:35 +0000 UTC Type:0 Mac:52:54:00:7b:de:75 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:newest-cni-467894 Clientid:01:52:54:00:7b:de:75}
	I1030 20:06:52.802355  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined IP address 192.168.50.214 and MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:52.802454  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHPort
	I1030 20:06:52.802643  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHKeyPath
	I1030 20:06:52.802782  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHUsername
	I1030 20:06:52.802947  454274 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/newest-cni-467894/id_rsa Username:docker}
	I1030 20:06:52.831801  454274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33125
	I1030 20:06:52.832342  454274 main.go:141] libmachine: () Calling .GetVersion
	I1030 20:06:52.832913  454274 main.go:141] libmachine: Using API Version  1
	I1030 20:06:52.832942  454274 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 20:06:52.833240  454274 main.go:141] libmachine: () Calling .GetMachineName
	I1030 20:06:52.833399  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetState
	I1030 20:06:52.835330  454274 main.go:141] libmachine: (newest-cni-467894) Calling .DriverName
	I1030 20:06:52.835566  454274 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1030 20:06:52.835586  454274 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1030 20:06:52.835605  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHHostname
	I1030 20:06:52.838479  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:52.838896  454274 main.go:141] libmachine: (newest-cni-467894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:de:75", ip: ""} in network mk-newest-cni-467894: {Iface:virbr2 ExpiryTime:2024-10-30 21:05:35 +0000 UTC Type:0 Mac:52:54:00:7b:de:75 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:newest-cni-467894 Clientid:01:52:54:00:7b:de:75}
	I1030 20:06:52.838921  454274 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined IP address 192.168.50.214 and MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:06:52.839139  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHPort
	I1030 20:06:52.839354  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHKeyPath
	I1030 20:06:52.839516  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHUsername
	I1030 20:06:52.839649  454274 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/newest-cni-467894/id_rsa Username:docker}
	I1030 20:06:53.016864  454274 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 20:06:53.045996  454274 api_server.go:52] waiting for apiserver process to appear ...
	I1030 20:06:53.046101  454274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 20:06:53.070980  454274 api_server.go:72] duration metric: took 321.546118ms to wait for apiserver process to appear ...
	I1030 20:06:53.071038  454274 api_server.go:88] waiting for apiserver healthz status ...
	I1030 20:06:53.071060  454274 api_server.go:253] Checking apiserver healthz at https://192.168.50.214:8443/healthz ...
	I1030 20:06:53.077363  454274 api_server.go:279] https://192.168.50.214:8443/healthz returned 200:
	ok
	I1030 20:06:53.078297  454274 api_server.go:141] control plane version: v1.31.2
	I1030 20:06:53.078326  454274 api_server.go:131] duration metric: took 7.279883ms to wait for apiserver health ...
	I1030 20:06:53.078335  454274 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 20:06:53.083922  454274 system_pods.go:59] 8 kube-system pods found
	I1030 20:06:53.083954  454274 system_pods.go:61] "coredns-7c65d6cfc9-c5dhm" [13346252-3137-4269-a751-43bf15447cf1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1030 20:06:53.083973  454274 system_pods.go:61] "etcd-newest-cni-467894" [02ad7909-d679-424f-a675-9b5e46591f43] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1030 20:06:53.083991  454274 system_pods.go:61] "kube-apiserver-newest-cni-467894" [7be09dc8-0817-48b8-b507-000532f979f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1030 20:06:53.084048  454274 system_pods.go:61] "kube-controller-manager-newest-cni-467894" [a2c43507-d1a7-4675-af56-d38c9587dc65] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1030 20:06:53.084062  454274 system_pods.go:61] "kube-proxy-s7j9r" [398b09fc-5138-44cc-bd9c-6c4030a79d02] Running
	I1030 20:06:53.084076  454274 system_pods.go:61] "kube-scheduler-newest-cni-467894" [fbe99ddc-adc4-420f-92cc-c8667f9f9e4f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1030 20:06:53.084087  454274 system_pods.go:61] "metrics-server-6867b74b74-gfb5l" [644ff5c8-63b9-45ab-992c-6a8a26e89d0d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 20:06:53.084094  454274 system_pods.go:61] "storage-provisioner" [368833fe-5b11-4254-8c0b-5afe112cabde] Running
	I1030 20:06:53.084104  454274 system_pods.go:74] duration metric: took 5.76148ms to wait for pod list to return data ...
	I1030 20:06:53.084116  454274 default_sa.go:34] waiting for default service account to be created ...
	I1030 20:06:53.086023  454274 default_sa.go:45] found service account: "default"
	I1030 20:06:53.086041  454274 default_sa.go:55] duration metric: took 1.917031ms for default service account to be created ...
	I1030 20:06:53.086053  454274 kubeadm.go:582] duration metric: took 336.623039ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1030 20:06:53.086078  454274 node_conditions.go:102] verifying NodePressure condition ...
	I1030 20:06:53.090229  454274 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 20:06:53.090260  454274 node_conditions.go:123] node cpu capacity is 2
	I1030 20:06:53.090270  454274 node_conditions.go:105] duration metric: took 4.18439ms to run NodePressure ...
	I1030 20:06:53.090283  454274 start.go:241] waiting for startup goroutines ...
	I1030 20:06:53.166857  454274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 20:06:53.189389  454274 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1030 20:06:53.189430  454274 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1030 20:06:53.199433  454274 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1030 20:06:53.199461  454274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1030 20:06:53.202007  454274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1030 20:06:53.227304  454274 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1030 20:06:53.227329  454274 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1030 20:06:53.260369  454274 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1030 20:06:53.260408  454274 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1030 20:06:53.278420  454274 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1030 20:06:53.278445  454274 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1030 20:06:53.298308  454274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1030 20:06:53.325266  454274 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1030 20:06:53.325301  454274 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1030 20:06:53.380873  454274 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1030 20:06:53.380901  454274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1030 20:06:53.442951  454274 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1030 20:06:53.442978  454274 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1030 20:06:53.485530  454274 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1030 20:06:53.485562  454274 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1030 20:06:53.585044  454274 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1030 20:06:53.585071  454274 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1030 20:06:53.641374  454274 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1030 20:06:53.641434  454274 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1030 20:06:53.661022  454274 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1030 20:06:53.661051  454274 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1030 20:06:53.678003  454274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1030 20:06:54.865205  454274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.698288703s)
	I1030 20:06:54.865311  454274 main.go:141] libmachine: Making call to close driver server
	I1030 20:06:54.865314  454274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.56697102s)
	I1030 20:06:54.865222  454274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.663182678s)
	I1030 20:06:54.865351  454274 main.go:141] libmachine: Making call to close driver server
	I1030 20:06:54.865363  454274 main.go:141] libmachine: (newest-cni-467894) Calling .Close
	I1030 20:06:54.865367  454274 main.go:141] libmachine: Making call to close driver server
	I1030 20:06:54.865388  454274 main.go:141] libmachine: (newest-cni-467894) Calling .Close
	I1030 20:06:54.865326  454274 main.go:141] libmachine: (newest-cni-467894) Calling .Close
	I1030 20:06:54.865772  454274 main.go:141] libmachine: Successfully made call to close driver server
	I1030 20:06:54.865825  454274 main.go:141] libmachine: (newest-cni-467894) DBG | Closing plugin on server side
	I1030 20:06:54.865828  454274 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 20:06:54.865848  454274 main.go:141] libmachine: (newest-cni-467894) DBG | Closing plugin on server side
	I1030 20:06:54.865857  454274 main.go:141] libmachine: Making call to close driver server
	I1030 20:06:54.865864  454274 main.go:141] libmachine: (newest-cni-467894) Calling .Close
	I1030 20:06:54.865890  454274 main.go:141] libmachine: Successfully made call to close driver server
	I1030 20:06:54.865934  454274 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 20:06:54.865954  454274 main.go:141] libmachine: Successfully made call to close driver server
	I1030 20:06:54.865969  454274 main.go:141] libmachine: Making call to close driver server
	I1030 20:06:54.865987  454274 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 20:06:54.866019  454274 main.go:141] libmachine: Making call to close driver server
	I1030 20:06:54.866033  454274 main.go:141] libmachine: (newest-cni-467894) Calling .Close
	I1030 20:06:54.865990  454274 main.go:141] libmachine: (newest-cni-467894) Calling .Close
	I1030 20:06:54.866112  454274 main.go:141] libmachine: Successfully made call to close driver server
	I1030 20:06:54.866128  454274 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 20:06:54.866140  454274 main.go:141] libmachine: (newest-cni-467894) DBG | Closing plugin on server side
	I1030 20:06:54.866374  454274 main.go:141] libmachine: (newest-cni-467894) DBG | Closing plugin on server side
	I1030 20:06:54.866434  454274 main.go:141] libmachine: Successfully made call to close driver server
	I1030 20:06:54.866453  454274 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 20:06:54.866467  454274 addons.go:475] Verifying addon metrics-server=true in "newest-cni-467894"
	I1030 20:06:54.867558  454274 main.go:141] libmachine: Successfully made call to close driver server
	I1030 20:06:54.867576  454274 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 20:06:54.878460  454274 main.go:141] libmachine: Making call to close driver server
	I1030 20:06:54.878479  454274 main.go:141] libmachine: (newest-cni-467894) Calling .Close
	I1030 20:06:54.878742  454274 main.go:141] libmachine: Successfully made call to close driver server
	I1030 20:06:54.878756  454274 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 20:06:54.878784  454274 main.go:141] libmachine: (newest-cni-467894) DBG | Closing plugin on server side
	I1030 20:06:55.295234  454274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.617167596s)
	I1030 20:06:55.295335  454274 main.go:141] libmachine: Making call to close driver server
	I1030 20:06:55.295358  454274 main.go:141] libmachine: (newest-cni-467894) Calling .Close
	I1030 20:06:55.295671  454274 main.go:141] libmachine: Successfully made call to close driver server
	I1030 20:06:55.295691  454274 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 20:06:55.295702  454274 main.go:141] libmachine: Making call to close driver server
	I1030 20:06:55.295710  454274 main.go:141] libmachine: (newest-cni-467894) Calling .Close
	I1030 20:06:55.295730  454274 main.go:141] libmachine: (newest-cni-467894) DBG | Closing plugin on server side
	I1030 20:06:55.295978  454274 main.go:141] libmachine: Successfully made call to close driver server
	I1030 20:06:55.295994  454274 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 20:06:55.297575  454274 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-467894 addons enable metrics-server
	
	I1030 20:06:55.298995  454274 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I1030 20:06:55.300209  454274 addons.go:510] duration metric: took 2.550721378s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I1030 20:06:55.300250  454274 start.go:246] waiting for cluster config update ...
	I1030 20:06:55.300270  454274 start.go:255] writing updated cluster config ...
	I1030 20:06:55.300524  454274 ssh_runner.go:195] Run: rm -f paused
	I1030 20:06:55.349883  454274 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1030 20:06:55.351473  454274 out.go:177] * Done! kubectl is now configured to use "newest-cni-467894" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 30 20:06:56 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 20:06:56.429811780Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318816429783380,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eafb4c68-cc04-4cb5-89e1-c06ed020d737 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:06:56 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 20:06:56.430457339Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4c32717e-e8af-4b25-b026-a167e06726e8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:06:56 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 20:06:56.430526314Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4c32717e-e8af-4b25-b026-a167e06726e8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:06:56 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 20:06:56.430722302Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034,PodSandboxId:9675a30e34cc5092451d45b848b57299c1581c8f6cc31126924ee1a698aeffe4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730317569588721723,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76805df9-1fbf-468d-a909-3b7c4f889e11,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9feb95ef951b0c048a9dac1a16f3e73c333c80795559f462f312c98fa790d25,PodSandboxId:b819629c91bdf3f182e9186b72a6b70768c85d131814f569b396b6c2b0532dd2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730317552403394466,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 82360fc1-575a-4dc5-86b6-54892c216d65,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2,PodSandboxId:23db14501b34e9e2871e2cdf47ed7566d0314e2c7f1fe448f7cd0fde972f8b84,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730317546502369327,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9w8m8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d285a845-e17e-4b87-837b-167dbd29f090,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6,PodSandboxId:c8cbceb7ff00c3459b78a4b3fcc781e513329f3abc6d66c8c9d95661a8cb0db0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730317538741541291,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tsr5q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60ad5830-1
638-4209-a2b3-ef78b1df3d34,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6,PodSandboxId:9675a30e34cc5092451d45b848b57299c1581c8f6cc31126924ee1a698aeffe4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730317538746409249,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76805df9-1fbf-468d-a909
-3b7c4f889e11,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34,PodSandboxId:3b907e7fb753fd91e80fe5ed64787842fcd779d64a891278222ad62576cb6b9f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730317534856221481,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-768989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: cf44a5e7947a9469929076700ac904d2,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf,PodSandboxId:0c31889e0a4babd9b6af9b4632d89838f8ee40f64ab91be3d2fb32707eb9b28a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730317534870620735,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-768989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 9e9660acdf7d90e392d828e83411e1b5,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5,PodSandboxId:c821469f94d41f3ce2f8baac7d57363e8e0a81d1f827896b1163ddf5fa11fa97,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730317534867443095,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-768989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e81a98548395acf8a88f8e2057e
b223,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f,PodSandboxId:1af13b544ca5a06b1f328cf6f9c19d4314b735fcecfc8cf957007cfff3a5d7d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730317534844606309,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-768989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e12c1bbb22b1fe080d147303dcbea
39,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4c32717e-e8af-4b25-b026-a167e06726e8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:06:56 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 20:06:56.473812092Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=54b17004-177c-4255-93d9-c83a4e109026 name=/runtime.v1.RuntimeService/Version
	Oct 30 20:06:56 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 20:06:56.473927118Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=54b17004-177c-4255-93d9-c83a4e109026 name=/runtime.v1.RuntimeService/Version
	Oct 30 20:06:56 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 20:06:56.476173707Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=db278822-7426-4a30-8958-72fe3f76804f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:06:56 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 20:06:56.477330635Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318816477282676,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=db278822-7426-4a30-8958-72fe3f76804f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:06:56 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 20:06:56.480888731Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=62166098-9ec7-40e8-ab4c-d9e173d3e6be name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:06:56 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 20:06:56.480979946Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=62166098-9ec7-40e8-ab4c-d9e173d3e6be name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:06:56 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 20:06:56.481290360Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034,PodSandboxId:9675a30e34cc5092451d45b848b57299c1581c8f6cc31126924ee1a698aeffe4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730317569588721723,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76805df9-1fbf-468d-a909-3b7c4f889e11,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9feb95ef951b0c048a9dac1a16f3e73c333c80795559f462f312c98fa790d25,PodSandboxId:b819629c91bdf3f182e9186b72a6b70768c85d131814f569b396b6c2b0532dd2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730317552403394466,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 82360fc1-575a-4dc5-86b6-54892c216d65,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2,PodSandboxId:23db14501b34e9e2871e2cdf47ed7566d0314e2c7f1fe448f7cd0fde972f8b84,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730317546502369327,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9w8m8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d285a845-e17e-4b87-837b-167dbd29f090,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6,PodSandboxId:c8cbceb7ff00c3459b78a4b3fcc781e513329f3abc6d66c8c9d95661a8cb0db0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730317538741541291,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tsr5q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60ad5830-1
638-4209-a2b3-ef78b1df3d34,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6,PodSandboxId:9675a30e34cc5092451d45b848b57299c1581c8f6cc31126924ee1a698aeffe4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730317538746409249,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76805df9-1fbf-468d-a909
-3b7c4f889e11,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34,PodSandboxId:3b907e7fb753fd91e80fe5ed64787842fcd779d64a891278222ad62576cb6b9f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730317534856221481,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-768989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: cf44a5e7947a9469929076700ac904d2,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf,PodSandboxId:0c31889e0a4babd9b6af9b4632d89838f8ee40f64ab91be3d2fb32707eb9b28a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730317534870620735,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-768989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 9e9660acdf7d90e392d828e83411e1b5,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5,PodSandboxId:c821469f94d41f3ce2f8baac7d57363e8e0a81d1f827896b1163ddf5fa11fa97,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730317534867443095,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-768989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e81a98548395acf8a88f8e2057e
b223,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f,PodSandboxId:1af13b544ca5a06b1f328cf6f9c19d4314b735fcecfc8cf957007cfff3a5d7d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730317534844606309,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-768989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e12c1bbb22b1fe080d147303dcbea
39,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=62166098-9ec7-40e8-ab4c-d9e173d3e6be name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:06:56 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 20:06:56.523459017Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=def9022a-ec51-420b-88e1-8fdd38802384 name=/runtime.v1.RuntimeService/Version
	Oct 30 20:06:56 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 20:06:56.523550294Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=def9022a-ec51-420b-88e1-8fdd38802384 name=/runtime.v1.RuntimeService/Version
	Oct 30 20:06:56 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 20:06:56.525412477Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=076fb805-9ece-4bdc-9664-233dc9aea0f6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:06:56 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 20:06:56.525852700Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318816525829554,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=076fb805-9ece-4bdc-9664-233dc9aea0f6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:06:56 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 20:06:56.526407626Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1ee49056-c1c1-4619-91a2-bd8971cb8c28 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:06:56 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 20:06:56.526478934Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1ee49056-c1c1-4619-91a2-bd8971cb8c28 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:06:56 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 20:06:56.526668839Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034,PodSandboxId:9675a30e34cc5092451d45b848b57299c1581c8f6cc31126924ee1a698aeffe4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730317569588721723,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76805df9-1fbf-468d-a909-3b7c4f889e11,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9feb95ef951b0c048a9dac1a16f3e73c333c80795559f462f312c98fa790d25,PodSandboxId:b819629c91bdf3f182e9186b72a6b70768c85d131814f569b396b6c2b0532dd2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730317552403394466,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 82360fc1-575a-4dc5-86b6-54892c216d65,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2,PodSandboxId:23db14501b34e9e2871e2cdf47ed7566d0314e2c7f1fe448f7cd0fde972f8b84,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730317546502369327,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9w8m8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d285a845-e17e-4b87-837b-167dbd29f090,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6,PodSandboxId:c8cbceb7ff00c3459b78a4b3fcc781e513329f3abc6d66c8c9d95661a8cb0db0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730317538741541291,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tsr5q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60ad5830-1
638-4209-a2b3-ef78b1df3d34,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6,PodSandboxId:9675a30e34cc5092451d45b848b57299c1581c8f6cc31126924ee1a698aeffe4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730317538746409249,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76805df9-1fbf-468d-a909
-3b7c4f889e11,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34,PodSandboxId:3b907e7fb753fd91e80fe5ed64787842fcd779d64a891278222ad62576cb6b9f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730317534856221481,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-768989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: cf44a5e7947a9469929076700ac904d2,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf,PodSandboxId:0c31889e0a4babd9b6af9b4632d89838f8ee40f64ab91be3d2fb32707eb9b28a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730317534870620735,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-768989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 9e9660acdf7d90e392d828e83411e1b5,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5,PodSandboxId:c821469f94d41f3ce2f8baac7d57363e8e0a81d1f827896b1163ddf5fa11fa97,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730317534867443095,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-768989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e81a98548395acf8a88f8e2057e
b223,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f,PodSandboxId:1af13b544ca5a06b1f328cf6f9c19d4314b735fcecfc8cf957007cfff3a5d7d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730317534844606309,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-768989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e12c1bbb22b1fe080d147303dcbea
39,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1ee49056-c1c1-4619-91a2-bd8971cb8c28 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:06:56 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 20:06:56.563644388Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d031b870-8f93-475c-8bd3-55b8a96722e7 name=/runtime.v1.RuntimeService/Version
	Oct 30 20:06:56 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 20:06:56.563778808Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d031b870-8f93-475c-8bd3-55b8a96722e7 name=/runtime.v1.RuntimeService/Version
	Oct 30 20:06:56 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 20:06:56.565626103Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ccff2bb7-fc91-43c9-94d3-b8bc4e4d6dcf name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:06:56 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 20:06:56.566272968Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318816566240909,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ccff2bb7-fc91-43c9-94d3-b8bc4e4d6dcf name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:06:56 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 20:06:56.566811968Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6793a3a8-be88-4a69-bc8a-aba573d452ca name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:06:56 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 20:06:56.566898430Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6793a3a8-be88-4a69-bc8a-aba573d452ca name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:06:56 default-k8s-diff-port-768989 crio[717]: time="2024-10-30 20:06:56.567139131Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034,PodSandboxId:9675a30e34cc5092451d45b848b57299c1581c8f6cc31126924ee1a698aeffe4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730317569588721723,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76805df9-1fbf-468d-a909-3b7c4f889e11,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9feb95ef951b0c048a9dac1a16f3e73c333c80795559f462f312c98fa790d25,PodSandboxId:b819629c91bdf3f182e9186b72a6b70768c85d131814f569b396b6c2b0532dd2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730317552403394466,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 82360fc1-575a-4dc5-86b6-54892c216d65,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2,PodSandboxId:23db14501b34e9e2871e2cdf47ed7566d0314e2c7f1fe448f7cd0fde972f8b84,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730317546502369327,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9w8m8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d285a845-e17e-4b87-837b-167dbd29f090,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6,PodSandboxId:c8cbceb7ff00c3459b78a4b3fcc781e513329f3abc6d66c8c9d95661a8cb0db0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730317538741541291,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tsr5q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60ad5830-1
638-4209-a2b3-ef78b1df3d34,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6,PodSandboxId:9675a30e34cc5092451d45b848b57299c1581c8f6cc31126924ee1a698aeffe4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730317538746409249,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76805df9-1fbf-468d-a909
-3b7c4f889e11,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34,PodSandboxId:3b907e7fb753fd91e80fe5ed64787842fcd779d64a891278222ad62576cb6b9f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730317534856221481,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-768989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: cf44a5e7947a9469929076700ac904d2,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf,PodSandboxId:0c31889e0a4babd9b6af9b4632d89838f8ee40f64ab91be3d2fb32707eb9b28a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730317534870620735,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-768989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 9e9660acdf7d90e392d828e83411e1b5,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5,PodSandboxId:c821469f94d41f3ce2f8baac7d57363e8e0a81d1f827896b1163ddf5fa11fa97,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730317534867443095,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-768989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e81a98548395acf8a88f8e2057e
b223,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f,PodSandboxId:1af13b544ca5a06b1f328cf6f9c19d4314b735fcecfc8cf957007cfff3a5d7d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730317534844606309,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-768989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e12c1bbb22b1fe080d147303dcbea
39,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6793a3a8-be88-4a69-bc8a-aba573d452ca name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	60f936bfa2bb3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       2                   9675a30e34cc5       storage-provisioner
	d9feb95ef951b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   21 minutes ago      Running             busybox                   1                   b819629c91bdf       busybox
	87e42814a8c59       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      21 minutes ago      Running             coredns                   1                   23db14501b34e       coredns-7c65d6cfc9-9w8m8
	8bb328b44b95e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Exited              storage-provisioner       1                   9675a30e34cc5       storage-provisioner
	2ce5d5edb0018       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      21 minutes ago      Running             kube-proxy                1                   c8cbceb7ff00c       kube-proxy-tsr5q
	0b3881e5bd442       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      21 minutes ago      Running             kube-scheduler            1                   0c31889e0a4ba       kube-scheduler-default-k8s-diff-port-768989
	a1c527b45070a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      21 minutes ago      Running             etcd                      1                   c821469f94d41       etcd-default-k8s-diff-port-768989
	ef19f5c9edef4       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      21 minutes ago      Running             kube-controller-manager   1                   3b907e7fb753f       kube-controller-manager-default-k8s-diff-port-768989
	549c7d9c0a8b5       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      21 minutes ago      Running             kube-apiserver            1                   1af13b544ca5a       kube-apiserver-default-k8s-diff-port-768989
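
Editor's note: the table above has the column layout of crictl's container listing. A rough way to regenerate it on the node is sketched below; the profile name and the need for sudo are assumptions based on this report, not commands captured by the test harness.

    minikube -p default-k8s-diff-port-768989 ssh -- sudo crictl ps -a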
	
	
	==> coredns [87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:35848 - 31406 "HINFO IN 707585907035877535.584610179630346385. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.012564224s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-768989
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-768989
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0
	                    minikube.k8s.io/name=default-k8s-diff-port-768989
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_30T19_37_38_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Oct 2024 19:37:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-768989
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Oct 2024 20:06:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Oct 2024 20:06:30 +0000   Wed, 30 Oct 2024 19:37:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Oct 2024 20:06:30 +0000   Wed, 30 Oct 2024 19:37:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Oct 2024 20:06:30 +0000   Wed, 30 Oct 2024 19:37:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Oct 2024 20:06:30 +0000   Wed, 30 Oct 2024 19:45:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.92
	  Hostname:    default-k8s-diff-port-768989
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 59288b73c6724ec2bc5220c45d441063
	  System UUID:                59288b73-c672-4ec2-bc52-20c45d441063
	  Boot ID:                    d059d30a-cab2-4b0e-b3ca-96f6413350b3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 coredns-7c65d6cfc9-9w8m8                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-default-k8s-diff-port-768989                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-768989             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-768989    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-tsr5q                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-default-k8s-diff-port-768989             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-6867b74b74-t85rd                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node default-k8s-diff-port-768989 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-768989 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-768989 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-768989 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-768989 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-768989 status is now: NodeHasSufficientPID
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-768989 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node default-k8s-diff-port-768989 event: Registered Node default-k8s-diff-port-768989 in Controller
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-768989 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-768989 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-768989 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-768989 event: Registered Node default-k8s-diff-port-768989 in Controller
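
Editor's note: this block is ordinary `kubectl describe node` output. Assuming the kubeconfig context is named after the minikube profile (an assumption, not confirmed by the harness), it can be regenerated with:

    kubectl --context default-k8s-diff-port-768989 describe node default-k8s-diff-port-768989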
	
	
	==> dmesg <==
	[Oct30 19:45] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000005] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051060] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040306] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.862411] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.429388] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.472505] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.572335] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.056368] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064755] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.184162] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.124119] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +0.293098] systemd-fstab-generator[706]: Ignoring "noauto" option for root device
	[  +4.221865] systemd-fstab-generator[798]: Ignoring "noauto" option for root device
	[  +1.949113] systemd-fstab-generator[920]: Ignoring "noauto" option for root device
	[  +0.057016] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.512540] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.514554] systemd-fstab-generator[1537]: Ignoring "noauto" option for root device
	[  +3.214803] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.351216] kauditd_printk_skb: 41 callbacks suppressed
	
	
	==> etcd [a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5] <==
	{"level":"info","ts":"2024-10-30T20:05:36.525998Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":663238241,"revision":1347,"compact-revision":1105}
	{"level":"warn","ts":"2024-10-30T20:05:51.605671Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.009095ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-30T20:05:51.605862Z","caller":"traceutil/trace.go:171","msg":"trace[545502787] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1604; }","duration":"166.260632ms","start":"2024-10-30T20:05:51.439573Z","end":"2024-10-30T20:05:51.605834Z","steps":["trace[545502787] 'range keys from in-memory index tree'  (duration: 165.946579ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-30T20:06:46.050401Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":11042143347694641908,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-10-30T20:06:46.507795Z","caller":"traceutil/trace.go:171","msg":"trace[709698222] linearizableReadLoop","detail":"{readStateIndex:1944; appliedIndex:1943; }","duration":"957.946045ms","start":"2024-10-30T20:06:45.549819Z","end":"2024-10-30T20:06:46.507765Z","steps":["trace[709698222] 'read index received'  (duration: 957.708773ms)","trace[709698222] 'applied index is now lower than readState.Index'  (duration: 236.654µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-30T20:06:46.507914Z","caller":"traceutil/trace.go:171","msg":"trace[1912966599] transaction","detail":"{read_only:false; response_revision:1648; number_of_response:1; }","duration":"967.222582ms","start":"2024-10-30T20:06:45.540678Z","end":"2024-10-30T20:06:46.507900Z","steps":["trace[1912966599] 'process raft request'  (duration: 966.918706ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-30T20:06:46.508152Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"898.087648ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-30T20:06:46.508232Z","caller":"traceutil/trace.go:171","msg":"trace[2008426032] range","detail":"{range_begin:/registry/poddisruptionbudgets/; range_end:/registry/poddisruptionbudgets0; response_count:0; response_revision:1648; }","duration":"898.177493ms","start":"2024-10-30T20:06:45.610039Z","end":"2024-10-30T20:06:46.508216Z","steps":["trace[2008426032] 'agreement among raft nodes before linearized reading'  (duration: 898.011873ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-30T20:06:46.508281Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-30T20:06:45.609940Z","time spent":"898.330093ms","remote":"127.0.0.1:51830","response type":"/etcdserverpb.KV/Range","request count":0,"request size":68,"response count":0,"response size":29,"request content":"key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" count_only:true "}
	{"level":"warn","ts":"2024-10-30T20:06:46.508350Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-30T20:06:45.540655Z","time spent":"967.282102ms","remote":"127.0.0.1:51774","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":693,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-raqx5ksy2ibr6ioqi6sw4pbxhy\" mod_revision:1640 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-raqx5ksy2ibr6ioqi6sw4pbxhy\" value_size:620 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-raqx5ksy2ibr6ioqi6sw4pbxhy\" > >"}
	{"level":"warn","ts":"2024-10-30T20:06:46.508524Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"958.702115ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1129"}
	{"level":"info","ts":"2024-10-30T20:06:46.508578Z","caller":"traceutil/trace.go:171","msg":"trace[475290024] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1648; }","duration":"958.75595ms","start":"2024-10-30T20:06:45.549815Z","end":"2024-10-30T20:06:46.508571Z","steps":["trace[475290024] 'agreement among raft nodes before linearized reading'  (duration: 958.645333ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-30T20:06:46.508620Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-30T20:06:45.549785Z","time spent":"958.829538ms","remote":"127.0.0.1:51678","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1153,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2024-10-30T20:06:46.508769Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"802.163397ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-30T20:06:46.508811Z","caller":"traceutil/trace.go:171","msg":"trace[512884962] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1648; }","duration":"802.20493ms","start":"2024-10-30T20:06:45.706600Z","end":"2024-10-30T20:06:46.508804Z","steps":["trace[512884962] 'agreement among raft nodes before linearized reading'  (duration: 802.124947ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-30T20:06:46.508586Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"300.261726ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-30T20:06:46.509553Z","caller":"traceutil/trace.go:171","msg":"trace[797952407] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1648; }","duration":"301.234117ms","start":"2024-10-30T20:06:46.208310Z","end":"2024-10-30T20:06:46.509544Z","steps":["trace[797952407] 'agreement among raft nodes before linearized reading'  (duration: 300.242218ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-30T20:06:46.509767Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-30T20:06:46.208270Z","time spent":"301.424096ms","remote":"127.0.0.1:51488","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-10-30T20:06:46.751415Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"240.1999ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-30T20:06:46.751500Z","caller":"traceutil/trace.go:171","msg":"trace[467496112] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1648; }","duration":"240.286094ms","start":"2024-10-30T20:06:46.511193Z","end":"2024-10-30T20:06:46.751479Z","steps":["trace[467496112] 'range keys from in-memory index tree'  (duration: 240.16536ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-30T20:06:46.751854Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.077986ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11042143347694641913 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1646 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-10-30T20:06:46.751935Z","caller":"traceutil/trace.go:171","msg":"trace[1736719131] linearizableReadLoop","detail":"{readStateIndex:1945; appliedIndex:1944; }","duration":"206.128692ms","start":"2024-10-30T20:06:46.545793Z","end":"2024-10-30T20:06:46.751922Z","steps":["trace[1736719131] 'read index received'  (duration: 89.811238ms)","trace[1736719131] 'applied index is now lower than readState.Index'  (duration: 116.316203ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-30T20:06:46.752241Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"206.426248ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/\" range_end:\"/registry/configmaps0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-10-30T20:06:46.753167Z","caller":"traceutil/trace.go:171","msg":"trace[1517261508] range","detail":"{range_begin:/registry/configmaps/; range_end:/registry/configmaps0; response_count:0; response_revision:1649; }","duration":"207.368236ms","start":"2024-10-30T20:06:46.545786Z","end":"2024-10-30T20:06:46.753155Z","steps":["trace[1517261508] 'agreement among raft nodes before linearized reading'  (duration: 206.34945ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-30T20:06:46.753005Z","caller":"traceutil/trace.go:171","msg":"trace[132885698] transaction","detail":"{read_only:false; response_revision:1649; number_of_response:1; }","duration":"239.48514ms","start":"2024-10-30T20:06:46.513507Z","end":"2024-10-30T20:06:46.752992Z","steps":["trace[132885698] 'process raft request'  (duration: 122.135211ms)","trace[132885698] 'compare'  (duration: 115.602592ms)"],"step_count":2}
	
	
	==> kernel <==
	 20:06:56 up 21 min,  0 users,  load average: 0.23, 0.14, 0.10
	Linux default-k8s-diff-port-768989 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f] <==
	I1030 20:03:38.895781       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1030 20:03:38.895860       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1030 20:05:37.895014       1 handler_proxy.go:99] no RequestInfo found in the context
	E1030 20:05:37.895398       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1030 20:05:38.897433       1 handler_proxy.go:99] no RequestInfo found in the context
	E1030 20:05:38.897561       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1030 20:05:38.897635       1 handler_proxy.go:99] no RequestInfo found in the context
	E1030 20:05:38.897748       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1030 20:05:38.898929       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1030 20:05:38.898997       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1030 20:06:38.899558       1 handler_proxy.go:99] no RequestInfo found in the context
	E1030 20:06:38.899776       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1030 20:06:38.899880       1 handler_proxy.go:99] no RequestInfo found in the context
	E1030 20:06:38.899939       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1030 20:06:38.900960       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1030 20:06:38.901016       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34] <==
	E1030 20:01:41.490470       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 20:01:42.076878       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1030 20:02:02.386383       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="228.376µs"
	E1030 20:02:11.496490       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 20:02:12.084131       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1030 20:02:17.382759       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="92.184µs"
	E1030 20:02:41.503697       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 20:02:42.091917       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 20:03:11.510651       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 20:03:12.099237       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 20:03:41.523040       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 20:03:42.108959       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 20:04:11.530071       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 20:04:12.117291       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 20:04:41.536401       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 20:04:42.125163       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 20:05:11.543583       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 20:05:12.134593       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 20:05:41.553295       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 20:05:42.142366       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 20:06:11.559169       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 20:06:12.150017       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1030 20:06:30.692708       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-768989"
	E1030 20:06:41.570706       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 20:06:42.157698       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1030 19:45:39.071059       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1030 19:45:39.079402       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.92"]
	E1030 19:45:39.079478       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1030 19:45:39.114435       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1030 19:45:39.114476       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1030 19:45:39.114504       1 server_linux.go:169] "Using iptables Proxier"
	I1030 19:45:39.116747       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1030 19:45:39.116953       1 server.go:483] "Version info" version="v1.31.2"
	I1030 19:45:39.116978       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1030 19:45:39.118385       1 config.go:199] "Starting service config controller"
	I1030 19:45:39.118567       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1030 19:45:39.118643       1 config.go:105] "Starting endpoint slice config controller"
	I1030 19:45:39.118665       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1030 19:45:39.119250       1 config.go:328] "Starting node config controller"
	I1030 19:45:39.119279       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1030 19:45:39.218800       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1030 19:45:39.218862       1 shared_informer.go:320] Caches are synced for service config
	I1030 19:45:39.219401       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf] <==
	I1030 19:45:35.719446       1 serving.go:386] Generated self-signed cert in-memory
	W1030 19:45:37.850131       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1030 19:45:37.850421       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1030 19:45:37.850508       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1030 19:45:37.850533       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1030 19:45:37.880620       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1030 19:45:37.880795       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1030 19:45:37.883348       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1030 19:45:37.883401       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1030 19:45:37.883938       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1030 19:45:37.883966       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1030 19:45:37.984209       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 30 20:06:03 default-k8s-diff-port-768989 kubelet[927]: E1030 20:06:03.662691     927 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318763662204773,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 20:06:03 default-k8s-diff-port-768989 kubelet[927]: E1030 20:06:03.663151     927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318763662204773,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 20:06:04 default-k8s-diff-port-768989 kubelet[927]: E1030 20:06:04.365559     927 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-t85rd" podUID="8e162c99-2a94-4340-abe9-f1b312980444"
	Oct 30 20:06:13 default-k8s-diff-port-768989 kubelet[927]: E1030 20:06:13.664995     927 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318773664650886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 20:06:13 default-k8s-diff-port-768989 kubelet[927]: E1030 20:06:13.665530     927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318773664650886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 20:06:15 default-k8s-diff-port-768989 kubelet[927]: E1030 20:06:15.366130     927 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-t85rd" podUID="8e162c99-2a94-4340-abe9-f1b312980444"
	Oct 30 20:06:23 default-k8s-diff-port-768989 kubelet[927]: E1030 20:06:23.667827     927 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318783667474925,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 20:06:23 default-k8s-diff-port-768989 kubelet[927]: E1030 20:06:23.668180     927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318783667474925,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 20:06:26 default-k8s-diff-port-768989 kubelet[927]: E1030 20:06:26.366444     927 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-t85rd" podUID="8e162c99-2a94-4340-abe9-f1b312980444"
	Oct 30 20:06:33 default-k8s-diff-port-768989 kubelet[927]: E1030 20:06:33.384444     927 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 30 20:06:33 default-k8s-diff-port-768989 kubelet[927]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 30 20:06:33 default-k8s-diff-port-768989 kubelet[927]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 30 20:06:33 default-k8s-diff-port-768989 kubelet[927]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 30 20:06:33 default-k8s-diff-port-768989 kubelet[927]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 30 20:06:33 default-k8s-diff-port-768989 kubelet[927]: E1030 20:06:33.670413     927 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318793669929336,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 20:06:33 default-k8s-diff-port-768989 kubelet[927]: E1030 20:06:33.670445     927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318793669929336,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 20:06:39 default-k8s-diff-port-768989 kubelet[927]: E1030 20:06:39.367930     927 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-t85rd" podUID="8e162c99-2a94-4340-abe9-f1b312980444"
	Oct 30 20:06:43 default-k8s-diff-port-768989 kubelet[927]: E1030 20:06:43.673044     927 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318803672342623,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 20:06:43 default-k8s-diff-port-768989 kubelet[927]: E1030 20:06:43.673742     927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318803672342623,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 20:06:52 default-k8s-diff-port-768989 kubelet[927]: E1030 20:06:52.382973     927 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 30 20:06:52 default-k8s-diff-port-768989 kubelet[927]: E1030 20:06:52.383435     927 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 30 20:06:52 default-k8s-diff-port-768989 kubelet[927]: E1030 20:06:52.383712     927 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r2gbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPr
opagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:
nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-t85rd_kube-system(8e162c99-2a94-4340-abe9-f1b312980444): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Oct 30 20:06:52 default-k8s-diff-port-768989 kubelet[927]: E1030 20:06:52.385265     927 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-t85rd" podUID="8e162c99-2a94-4340-abe9-f1b312980444"
	Oct 30 20:06:53 default-k8s-diff-port-768989 kubelet[927]: E1030 20:06:53.675976     927 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318813675567737,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 20:06:53 default-k8s-diff-port-768989 kubelet[927]: E1030 20:06:53.676011     927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318813675567737,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034] <==
	I1030 19:46:09.700784       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1030 19:46:09.716404       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1030 19:46:09.716515       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1030 19:46:27.121344       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1030 19:46:27.121626       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-768989_d5517a10-acd3-49e0-9347-34a26e082a72!
	I1030 19:46:27.124572       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f3841af7-4910-4982-8166-6a6276fded3a", APIVersion:"v1", ResourceVersion:"646", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-768989_d5517a10-acd3-49e0-9347-34a26e082a72 became leader
	I1030 19:46:27.222974       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-768989_d5517a10-acd3-49e0-9347-34a26e082a72!
	
	
	==> storage-provisioner [8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6] <==
	I1030 19:45:38.938047       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1030 19:46:08.944041       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-768989 -n default-k8s-diff-port-768989
E1030 20:06:57.604126  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/auto-534248/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-768989 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-t85rd
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-768989 describe pod metrics-server-6867b74b74-t85rd
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-768989 describe pod metrics-server-6867b74b74-t85rd: exit status 1 (70.032339ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-t85rd" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-768989 describe pod metrics-server-6867b74b74-t85rd: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (466.29s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (373.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-042402 -n embed-certs-042402
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-10-30 20:06:19.692669516 +0000 UTC m=+6336.439853057
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-042402 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-042402 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (3.429µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-042402 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-042402 -n embed-certs-042402
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-042402 logs -n 25
E1030 20:06:20.315394  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/functional-683899/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-042402 logs -n 25: (1.2898334s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                     | disable-driver-mounts-113740 | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | disable-driver-mounts-113740                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-768989 | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:37 UTC |
	|         | default-k8s-diff-port-768989                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-960512             | no-preload-960512            | jenkins | v1.34.0 | 30 Oct 24 19:37 UTC | 30 Oct 24 19:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-960512                                   | no-preload-960512            | jenkins | v1.34.0 | 30 Oct 24 19:37 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-768989  | default-k8s-diff-port-768989 | jenkins | v1.34.0 | 30 Oct 24 19:38 UTC | 30 Oct 24 19:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-768989 | jenkins | v1.34.0 | 30 Oct 24 19:38 UTC |                     |
	|         | default-k8s-diff-port-768989                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-042402            | embed-certs-042402           | jenkins | v1.34.0 | 30 Oct 24 19:38 UTC | 30 Oct 24 19:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-042402                                  | embed-certs-042402           | jenkins | v1.34.0 | 30 Oct 24 19:38 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-516975        | old-k8s-version-516975       | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-960512                  | no-preload-960512            | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-768989       | default-k8s-diff-port-768989 | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-960512                                   | no-preload-960512            | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC | 30 Oct 24 19:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-042402                 | embed-certs-042402           | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-768989 | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC | 30 Oct 24 19:50 UTC |
	|         | default-k8s-diff-port-768989                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-042402                                  | embed-certs-042402           | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC | 30 Oct 24 19:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-516975                              | old-k8s-version-516975       | jenkins | v1.34.0 | 30 Oct 24 19:42 UTC | 30 Oct 24 19:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-516975             | old-k8s-version-516975       | jenkins | v1.34.0 | 30 Oct 24 19:42 UTC | 30 Oct 24 19:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-516975                              | old-k8s-version-516975       | jenkins | v1.34.0 | 30 Oct 24 19:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-516975                              | old-k8s-version-516975       | jenkins | v1.34.0 | 30 Oct 24 20:05 UTC | 30 Oct 24 20:05 UTC |
	| start   | -p newest-cni-467894 --memory=2200 --alsologtostderr   | newest-cni-467894            | jenkins | v1.34.0 | 30 Oct 24 20:05 UTC | 30 Oct 24 20:06 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-960512                                   | no-preload-960512            | jenkins | v1.34.0 | 30 Oct 24 20:05 UTC | 30 Oct 24 20:05 UTC |
	| addons  | enable metrics-server -p newest-cni-467894             | newest-cni-467894            | jenkins | v1.34.0 | 30 Oct 24 20:06 UTC | 30 Oct 24 20:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-467894                                   | newest-cni-467894            | jenkins | v1.34.0 | 30 Oct 24 20:06 UTC | 30 Oct 24 20:06 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-467894                  | newest-cni-467894            | jenkins | v1.34.0 | 30 Oct 24 20:06 UTC | 30 Oct 24 20:06 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-467894 --memory=2200 --alsologtostderr   | newest-cni-467894            | jenkins | v1.34.0 | 30 Oct 24 20:06 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/30 20:06:18
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1030 20:06:18.837023  454274 out.go:345] Setting OutFile to fd 1 ...
	I1030 20:06:18.837135  454274 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 20:06:18.837144  454274 out.go:358] Setting ErrFile to fd 2...
	I1030 20:06:18.837148  454274 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 20:06:18.837345  454274 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19883-381834/.minikube/bin
	I1030 20:06:18.837870  454274 out.go:352] Setting JSON to false
	I1030 20:06:18.838854  454274 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":13722,"bootTime":1730305057,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1030 20:06:18.838962  454274 start.go:139] virtualization: kvm guest
	I1030 20:06:18.841269  454274 out.go:177] * [newest-cni-467894] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1030 20:06:18.842845  454274 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 20:06:18.842888  454274 notify.go:220] Checking for updates...
	I1030 20:06:18.845550  454274 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 20:06:18.846978  454274 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 20:06:18.848146  454274 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 20:06:18.849324  454274 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1030 20:06:18.850653  454274 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 20:06:18.852300  454274 config.go:182] Loaded profile config "newest-cni-467894": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 20:06:18.852701  454274 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 20:06:18.852799  454274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 20:06:18.869916  454274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37601
	I1030 20:06:18.870346  454274 main.go:141] libmachine: () Calling .GetVersion
	I1030 20:06:18.870936  454274 main.go:141] libmachine: Using API Version  1
	I1030 20:06:18.870955  454274 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 20:06:18.871320  454274 main.go:141] libmachine: () Calling .GetMachineName
	I1030 20:06:18.871558  454274 main.go:141] libmachine: (newest-cni-467894) Calling .DriverName
	I1030 20:06:18.871803  454274 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 20:06:18.872186  454274 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 20:06:18.872224  454274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 20:06:18.888414  454274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37655
	I1030 20:06:18.888867  454274 main.go:141] libmachine: () Calling .GetVersion
	I1030 20:06:18.889379  454274 main.go:141] libmachine: Using API Version  1
	I1030 20:06:18.889409  454274 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 20:06:18.889745  454274 main.go:141] libmachine: () Calling .GetMachineName
	I1030 20:06:18.889985  454274 main.go:141] libmachine: (newest-cni-467894) Calling .DriverName
	I1030 20:06:18.925257  454274 out.go:177] * Using the kvm2 driver based on existing profile
	I1030 20:06:18.926711  454274 start.go:297] selected driver: kvm2
	I1030 20:06:18.926730  454274 start.go:901] validating driver "kvm2" against &{Name:newest-cni-467894 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:newest-cni-467894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.214 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] St
artHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 20:06:18.926848  454274 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 20:06:18.927551  454274 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 20:06:18.927621  454274 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19883-381834/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1030 20:06:18.942533  454274 install.go:137] /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1030 20:06:18.942930  454274 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1030 20:06:18.942965  454274 cni.go:84] Creating CNI manager for ""
	I1030 20:06:18.943014  454274 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 20:06:18.943050  454274 start.go:340] cluster config:
	{Name:newest-cni-467894 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-467894 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.214 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network
: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 20:06:18.943150  454274 iso.go:125] acquiring lock: {Name:mk4a5115b605a7a5e9069193daa664d721189792 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 20:06:18.945006  454274 out.go:177] * Starting "newest-cni-467894" primary control-plane node in "newest-cni-467894" cluster
	I1030 20:06:18.946411  454274 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 20:06:18.946461  454274 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1030 20:06:18.946468  454274 cache.go:56] Caching tarball of preloaded images
	I1030 20:06:18.946588  454274 preload.go:172] Found /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1030 20:06:18.946603  454274 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1030 20:06:18.946702  454274 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/newest-cni-467894/config.json ...
	I1030 20:06:18.946886  454274 start.go:360] acquireMachinesLock for newest-cni-467894: {Name:mk35e25f53fa8cfadb39ca0ecdccfc2b3fbe845b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 20:06:18.946935  454274 start.go:364] duration metric: took 28.797µs to acquireMachinesLock for "newest-cni-467894"
	I1030 20:06:18.946955  454274 start.go:96] Skipping create...Using existing machine configuration
	I1030 20:06:18.946964  454274 fix.go:54] fixHost starting: 
	I1030 20:06:18.947245  454274 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 20:06:18.947284  454274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 20:06:18.962033  454274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40853
	I1030 20:06:18.962499  454274 main.go:141] libmachine: () Calling .GetVersion
	I1030 20:06:18.962985  454274 main.go:141] libmachine: Using API Version  1
	I1030 20:06:18.963006  454274 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 20:06:18.963335  454274 main.go:141] libmachine: () Calling .GetMachineName
	I1030 20:06:18.963517  454274 main.go:141] libmachine: (newest-cni-467894) Calling .DriverName
	I1030 20:06:18.963658  454274 main.go:141] libmachine: (newest-cni-467894) Calling .GetState
	I1030 20:06:18.965172  454274 fix.go:112] recreateIfNeeded on newest-cni-467894: state=Stopped err=<nil>
	I1030 20:06:18.965193  454274 main.go:141] libmachine: (newest-cni-467894) Calling .DriverName
	W1030 20:06:18.965380  454274 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 20:06:18.968240  454274 out.go:177] * Restarting existing kvm2 VM for "newest-cni-467894" ...
	
	
	==> CRI-O <==
	Oct 30 20:06:20 embed-certs-042402 crio[720]: time="2024-10-30 20:06:20.327281835Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318780327223535,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1e050566-f1b7-4657-9c77-fb69ce781736 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:06:20 embed-certs-042402 crio[720]: time="2024-10-30 20:06:20.327887570Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5edeefd7-dc2b-4810-995a-05bfc3340dfa name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:06:20 embed-certs-042402 crio[720]: time="2024-10-30 20:06:20.327960229Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5edeefd7-dc2b-4810-995a-05bfc3340dfa name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:06:20 embed-certs-042402 crio[720]: time="2024-10-30 20:06:20.328231925Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0e6cc7d4df0e8a817cf435a8e959a096091802af481b013e11aafaf9d1b46af8,PodSandboxId:37da61aca6f683480fb99bf53a21816a25a25564141953ce658162067fb347d7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730317855012939746,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 729733b2-e703-4e9b-9d05-a2f0fb632149,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5f74c108f82b2ed307aa632f5ac93b38ee44e820f09e472c783b88fc3e85dea,PodSandboxId:851a935cd845bfabffc15bc713fbc07cae49edb049ced591ba603e1bc0794a9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730317854081451711,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pzbpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f486ff4-c665-42ec-9b98-15ea94e0ded8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eace9317a4bc772273e495ce19c996a3e05eacd40bef6339365ea262ac446bd4,PodSandboxId:a56f02ca0c0cf146284c1236448311d49a36bab9b9711462cff3fc58a189a67d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730317853859330640,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hvg4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
9e7e143-3e12-4c1a-9fb0-6f58a37f8a55,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09f26f80fafe43090b0174ce499bbe5a1e15cd0da11f6685eaf6a6faf4063eab,PodSandboxId:a6d8ba30e9d3d30abde4f2366d7a9ef2ecc453287e38a5f70f584d1d67175abd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1730317853251578737,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m9zwz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7b6fb8b-2287-47c0-b9c8-a3b1c3020894,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d23071dddfccc3765d928cfe43b21550ee595642dfeba5139c84ff98b9cdd93d,PodSandboxId:ea0cbb84555bcdf5619bd4d47fd11922df9e05cd9969a2a4a145c54008b14d30,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730317842355050944
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-042402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf8c324f2a6f1da2c876b3a18a138983,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f4743cfe95c89eb417fe14d8ed4ea606b29e6e483908af0d689d1481f067287,PodSandboxId:0b07dad9e29e8faf367cd15ed328a0ae52dcf0dd9cbce15d670c32b4ea06303e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730317842358
288211,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-042402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 183c2b3d20119c6a2c9de773d190a17b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d09f07a6c8f7bfb428c44c79f6ad7dbcc4cf844fbb81a263172664f89475996,PodSandboxId:d680408625033a4b973f11e339e07db29c11c92f637cb169129fa5e3bcf5210a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730317842332395999,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-042402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a65eb1b79c9738a51bbee9c5c6a205a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b6cf7bbc2230a6b582b198b93c163531a59f107f74a5acdde62e4e4a633bcc0,PodSandboxId:ab0957dbbda0bc3b50a55f95c577df64a7f200f938858cf81ae28f7d9732a03d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730317842266315839,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-042402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ae65a89bb298ef7ccfc20cd482e7cf4,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dfb8854a7f882cfd95b81fea92e7c1c4fdc0deeb1fdb0fc5a2c137e9c7bfe79,PodSandboxId:f96d997ce5136a695a2a6c9717f8fb68371c0d5a0473d71e22de5f155be8bfa5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730317554573429547,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-042402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ae65a89bb298ef7ccfc20cd482e7cf4,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5edeefd7-dc2b-4810-995a-05bfc3340dfa name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:06:20 embed-certs-042402 crio[720]: time="2024-10-30 20:06:20.368771241Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bfaf7b13-5167-4c4b-8161-505eab6d3b64 name=/runtime.v1.RuntimeService/Version
	Oct 30 20:06:20 embed-certs-042402 crio[720]: time="2024-10-30 20:06:20.368851077Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bfaf7b13-5167-4c4b-8161-505eab6d3b64 name=/runtime.v1.RuntimeService/Version
	Oct 30 20:06:20 embed-certs-042402 crio[720]: time="2024-10-30 20:06:20.370330094Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c17d5e54-32f0-44e1-985e-2a29179e4339 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:06:20 embed-certs-042402 crio[720]: time="2024-10-30 20:06:20.370718639Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318780370695407,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c17d5e54-32f0-44e1-985e-2a29179e4339 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:06:20 embed-certs-042402 crio[720]: time="2024-10-30 20:06:20.371392633Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0ddc0272-6cc7-472c-bc94-2f155ee643fa name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:06:20 embed-certs-042402 crio[720]: time="2024-10-30 20:06:20.371495037Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0ddc0272-6cc7-472c-bc94-2f155ee643fa name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:06:20 embed-certs-042402 crio[720]: time="2024-10-30 20:06:20.371699285Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0e6cc7d4df0e8a817cf435a8e959a096091802af481b013e11aafaf9d1b46af8,PodSandboxId:37da61aca6f683480fb99bf53a21816a25a25564141953ce658162067fb347d7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730317855012939746,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 729733b2-e703-4e9b-9d05-a2f0fb632149,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5f74c108f82b2ed307aa632f5ac93b38ee44e820f09e472c783b88fc3e85dea,PodSandboxId:851a935cd845bfabffc15bc713fbc07cae49edb049ced591ba603e1bc0794a9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730317854081451711,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pzbpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f486ff4-c665-42ec-9b98-15ea94e0ded8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eace9317a4bc772273e495ce19c996a3e05eacd40bef6339365ea262ac446bd4,PodSandboxId:a56f02ca0c0cf146284c1236448311d49a36bab9b9711462cff3fc58a189a67d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730317853859330640,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hvg4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
9e7e143-3e12-4c1a-9fb0-6f58a37f8a55,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09f26f80fafe43090b0174ce499bbe5a1e15cd0da11f6685eaf6a6faf4063eab,PodSandboxId:a6d8ba30e9d3d30abde4f2366d7a9ef2ecc453287e38a5f70f584d1d67175abd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1730317853251578737,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m9zwz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7b6fb8b-2287-47c0-b9c8-a3b1c3020894,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d23071dddfccc3765d928cfe43b21550ee595642dfeba5139c84ff98b9cdd93d,PodSandboxId:ea0cbb84555bcdf5619bd4d47fd11922df9e05cd9969a2a4a145c54008b14d30,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730317842355050944
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-042402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf8c324f2a6f1da2c876b3a18a138983,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f4743cfe95c89eb417fe14d8ed4ea606b29e6e483908af0d689d1481f067287,PodSandboxId:0b07dad9e29e8faf367cd15ed328a0ae52dcf0dd9cbce15d670c32b4ea06303e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730317842358
288211,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-042402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 183c2b3d20119c6a2c9de773d190a17b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d09f07a6c8f7bfb428c44c79f6ad7dbcc4cf844fbb81a263172664f89475996,PodSandboxId:d680408625033a4b973f11e339e07db29c11c92f637cb169129fa5e3bcf5210a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730317842332395999,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-042402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a65eb1b79c9738a51bbee9c5c6a205a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b6cf7bbc2230a6b582b198b93c163531a59f107f74a5acdde62e4e4a633bcc0,PodSandboxId:ab0957dbbda0bc3b50a55f95c577df64a7f200f938858cf81ae28f7d9732a03d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730317842266315839,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-042402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ae65a89bb298ef7ccfc20cd482e7cf4,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dfb8854a7f882cfd95b81fea92e7c1c4fdc0deeb1fdb0fc5a2c137e9c7bfe79,PodSandboxId:f96d997ce5136a695a2a6c9717f8fb68371c0d5a0473d71e22de5f155be8bfa5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730317554573429547,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-042402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ae65a89bb298ef7ccfc20cd482e7cf4,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0ddc0272-6cc7-472c-bc94-2f155ee643fa name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:06:20 embed-certs-042402 crio[720]: time="2024-10-30 20:06:20.417276598Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2911416c-147f-45cb-856a-a52428970f6b name=/runtime.v1.RuntimeService/Version
	Oct 30 20:06:20 embed-certs-042402 crio[720]: time="2024-10-30 20:06:20.417398723Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2911416c-147f-45cb-856a-a52428970f6b name=/runtime.v1.RuntimeService/Version
	Oct 30 20:06:20 embed-certs-042402 crio[720]: time="2024-10-30 20:06:20.418681921Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8529025a-86cd-4303-b6f7-2fb05f7e85a7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:06:20 embed-certs-042402 crio[720]: time="2024-10-30 20:06:20.419409964Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318780419384275,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8529025a-86cd-4303-b6f7-2fb05f7e85a7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:06:20 embed-certs-042402 crio[720]: time="2024-10-30 20:06:20.420204926Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=40503f7e-614e-4223-851f-2b6def510322 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:06:20 embed-certs-042402 crio[720]: time="2024-10-30 20:06:20.420276672Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=40503f7e-614e-4223-851f-2b6def510322 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:06:20 embed-certs-042402 crio[720]: time="2024-10-30 20:06:20.420462680Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0e6cc7d4df0e8a817cf435a8e959a096091802af481b013e11aafaf9d1b46af8,PodSandboxId:37da61aca6f683480fb99bf53a21816a25a25564141953ce658162067fb347d7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730317855012939746,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 729733b2-e703-4e9b-9d05-a2f0fb632149,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5f74c108f82b2ed307aa632f5ac93b38ee44e820f09e472c783b88fc3e85dea,PodSandboxId:851a935cd845bfabffc15bc713fbc07cae49edb049ced591ba603e1bc0794a9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730317854081451711,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pzbpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f486ff4-c665-42ec-9b98-15ea94e0ded8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eace9317a4bc772273e495ce19c996a3e05eacd40bef6339365ea262ac446bd4,PodSandboxId:a56f02ca0c0cf146284c1236448311d49a36bab9b9711462cff3fc58a189a67d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730317853859330640,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hvg4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
9e7e143-3e12-4c1a-9fb0-6f58a37f8a55,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09f26f80fafe43090b0174ce499bbe5a1e15cd0da11f6685eaf6a6faf4063eab,PodSandboxId:a6d8ba30e9d3d30abde4f2366d7a9ef2ecc453287e38a5f70f584d1d67175abd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1730317853251578737,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m9zwz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7b6fb8b-2287-47c0-b9c8-a3b1c3020894,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d23071dddfccc3765d928cfe43b21550ee595642dfeba5139c84ff98b9cdd93d,PodSandboxId:ea0cbb84555bcdf5619bd4d47fd11922df9e05cd9969a2a4a145c54008b14d30,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730317842355050944
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-042402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf8c324f2a6f1da2c876b3a18a138983,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f4743cfe95c89eb417fe14d8ed4ea606b29e6e483908af0d689d1481f067287,PodSandboxId:0b07dad9e29e8faf367cd15ed328a0ae52dcf0dd9cbce15d670c32b4ea06303e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730317842358
288211,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-042402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 183c2b3d20119c6a2c9de773d190a17b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d09f07a6c8f7bfb428c44c79f6ad7dbcc4cf844fbb81a263172664f89475996,PodSandboxId:d680408625033a4b973f11e339e07db29c11c92f637cb169129fa5e3bcf5210a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730317842332395999,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-042402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a65eb1b79c9738a51bbee9c5c6a205a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b6cf7bbc2230a6b582b198b93c163531a59f107f74a5acdde62e4e4a633bcc0,PodSandboxId:ab0957dbbda0bc3b50a55f95c577df64a7f200f938858cf81ae28f7d9732a03d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730317842266315839,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-042402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ae65a89bb298ef7ccfc20cd482e7cf4,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dfb8854a7f882cfd95b81fea92e7c1c4fdc0deeb1fdb0fc5a2c137e9c7bfe79,PodSandboxId:f96d997ce5136a695a2a6c9717f8fb68371c0d5a0473d71e22de5f155be8bfa5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730317554573429547,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-042402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ae65a89bb298ef7ccfc20cd482e7cf4,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=40503f7e-614e-4223-851f-2b6def510322 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:06:20 embed-certs-042402 crio[720]: time="2024-10-30 20:06:20.455372131Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cfe4ca30-90cc-41f0-a6b4-1ef31b252980 name=/runtime.v1.RuntimeService/Version
	Oct 30 20:06:20 embed-certs-042402 crio[720]: time="2024-10-30 20:06:20.455442806Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cfe4ca30-90cc-41f0-a6b4-1ef31b252980 name=/runtime.v1.RuntimeService/Version
	Oct 30 20:06:20 embed-certs-042402 crio[720]: time="2024-10-30 20:06:20.457033458Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e6f9339e-7451-4a32-a153-300e9505eab4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:06:20 embed-certs-042402 crio[720]: time="2024-10-30 20:06:20.458025603Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318780458001003,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e6f9339e-7451-4a32-a153-300e9505eab4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:06:20 embed-certs-042402 crio[720]: time="2024-10-30 20:06:20.458601377Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=61d5a8ae-8e73-4d2a-a396-d97469f2bf8e name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:06:20 embed-certs-042402 crio[720]: time="2024-10-30 20:06:20.458654843Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=61d5a8ae-8e73-4d2a-a396-d97469f2bf8e name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:06:20 embed-certs-042402 crio[720]: time="2024-10-30 20:06:20.458851437Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0e6cc7d4df0e8a817cf435a8e959a096091802af481b013e11aafaf9d1b46af8,PodSandboxId:37da61aca6f683480fb99bf53a21816a25a25564141953ce658162067fb347d7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730317855012939746,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 729733b2-e703-4e9b-9d05-a2f0fb632149,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5f74c108f82b2ed307aa632f5ac93b38ee44e820f09e472c783b88fc3e85dea,PodSandboxId:851a935cd845bfabffc15bc713fbc07cae49edb049ced591ba603e1bc0794a9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730317854081451711,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pzbpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f486ff4-c665-42ec-9b98-15ea94e0ded8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eace9317a4bc772273e495ce19c996a3e05eacd40bef6339365ea262ac446bd4,PodSandboxId:a56f02ca0c0cf146284c1236448311d49a36bab9b9711462cff3fc58a189a67d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730317853859330640,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hvg4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
9e7e143-3e12-4c1a-9fb0-6f58a37f8a55,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09f26f80fafe43090b0174ce499bbe5a1e15cd0da11f6685eaf6a6faf4063eab,PodSandboxId:a6d8ba30e9d3d30abde4f2366d7a9ef2ecc453287e38a5f70f584d1d67175abd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1730317853251578737,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m9zwz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7b6fb8b-2287-47c0-b9c8-a3b1c3020894,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d23071dddfccc3765d928cfe43b21550ee595642dfeba5139c84ff98b9cdd93d,PodSandboxId:ea0cbb84555bcdf5619bd4d47fd11922df9e05cd9969a2a4a145c54008b14d30,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730317842355050944
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-042402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf8c324f2a6f1da2c876b3a18a138983,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f4743cfe95c89eb417fe14d8ed4ea606b29e6e483908af0d689d1481f067287,PodSandboxId:0b07dad9e29e8faf367cd15ed328a0ae52dcf0dd9cbce15d670c32b4ea06303e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730317842358
288211,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-042402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 183c2b3d20119c6a2c9de773d190a17b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d09f07a6c8f7bfb428c44c79f6ad7dbcc4cf844fbb81a263172664f89475996,PodSandboxId:d680408625033a4b973f11e339e07db29c11c92f637cb169129fa5e3bcf5210a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730317842332395999,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-042402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a65eb1b79c9738a51bbee9c5c6a205a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b6cf7bbc2230a6b582b198b93c163531a59f107f74a5acdde62e4e4a633bcc0,PodSandboxId:ab0957dbbda0bc3b50a55f95c577df64a7f200f938858cf81ae28f7d9732a03d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730317842266315839,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-042402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ae65a89bb298ef7ccfc20cd482e7cf4,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dfb8854a7f882cfd95b81fea92e7c1c4fdc0deeb1fdb0fc5a2c137e9c7bfe79,PodSandboxId:f96d997ce5136a695a2a6c9717f8fb68371c0d5a0473d71e22de5f155be8bfa5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730317554573429547,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-042402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ae65a89bb298ef7ccfc20cd482e7cf4,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=61d5a8ae-8e73-4d2a-a396-d97469f2bf8e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0e6cc7d4df0e8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   37da61aca6f68       storage-provisioner
	c5f74c108f82b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   15 minutes ago      Running             coredns                   0                   851a935cd845b       coredns-7c65d6cfc9-pzbpd
	eace9317a4bc7       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   15 minutes ago      Running             coredns                   0                   a56f02ca0c0cf       coredns-7c65d6cfc9-hvg4g
	09f26f80fafe4       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   15 minutes ago      Running             kube-proxy                0                   a6d8ba30e9d3d       kube-proxy-m9zwz
	1f4743cfe95c8       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   15 minutes ago      Running             kube-scheduler            2                   0b07dad9e29e8       kube-scheduler-embed-certs-042402
	d23071dddfccc       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   15 minutes ago      Running             kube-controller-manager   2                   ea0cbb84555bc       kube-controller-manager-embed-certs-042402
	9d09f07a6c8f7       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   15 minutes ago      Running             etcd                      2                   d680408625033       etcd-embed-certs-042402
	5b6cf7bbc2230       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   15 minutes ago      Running             kube-apiserver            2                   ab0957dbbda0b       kube-apiserver-embed-certs-042402
	1dfb8854a7f88       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   20 minutes ago      Exited              kube-apiserver            1                   f96d997ce5136       kube-apiserver-embed-certs-042402
	
	
	==> coredns [c5f74c108f82b2ed307aa632f5ac93b38ee44e820f09e472c783b88fc3e85dea] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [eace9317a4bc772273e495ce19c996a3e05eacd40bef6339365ea262ac446bd4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-042402
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-042402
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0
	                    minikube.k8s.io/name=embed-certs-042402
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_30T19_50_48_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Oct 2024 19:50:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-042402
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Oct 2024 20:06:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Oct 2024 20:06:15 +0000   Wed, 30 Oct 2024 19:50:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Oct 2024 20:06:15 +0000   Wed, 30 Oct 2024 19:50:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Oct 2024 20:06:15 +0000   Wed, 30 Oct 2024 19:50:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Oct 2024 20:06:15 +0000   Wed, 30 Oct 2024 19:50:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.235
	  Hostname:    embed-certs-042402
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6b38f1898611467081180a343ba5f2f3
	  System UUID:                6b38f189-8611-4670-8118-0a343ba5f2f3
	  Boot ID:                    cb97e997-3bf1-43f8-aad2-b3cee029cc5d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-hvg4g                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7c65d6cfc9-pzbpd                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-embed-certs-042402                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-embed-certs-042402             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-embed-certs-042402    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-m9zwz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-embed-certs-042402             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-6867b74b74-6hrq4               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node embed-certs-042402 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node embed-certs-042402 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node embed-certs-042402 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node embed-certs-042402 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node embed-certs-042402 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node embed-certs-042402 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                node-controller  Node embed-certs-042402 event: Registered Node embed-certs-042402 in Controller
	
	
	==> dmesg <==
	[  +0.058977] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039851] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.982106] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.555406] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.618641] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.247473] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.060467] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066660] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.200258] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.197130] systemd-fstab-generator[681]: Ignoring "noauto" option for root device
	[  +0.317090] systemd-fstab-generator[711]: Ignoring "noauto" option for root device
	[  +4.259400] systemd-fstab-generator[802]: Ignoring "noauto" option for root device
	[  +0.060185] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.420129] systemd-fstab-generator[923]: Ignoring "noauto" option for root device
	[  +4.592050] kauditd_printk_skb: 97 callbacks suppressed
	[Oct30 19:46] kauditd_printk_skb: 85 callbacks suppressed
	[Oct30 19:50] kauditd_printk_skb: 3 callbacks suppressed
	[  +1.255624] systemd-fstab-generator[2605]: Ignoring "noauto" option for root device
	[  +4.574996] kauditd_printk_skb: 58 callbacks suppressed
	[  +1.470330] systemd-fstab-generator[2925]: Ignoring "noauto" option for root device
	[  +5.920576] systemd-fstab-generator[3081]: Ignoring "noauto" option for root device
	[  +0.025536] kauditd_printk_skb: 14 callbacks suppressed
	[Oct30 19:51] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [9d09f07a6c8f7bfb428c44c79f6ad7dbcc4cf844fbb81a263172664f89475996] <==
	{"level":"info","ts":"2024-10-30T19:50:43.040884Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-30T19:50:43.072561Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.235:2379"}
	{"level":"info","ts":"2024-10-30T19:50:43.034167Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-30T19:50:43.072799Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-30T19:50:43.067161Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d507c5522fd9f0c3","local-member-id":"5c9ce5d2cd86398f","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-30T19:50:43.073024Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-30T19:50:43.073130Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-30T20:00:43.461953Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":714}
	{"level":"info","ts":"2024-10-30T20:00:43.471969Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":714,"took":"9.39162ms","hash":1934636432,"current-db-size-bytes":2342912,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2342912,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-10-30T20:00:43.472128Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1934636432,"revision":714,"compact-revision":-1}
	{"level":"info","ts":"2024-10-30T20:05:43.472047Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":957}
	{"level":"info","ts":"2024-10-30T20:05:43.477478Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":957,"took":"4.548383ms","hash":2410507946,"current-db-size-bytes":2342912,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1609728,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-10-30T20:05:43.477576Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2410507946,"revision":957,"compact-revision":714}
	{"level":"info","ts":"2024-10-30T20:05:52.146982Z","caller":"traceutil/trace.go:171","msg":"trace[1412490872] linearizableReadLoop","detail":"{readStateIndex:1401; appliedIndex:1400; }","duration":"352.004119ms","start":"2024-10-30T20:05:51.794944Z","end":"2024-10-30T20:05:52.146948Z","steps":["trace[1412490872] 'read index received'  (duration: 351.64002ms)","trace[1412490872] 'applied index is now lower than readState.Index'  (duration: 363.452µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-30T20:05:52.147580Z","caller":"traceutil/trace.go:171","msg":"trace[33259838] transaction","detail":"{read_only:false; response_revision:1208; number_of_response:1; }","duration":"361.104835ms","start":"2024-10-30T20:05:51.786455Z","end":"2024-10-30T20:05:52.147560Z","steps":["trace[33259838] 'process raft request'  (duration: 360.23237ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-30T20:05:52.150136Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-30T20:05:51.786440Z","time spent":"362.956345ms","remote":"127.0.0.1:49674","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1207 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-10-30T20:05:52.147741Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"352.713383ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-30T20:05:52.150401Z","caller":"traceutil/trace.go:171","msg":"trace[1893060502] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1208; }","duration":"355.451161ms","start":"2024-10-30T20:05:51.794940Z","end":"2024-10-30T20:05:52.150391Z","steps":["trace[1893060502] 'agreement among raft nodes before linearized reading'  (duration: 352.680633ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-30T20:05:52.150462Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-30T20:05:51.794905Z","time spent":"355.547419ms","remote":"127.0.0.1:49504","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-10-30T20:05:52.441890Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"242.299506ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4147695268222548530 > lease_revoke:<id:398f92defaf939d0>","response":"size:28"}
	{"level":"info","ts":"2024-10-30T20:05:52.441986Z","caller":"traceutil/trace.go:171","msg":"trace[749628583] linearizableReadLoop","detail":"{readStateIndex:1402; appliedIndex:1401; }","duration":"289.033727ms","start":"2024-10-30T20:05:52.152939Z","end":"2024-10-30T20:05:52.441972Z","steps":["trace[749628583] 'read index received'  (duration: 46.477044ms)","trace[749628583] 'applied index is now lower than readState.Index'  (duration: 242.555755ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-30T20:05:52.442161Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"289.210572ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-30T20:05:52.442204Z","caller":"traceutil/trace.go:171","msg":"trace[1011664986] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1208; }","duration":"289.260425ms","start":"2024-10-30T20:05:52.152935Z","end":"2024-10-30T20:05:52.442196Z","steps":["trace[1011664986] 'agreement among raft nodes before linearized reading'  (duration: 289.134393ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-30T20:05:52.442315Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.403608ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-30T20:05:52.442439Z","caller":"traceutil/trace.go:171","msg":"trace[1655811497] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1208; }","duration":"110.53797ms","start":"2024-10-30T20:05:52.331891Z","end":"2024-10-30T20:05:52.442429Z","steps":["trace[1655811497] 'agreement among raft nodes before linearized reading'  (duration: 110.387072ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:06:20 up 20 min,  0 users,  load average: 1.04, 0.34, 0.17
	Linux embed-certs-042402 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1dfb8854a7f882cfd95b81fea92e7c1c4fdc0deeb1fdb0fc5a2c137e9c7bfe79] <==
	W1030 19:50:34.917966       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:34.927756       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:34.936582       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:34.937973       1 logging.go:55] [core] [Channel #18 SubChannel #19]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:34.967965       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:35.045802       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:35.070475       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:35.103530       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:35.113277       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:35.152356       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:35.187501       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:35.208716       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:35.322581       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:35.399923       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:35.465480       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:35.631534       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:38.310591       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:38.649421       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:38.835846       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:39.013431       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:39.095769       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:39.213536       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:39.493627       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:39.702526       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1030 19:50:39.709989       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [5b6cf7bbc2230a6b582b198b93c163531a59f107f74a5acdde62e4e4a633bcc0] <==
	I1030 20:01:45.984452       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1030 20:01:45.985477       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1030 20:03:45.985744       1 handler_proxy.go:99] no RequestInfo found in the context
	E1030 20:03:45.985937       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1030 20:03:45.985977       1 handler_proxy.go:99] no RequestInfo found in the context
	E1030 20:03:45.986011       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1030 20:03:45.987133       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1030 20:03:45.987190       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1030 20:05:44.984949       1 handler_proxy.go:99] no RequestInfo found in the context
	E1030 20:05:44.985375       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1030 20:05:45.987478       1 handler_proxy.go:99] no RequestInfo found in the context
	E1030 20:05:45.987576       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1030 20:05:45.987717       1 handler_proxy.go:99] no RequestInfo found in the context
	E1030 20:05:45.987937       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1030 20:05:45.988694       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1030 20:05:45.989953       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [d23071dddfccc3765d928cfe43b21550ee595642dfeba5139c84ff98b9cdd93d] <==
	I1030 20:00:52.486542       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1030 20:01:10.035187       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-042402"
	E1030 20:01:22.040558       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 20:01:22.495398       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 20:01:52.047487       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 20:01:52.504018       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1030 20:02:05.519018       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="134.838µs"
	I1030 20:02:20.515509       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="82.311µs"
	E1030 20:02:22.053970       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 20:02:22.512201       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 20:02:52.062641       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 20:02:52.520059       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 20:03:22.069285       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 20:03:22.529989       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 20:03:52.076775       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 20:03:52.537467       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 20:04:22.082869       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 20:04:22.545006       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 20:04:52.090365       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 20:04:52.553328       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 20:05:22.096947       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 20:05:22.561518       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 20:05:52.105503       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 20:05:52.570667       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1030 20:06:15.993630       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-042402"
	
	
	==> kube-proxy [09f26f80fafe43090b0174ce499bbe5a1e15cd0da11f6685eaf6a6faf4063eab] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1030 19:50:53.723570       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1030 19:50:53.744038       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.235"]
	E1030 19:50:53.744143       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1030 19:50:53.873959       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1030 19:50:53.873993       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1030 19:50:53.874029       1 server_linux.go:169] "Using iptables Proxier"
	I1030 19:50:53.877302       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1030 19:50:53.879299       1 server.go:483] "Version info" version="v1.31.2"
	I1030 19:50:53.879486       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1030 19:50:53.883582       1 config.go:199] "Starting service config controller"
	I1030 19:50:53.884494       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1030 19:50:53.884573       1 config.go:105] "Starting endpoint slice config controller"
	I1030 19:50:53.884579       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1030 19:50:53.893791       1 config.go:328] "Starting node config controller"
	I1030 19:50:53.893811       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1030 19:50:53.987208       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1030 19:50:53.987251       1 shared_informer.go:320] Caches are synced for service config
	I1030 19:50:53.994607       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1f4743cfe95c89eb417fe14d8ed4ea606b29e6e483908af0d689d1481f067287] <==
	E1030 19:50:44.967929       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1030 19:50:44.965818       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1030 19:50:44.967980       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1030 19:50:44.966020       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1030 19:50:44.968030       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1030 19:50:44.966163       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1030 19:50:44.968143       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1030 19:50:44.966932       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1030 19:50:44.968197       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E1030 19:50:44.968247       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1030 19:50:45.956940       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1030 19:50:45.957013       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1030 19:50:46.033423       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1030 19:50:46.033518       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1030 19:50:46.039888       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1030 19:50:46.039978       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1030 19:50:46.116800       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1030 19:50:46.116926       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1030 19:50:46.122530       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1030 19:50:46.122645       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1030 19:50:46.129415       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1030 19:50:46.129456       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1030 19:50:46.170404       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1030 19:50:46.170711       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1030 19:50:47.858553       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 30 20:05:13 embed-certs-042402 kubelet[2932]: E1030 20:05:13.501250    2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-6hrq4" podUID="a5bb1778-0a28-4649-a2ac-a5f0e1b810de"
	Oct 30 20:05:17 embed-certs-042402 kubelet[2932]: E1030 20:05:17.771643    2932 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318717771052783,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 20:05:17 embed-certs-042402 kubelet[2932]: E1030 20:05:17.772141    2932 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318717771052783,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 20:05:25 embed-certs-042402 kubelet[2932]: E1030 20:05:25.500726    2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-6hrq4" podUID="a5bb1778-0a28-4649-a2ac-a5f0e1b810de"
	Oct 30 20:05:27 embed-certs-042402 kubelet[2932]: E1030 20:05:27.774424    2932 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318727773973125,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 20:05:27 embed-certs-042402 kubelet[2932]: E1030 20:05:27.774704    2932 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318727773973125,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 20:05:37 embed-certs-042402 kubelet[2932]: E1030 20:05:37.776465    2932 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318737776195026,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 20:05:37 embed-certs-042402 kubelet[2932]: E1030 20:05:37.776494    2932 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318737776195026,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 20:05:39 embed-certs-042402 kubelet[2932]: E1030 20:05:39.502661    2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-6hrq4" podUID="a5bb1778-0a28-4649-a2ac-a5f0e1b810de"
	Oct 30 20:05:47 embed-certs-042402 kubelet[2932]: E1030 20:05:47.529059    2932 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 30 20:05:47 embed-certs-042402 kubelet[2932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 30 20:05:47 embed-certs-042402 kubelet[2932]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 30 20:05:47 embed-certs-042402 kubelet[2932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 30 20:05:47 embed-certs-042402 kubelet[2932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 30 20:05:47 embed-certs-042402 kubelet[2932]: E1030 20:05:47.778934    2932 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318747778490821,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 20:05:47 embed-certs-042402 kubelet[2932]: E1030 20:05:47.779018    2932 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318747778490821,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 20:05:52 embed-certs-042402 kubelet[2932]: E1030 20:05:52.500951    2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-6hrq4" podUID="a5bb1778-0a28-4649-a2ac-a5f0e1b810de"
	Oct 30 20:05:57 embed-certs-042402 kubelet[2932]: E1030 20:05:57.781319    2932 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318757780806343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 20:05:57 embed-certs-042402 kubelet[2932]: E1030 20:05:57.781360    2932 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318757780806343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 20:06:05 embed-certs-042402 kubelet[2932]: E1030 20:06:05.500536    2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-6hrq4" podUID="a5bb1778-0a28-4649-a2ac-a5f0e1b810de"
	Oct 30 20:06:07 embed-certs-042402 kubelet[2932]: E1030 20:06:07.782877    2932 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318767782468165,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 20:06:07 embed-certs-042402 kubelet[2932]: E1030 20:06:07.782924    2932 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318767782468165,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 20:06:16 embed-certs-042402 kubelet[2932]: E1030 20:06:16.500915    2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-6hrq4" podUID="a5bb1778-0a28-4649-a2ac-a5f0e1b810de"
	Oct 30 20:06:17 embed-certs-042402 kubelet[2932]: E1030 20:06:17.784845    2932 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318777784489810,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 20:06:17 embed-certs-042402 kubelet[2932]: E1030 20:06:17.784888    2932 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318777784489810,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [0e6cc7d4df0e8a817cf435a8e959a096091802af481b013e11aafaf9d1b46af8] <==
	I1030 19:50:55.134044       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1030 19:50:55.142778       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1030 19:50:55.143208       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1030 19:50:55.153026       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1030 19:50:55.153984       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-042402_fc026841-e592-4d89-8391-54aa6923c56d!
	I1030 19:50:55.157063       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c2a93d23-4155-4c88-9cb9-f90384df7a5c", APIVersion:"v1", ResourceVersion:"436", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-042402_fc026841-e592-4d89-8391-54aa6923c56d became leader
	I1030 19:50:55.254775       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-042402_fc026841-e592-4d89-8391-54aa6923c56d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-042402 -n embed-certs-042402
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-042402 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-6hrq4
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-042402 describe pod metrics-server-6867b74b74-6hrq4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-042402 describe pod metrics-server-6867b74b74-6hrq4: exit status 1 (62.570831ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-6hrq4" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-042402 describe pod metrics-server-6867b74b74-6hrq4: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (373.22s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (327.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-960512 -n no-preload-960512
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-10-30 20:05:49.557558384 +0000 UTC m=+6306.304741923
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-960512 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-960512 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.193µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-960512 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-960512 -n no-preload-960512
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-960512 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-960512 logs -n 25: (3.895365699s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-534248 sudo                                  | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-534248 sudo                                  | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-534248 sudo find                             | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-534248 sudo crio                             | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-534248                                       | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	| delete  | -p                                                     | disable-driver-mounts-113740 | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | disable-driver-mounts-113740                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-768989 | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:37 UTC |
	|         | default-k8s-diff-port-768989                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-960512             | no-preload-960512            | jenkins | v1.34.0 | 30 Oct 24 19:37 UTC | 30 Oct 24 19:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-960512                                   | no-preload-960512            | jenkins | v1.34.0 | 30 Oct 24 19:37 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-768989  | default-k8s-diff-port-768989 | jenkins | v1.34.0 | 30 Oct 24 19:38 UTC | 30 Oct 24 19:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-768989 | jenkins | v1.34.0 | 30 Oct 24 19:38 UTC |                     |
	|         | default-k8s-diff-port-768989                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-042402            | embed-certs-042402           | jenkins | v1.34.0 | 30 Oct 24 19:38 UTC | 30 Oct 24 19:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-042402                                  | embed-certs-042402           | jenkins | v1.34.0 | 30 Oct 24 19:38 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-516975        | old-k8s-version-516975       | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-960512                  | no-preload-960512            | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-768989       | default-k8s-diff-port-768989 | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-960512                                   | no-preload-960512            | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC | 30 Oct 24 19:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-042402                 | embed-certs-042402           | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-768989 | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC | 30 Oct 24 19:50 UTC |
	|         | default-k8s-diff-port-768989                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-042402                                  | embed-certs-042402           | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC | 30 Oct 24 19:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-516975                              | old-k8s-version-516975       | jenkins | v1.34.0 | 30 Oct 24 19:42 UTC | 30 Oct 24 19:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-516975             | old-k8s-version-516975       | jenkins | v1.34.0 | 30 Oct 24 19:42 UTC | 30 Oct 24 19:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-516975                              | old-k8s-version-516975       | jenkins | v1.34.0 | 30 Oct 24 19:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-516975                              | old-k8s-version-516975       | jenkins | v1.34.0 | 30 Oct 24 20:05 UTC | 30 Oct 24 20:05 UTC |
	| start   | -p newest-cni-467894 --memory=2200 --alsologtostderr   | newest-cni-467894            | jenkins | v1.34.0 | 30 Oct 24 20:05 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/30 20:05:20
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
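
For reference, the header format above is the standard klog/glog prefix: a severity letter, month and day, time with microseconds, thread id, source file and line, then the message. A minimal, illustrative Go parser for lines of that shape (not part of minikube; the field names are assumptions) could look like:

	package main

	import (
		"fmt"
		"regexp"
	)

	// klogLine matches the "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg" shape.
	var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

	func main() {
		line := "I1030 20:05:20.092087  453539 out.go:345] Setting OutFile to fd 1 ..."
		if m := klogLine.FindStringSubmatch(line); m != nil {
			fmt.Printf("severity=%s date=%s time=%s tid=%s file=%s:%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6], m[7])
		}
	}
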
	I1030 20:05:20.092087  453539 out.go:345] Setting OutFile to fd 1 ...
	I1030 20:05:20.092380  453539 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 20:05:20.092390  453539 out.go:358] Setting ErrFile to fd 2...
	I1030 20:05:20.092395  453539 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 20:05:20.092572  453539 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19883-381834/.minikube/bin
	I1030 20:05:20.093205  453539 out.go:352] Setting JSON to false
	I1030 20:05:20.094423  453539 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":13663,"bootTime":1730305057,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1030 20:05:20.094601  453539 start.go:139] virtualization: kvm guest
	I1030 20:05:20.097494  453539 out.go:177] * [newest-cni-467894] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1030 20:05:20.098711  453539 notify.go:220] Checking for updates...
	I1030 20:05:20.098731  453539 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 20:05:20.100076  453539 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 20:05:20.101244  453539 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 20:05:20.102430  453539 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 20:05:20.103667  453539 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1030 20:05:20.104798  453539 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 20:05:20.106400  453539 config.go:182] Loaded profile config "default-k8s-diff-port-768989": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 20:05:20.106556  453539 config.go:182] Loaded profile config "embed-certs-042402": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 20:05:20.106684  453539 config.go:182] Loaded profile config "no-preload-960512": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 20:05:20.106800  453539 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 20:05:20.147403  453539 out.go:177] * Using the kvm2 driver based on user configuration
	I1030 20:05:20.148580  453539 start.go:297] selected driver: kvm2
	I1030 20:05:20.148593  453539 start.go:901] validating driver "kvm2" against <nil>
	I1030 20:05:20.148607  453539 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 20:05:20.149357  453539 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 20:05:20.149436  453539 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19883-381834/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1030 20:05:20.165165  453539 install.go:137] /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1030 20:05:20.165211  453539 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W1030 20:05:20.165278  453539 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1030 20:05:20.165555  453539 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1030 20:05:20.165592  453539 cni.go:84] Creating CNI manager for ""
	I1030 20:05:20.165664  453539 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 20:05:20.165677  453539 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1030 20:05:20.165772  453539 start.go:340] cluster config:
	{Name:newest-cni-467894 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-467894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 20:05:20.165920  453539 iso.go:125] acquiring lock: {Name:mk4a5115b605a7a5e9069193daa664d721189792 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 20:05:20.167813  453539 out.go:177] * Starting "newest-cni-467894" primary control-plane node in "newest-cni-467894" cluster
	I1030 20:05:20.168932  453539 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 20:05:20.168962  453539 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1030 20:05:20.168971  453539 cache.go:56] Caching tarball of preloaded images
	I1030 20:05:20.169061  453539 preload.go:172] Found /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1030 20:05:20.169076  453539 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1030 20:05:20.169177  453539 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/newest-cni-467894/config.json ...
	I1030 20:05:20.169203  453539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/newest-cni-467894/config.json: {Name:mkd026f59b99883af92f5d990cc2e058d7b4716d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 20:05:20.169373  453539 start.go:360] acquireMachinesLock for newest-cni-467894: {Name:mk35e25f53fa8cfadb39ca0ecdccfc2b3fbe845b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 20:05:20.169407  453539 start.go:364] duration metric: took 18.947µs to acquireMachinesLock for "newest-cni-467894"
	I1030 20:05:20.169431  453539 start.go:93] Provisioning new machine with config: &{Name:newest-cni-467894 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-467894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 20:05:20.169526  453539 start.go:125] createHost starting for "" (driver="kvm2")
	I1030 20:05:20.171070  453539 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1030 20:05:20.171205  453539 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 20:05:20.171260  453539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 20:05:20.185720  453539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40359
	I1030 20:05:20.186225  453539 main.go:141] libmachine: () Calling .GetVersion
	I1030 20:05:20.186800  453539 main.go:141] libmachine: Using API Version  1
	I1030 20:05:20.186822  453539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 20:05:20.187251  453539 main.go:141] libmachine: () Calling .GetMachineName
	I1030 20:05:20.187473  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetMachineName
	I1030 20:05:20.187653  453539 main.go:141] libmachine: (newest-cni-467894) Calling .DriverName
	I1030 20:05:20.187871  453539 start.go:159] libmachine.API.Create for "newest-cni-467894" (driver="kvm2")
	I1030 20:05:20.187923  453539 client.go:168] LocalClient.Create starting
	I1030 20:05:20.187960  453539 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem
	I1030 20:05:20.188006  453539 main.go:141] libmachine: Decoding PEM data...
	I1030 20:05:20.188031  453539 main.go:141] libmachine: Parsing certificate...
	I1030 20:05:20.188101  453539 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem
	I1030 20:05:20.188136  453539 main.go:141] libmachine: Decoding PEM data...
	I1030 20:05:20.188157  453539 main.go:141] libmachine: Parsing certificate...
	I1030 20:05:20.188183  453539 main.go:141] libmachine: Running pre-create checks...
	I1030 20:05:20.188201  453539 main.go:141] libmachine: (newest-cni-467894) Calling .PreCreateCheck
	I1030 20:05:20.188576  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetConfigRaw
	I1030 20:05:20.188983  453539 main.go:141] libmachine: Creating machine...
	I1030 20:05:20.188998  453539 main.go:141] libmachine: (newest-cni-467894) Calling .Create
	I1030 20:05:20.189129  453539 main.go:141] libmachine: (newest-cni-467894) Creating KVM machine...
	I1030 20:05:20.190233  453539 main.go:141] libmachine: (newest-cni-467894) DBG | found existing default KVM network
	I1030 20:05:20.191499  453539 main.go:141] libmachine: (newest-cni-467894) DBG | I1030 20:05:20.191347  453562 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:50:4f:9f} reservation:<nil>}
	I1030 20:05:20.192598  453539 main.go:141] libmachine: (newest-cni-467894) DBG | I1030 20:05:20.192519  453562 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00028a970}
	I1030 20:05:20.192615  453539 main.go:141] libmachine: (newest-cni-467894) DBG | created network xml: 
	I1030 20:05:20.192626  453539 main.go:141] libmachine: (newest-cni-467894) DBG | <network>
	I1030 20:05:20.192641  453539 main.go:141] libmachine: (newest-cni-467894) DBG |   <name>mk-newest-cni-467894</name>
	I1030 20:05:20.192651  453539 main.go:141] libmachine: (newest-cni-467894) DBG |   <dns enable='no'/>
	I1030 20:05:20.192661  453539 main.go:141] libmachine: (newest-cni-467894) DBG |   
	I1030 20:05:20.192672  453539 main.go:141] libmachine: (newest-cni-467894) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I1030 20:05:20.192683  453539 main.go:141] libmachine: (newest-cni-467894) DBG |     <dhcp>
	I1030 20:05:20.192695  453539 main.go:141] libmachine: (newest-cni-467894) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I1030 20:05:20.192709  453539 main.go:141] libmachine: (newest-cni-467894) DBG |     </dhcp>
	I1030 20:05:20.192719  453539 main.go:141] libmachine: (newest-cni-467894) DBG |   </ip>
	I1030 20:05:20.192735  453539 main.go:141] libmachine: (newest-cni-467894) DBG |   
	I1030 20:05:20.192744  453539 main.go:141] libmachine: (newest-cni-467894) DBG | </network>
	I1030 20:05:20.192751  453539 main.go:141] libmachine: (newest-cni-467894) DBG | 
	I1030 20:05:20.198209  453539 main.go:141] libmachine: (newest-cni-467894) DBG | trying to create private KVM network mk-newest-cni-467894 192.168.50.0/24...
	I1030 20:05:20.267601  453539 main.go:141] libmachine: (newest-cni-467894) DBG | private KVM network mk-newest-cni-467894 192.168.50.0/24 created
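
The network.go lines above show the kvm2 driver skipping 192.168.39.0/24 (already taken by virbr1) and settling on the free 192.168.50.0/24 before defining the private network. As a rough sketch of that idea, and not the driver's actual network.go logic, one way to check whether a candidate /24 is already claimed by a local interface:

	package main

	import (
		"fmt"
		"net"
	)

	// subnetTaken reports whether any local interface address falls inside cidr.
	func subnetTaken(cidr string) (bool, error) {
		_, ipnet, err := net.ParseCIDR(cidr)
		if err != nil {
			return false, err
		}
		addrs, err := net.InterfaceAddrs()
		if err != nil {
			return false, err
		}
		for _, a := range addrs {
			if ipn, ok := a.(*net.IPNet); ok && ipnet.Contains(ipn.IP) {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		// Candidate private subnets; the driver walks a similar list and logs
		// "skipping subnet ... that is taken" for the ones already in use.
		for _, cidr := range []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24"} {
			taken, err := subnetTaken(cidr)
			if err != nil {
				panic(err)
			}
			fmt.Printf("%s taken=%v\n", cidr, taken)
		}
	}
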
	I1030 20:05:20.267662  453539 main.go:141] libmachine: (newest-cni-467894) Setting up store path in /home/jenkins/minikube-integration/19883-381834/.minikube/machines/newest-cni-467894 ...
	I1030 20:05:20.267684  453539 main.go:141] libmachine: (newest-cni-467894) Building disk image from file:///home/jenkins/minikube-integration/19883-381834/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1030 20:05:20.267696  453539 main.go:141] libmachine: (newest-cni-467894) DBG | I1030 20:05:20.267550  453562 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 20:05:20.267763  453539 main.go:141] libmachine: (newest-cni-467894) Downloading /home/jenkins/minikube-integration/19883-381834/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19883-381834/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1030 20:05:20.572347  453539 main.go:141] libmachine: (newest-cni-467894) DBG | I1030 20:05:20.572160  453562 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/newest-cni-467894/id_rsa...
	I1030 20:05:20.768855  453539 main.go:141] libmachine: (newest-cni-467894) DBG | I1030 20:05:20.768702  453562 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/newest-cni-467894/newest-cni-467894.rawdisk...
	I1030 20:05:20.768892  453539 main.go:141] libmachine: (newest-cni-467894) DBG | Writing magic tar header
	I1030 20:05:20.768909  453539 main.go:141] libmachine: (newest-cni-467894) DBG | Writing SSH key tar header
	I1030 20:05:20.768920  453539 main.go:141] libmachine: (newest-cni-467894) DBG | I1030 20:05:20.768844  453562 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19883-381834/.minikube/machines/newest-cni-467894 ...
	I1030 20:05:20.768948  453539 main.go:141] libmachine: (newest-cni-467894) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/newest-cni-467894
	I1030 20:05:20.769023  453539 main.go:141] libmachine: (newest-cni-467894) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube/machines/newest-cni-467894 (perms=drwx------)
	I1030 20:05:20.769060  453539 main.go:141] libmachine: (newest-cni-467894) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube/machines
	I1030 20:05:20.769076  453539 main.go:141] libmachine: (newest-cni-467894) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube/machines (perms=drwxr-xr-x)
	I1030 20:05:20.769112  453539 main.go:141] libmachine: (newest-cni-467894) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834/.minikube (perms=drwxr-xr-x)
	I1030 20:05:20.769125  453539 main.go:141] libmachine: (newest-cni-467894) Setting executable bit set on /home/jenkins/minikube-integration/19883-381834 (perms=drwxrwxr-x)
	I1030 20:05:20.769136  453539 main.go:141] libmachine: (newest-cni-467894) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 20:05:20.769149  453539 main.go:141] libmachine: (newest-cni-467894) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19883-381834
	I1030 20:05:20.769159  453539 main.go:141] libmachine: (newest-cni-467894) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1030 20:05:20.769172  453539 main.go:141] libmachine: (newest-cni-467894) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1030 20:05:20.769182  453539 main.go:141] libmachine: (newest-cni-467894) Creating domain...
	I1030 20:05:20.769195  453539 main.go:141] libmachine: (newest-cni-467894) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1030 20:05:20.769210  453539 main.go:141] libmachine: (newest-cni-467894) DBG | Checking permissions on dir: /home/jenkins
	I1030 20:05:20.769221  453539 main.go:141] libmachine: (newest-cni-467894) DBG | Checking permissions on dir: /home
	I1030 20:05:20.769230  453539 main.go:141] libmachine: (newest-cni-467894) DBG | Skipping /home - not owner
	I1030 20:05:20.770339  453539 main.go:141] libmachine: (newest-cni-467894) define libvirt domain using xml: 
	I1030 20:05:20.770363  453539 main.go:141] libmachine: (newest-cni-467894) <domain type='kvm'>
	I1030 20:05:20.770372  453539 main.go:141] libmachine: (newest-cni-467894)   <name>newest-cni-467894</name>
	I1030 20:05:20.770379  453539 main.go:141] libmachine: (newest-cni-467894)   <memory unit='MiB'>2200</memory>
	I1030 20:05:20.770387  453539 main.go:141] libmachine: (newest-cni-467894)   <vcpu>2</vcpu>
	I1030 20:05:20.770397  453539 main.go:141] libmachine: (newest-cni-467894)   <features>
	I1030 20:05:20.770409  453539 main.go:141] libmachine: (newest-cni-467894)     <acpi/>
	I1030 20:05:20.770418  453539 main.go:141] libmachine: (newest-cni-467894)     <apic/>
	I1030 20:05:20.770430  453539 main.go:141] libmachine: (newest-cni-467894)     <pae/>
	I1030 20:05:20.770439  453539 main.go:141] libmachine: (newest-cni-467894)     
	I1030 20:05:20.770447  453539 main.go:141] libmachine: (newest-cni-467894)   </features>
	I1030 20:05:20.770463  453539 main.go:141] libmachine: (newest-cni-467894)   <cpu mode='host-passthrough'>
	I1030 20:05:20.770475  453539 main.go:141] libmachine: (newest-cni-467894)   
	I1030 20:05:20.770496  453539 main.go:141] libmachine: (newest-cni-467894)   </cpu>
	I1030 20:05:20.770523  453539 main.go:141] libmachine: (newest-cni-467894)   <os>
	I1030 20:05:20.770546  453539 main.go:141] libmachine: (newest-cni-467894)     <type>hvm</type>
	I1030 20:05:20.770555  453539 main.go:141] libmachine: (newest-cni-467894)     <boot dev='cdrom'/>
	I1030 20:05:20.770560  453539 main.go:141] libmachine: (newest-cni-467894)     <boot dev='hd'/>
	I1030 20:05:20.770566  453539 main.go:141] libmachine: (newest-cni-467894)     <bootmenu enable='no'/>
	I1030 20:05:20.770572  453539 main.go:141] libmachine: (newest-cni-467894)   </os>
	I1030 20:05:20.770578  453539 main.go:141] libmachine: (newest-cni-467894)   <devices>
	I1030 20:05:20.770585  453539 main.go:141] libmachine: (newest-cni-467894)     <disk type='file' device='cdrom'>
	I1030 20:05:20.770592  453539 main.go:141] libmachine: (newest-cni-467894)       <source file='/home/jenkins/minikube-integration/19883-381834/.minikube/machines/newest-cni-467894/boot2docker.iso'/>
	I1030 20:05:20.770599  453539 main.go:141] libmachine: (newest-cni-467894)       <target dev='hdc' bus='scsi'/>
	I1030 20:05:20.770606  453539 main.go:141] libmachine: (newest-cni-467894)       <readonly/>
	I1030 20:05:20.770615  453539 main.go:141] libmachine: (newest-cni-467894)     </disk>
	I1030 20:05:20.770624  453539 main.go:141] libmachine: (newest-cni-467894)     <disk type='file' device='disk'>
	I1030 20:05:20.770654  453539 main.go:141] libmachine: (newest-cni-467894)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1030 20:05:20.770671  453539 main.go:141] libmachine: (newest-cni-467894)       <source file='/home/jenkins/minikube-integration/19883-381834/.minikube/machines/newest-cni-467894/newest-cni-467894.rawdisk'/>
	I1030 20:05:20.770679  453539 main.go:141] libmachine: (newest-cni-467894)       <target dev='hda' bus='virtio'/>
	I1030 20:05:20.770684  453539 main.go:141] libmachine: (newest-cni-467894)     </disk>
	I1030 20:05:20.770691  453539 main.go:141] libmachine: (newest-cni-467894)     <interface type='network'>
	I1030 20:05:20.770699  453539 main.go:141] libmachine: (newest-cni-467894)       <source network='mk-newest-cni-467894'/>
	I1030 20:05:20.770709  453539 main.go:141] libmachine: (newest-cni-467894)       <model type='virtio'/>
	I1030 20:05:20.770733  453539 main.go:141] libmachine: (newest-cni-467894)     </interface>
	I1030 20:05:20.770754  453539 main.go:141] libmachine: (newest-cni-467894)     <interface type='network'>
	I1030 20:05:20.770764  453539 main.go:141] libmachine: (newest-cni-467894)       <source network='default'/>
	I1030 20:05:20.770777  453539 main.go:141] libmachine: (newest-cni-467894)       <model type='virtio'/>
	I1030 20:05:20.770794  453539 main.go:141] libmachine: (newest-cni-467894)     </interface>
	I1030 20:05:20.770811  453539 main.go:141] libmachine: (newest-cni-467894)     <serial type='pty'>
	I1030 20:05:20.770824  453539 main.go:141] libmachine: (newest-cni-467894)       <target port='0'/>
	I1030 20:05:20.770834  453539 main.go:141] libmachine: (newest-cni-467894)     </serial>
	I1030 20:05:20.770845  453539 main.go:141] libmachine: (newest-cni-467894)     <console type='pty'>
	I1030 20:05:20.770856  453539 main.go:141] libmachine: (newest-cni-467894)       <target type='serial' port='0'/>
	I1030 20:05:20.770866  453539 main.go:141] libmachine: (newest-cni-467894)     </console>
	I1030 20:05:20.770877  453539 main.go:141] libmachine: (newest-cni-467894)     <rng model='virtio'>
	I1030 20:05:20.770890  453539 main.go:141] libmachine: (newest-cni-467894)       <backend model='random'>/dev/random</backend>
	I1030 20:05:20.770906  453539 main.go:141] libmachine: (newest-cni-467894)     </rng>
	I1030 20:05:20.770916  453539 main.go:141] libmachine: (newest-cni-467894)     
	I1030 20:05:20.770925  453539 main.go:141] libmachine: (newest-cni-467894)     
	I1030 20:05:20.770935  453539 main.go:141] libmachine: (newest-cni-467894)   </devices>
	I1030 20:05:20.770943  453539 main.go:141] libmachine: (newest-cni-467894) </domain>
	I1030 20:05:20.770978  453539 main.go:141] libmachine: (newest-cni-467894) 
	I1030 20:05:20.775604  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:4a:7e:b9 in network default
	I1030 20:05:20.776278  453539 main.go:141] libmachine: (newest-cni-467894) Ensuring networks are active...
	I1030 20:05:20.776299  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:20.776928  453539 main.go:141] libmachine: (newest-cni-467894) Ensuring network default is active
	I1030 20:05:20.777241  453539 main.go:141] libmachine: (newest-cni-467894) Ensuring network mk-newest-cni-467894 is active
	I1030 20:05:20.777737  453539 main.go:141] libmachine: (newest-cni-467894) Getting domain xml...
	I1030 20:05:20.778505  453539 main.go:141] libmachine: (newest-cni-467894) Creating domain...
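
The XML dump above is what gets handed to libvirt to define the VM. A hedged sketch of that define-and-start step using the libvirt Go bindings (assuming the libvirt.org/go/libvirt package and a hypothetical local XML file; this is illustrative, not the docker-machine-driver-kvm2 implementation):

	package main

	import (
		"fmt"
		"os"

		libvirt "libvirt.org/go/libvirt"
	)

	func main() {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		// Domain XML shaped like the one logged above (hypothetical file name).
		xml, err := os.ReadFile("newest-cni-467894.xml")
		if err != nil {
			panic(err)
		}
		dom, err := conn.DomainDefineXML(string(xml))
		if err != nil {
			panic(err)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil { // boots the defined domain
			panic(err)
		}
		fmt.Println("domain defined and started")
	}
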
	I1030 20:05:22.039148  453539 main.go:141] libmachine: (newest-cni-467894) Waiting to get IP...
	I1030 20:05:22.039894  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:22.040300  453539 main.go:141] libmachine: (newest-cni-467894) DBG | unable to find current IP address of domain newest-cni-467894 in network mk-newest-cni-467894
	I1030 20:05:22.040356  453539 main.go:141] libmachine: (newest-cni-467894) DBG | I1030 20:05:22.040281  453562 retry.go:31] will retry after 244.093691ms: waiting for machine to come up
	I1030 20:05:22.285888  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:22.286431  453539 main.go:141] libmachine: (newest-cni-467894) DBG | unable to find current IP address of domain newest-cni-467894 in network mk-newest-cni-467894
	I1030 20:05:22.286460  453539 main.go:141] libmachine: (newest-cni-467894) DBG | I1030 20:05:22.286387  453562 retry.go:31] will retry after 244.920318ms: waiting for machine to come up
	I1030 20:05:22.532563  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:22.532979  453539 main.go:141] libmachine: (newest-cni-467894) DBG | unable to find current IP address of domain newest-cni-467894 in network mk-newest-cni-467894
	I1030 20:05:22.533007  453539 main.go:141] libmachine: (newest-cni-467894) DBG | I1030 20:05:22.532922  453562 retry.go:31] will retry after 372.141697ms: waiting for machine to come up
	I1030 20:05:22.906216  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:22.906730  453539 main.go:141] libmachine: (newest-cni-467894) DBG | unable to find current IP address of domain newest-cni-467894 in network mk-newest-cni-467894
	I1030 20:05:22.906762  453539 main.go:141] libmachine: (newest-cni-467894) DBG | I1030 20:05:22.906676  453562 retry.go:31] will retry after 533.647793ms: waiting for machine to come up
	I1030 20:05:23.442358  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:23.442850  453539 main.go:141] libmachine: (newest-cni-467894) DBG | unable to find current IP address of domain newest-cni-467894 in network mk-newest-cni-467894
	I1030 20:05:23.442881  453539 main.go:141] libmachine: (newest-cni-467894) DBG | I1030 20:05:23.442790  453562 retry.go:31] will retry after 463.551871ms: waiting for machine to come up
	I1030 20:05:23.908562  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:23.909236  453539 main.go:141] libmachine: (newest-cni-467894) DBG | unable to find current IP address of domain newest-cni-467894 in network mk-newest-cni-467894
	I1030 20:05:23.909260  453539 main.go:141] libmachine: (newest-cni-467894) DBG | I1030 20:05:23.909186  453562 retry.go:31] will retry after 842.975006ms: waiting for machine to come up
	I1030 20:05:24.753739  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:24.754223  453539 main.go:141] libmachine: (newest-cni-467894) DBG | unable to find current IP address of domain newest-cni-467894 in network mk-newest-cni-467894
	I1030 20:05:24.754251  453539 main.go:141] libmachine: (newest-cni-467894) DBG | I1030 20:05:24.754165  453562 retry.go:31] will retry after 770.069968ms: waiting for machine to come up
	I1030 20:05:25.525962  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:25.526501  453539 main.go:141] libmachine: (newest-cni-467894) DBG | unable to find current IP address of domain newest-cni-467894 in network mk-newest-cni-467894
	I1030 20:05:25.526533  453539 main.go:141] libmachine: (newest-cni-467894) DBG | I1030 20:05:25.526415  453562 retry.go:31] will retry after 1.007563709s: waiting for machine to come up
	I1030 20:05:26.535683  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:26.536246  453539 main.go:141] libmachine: (newest-cni-467894) DBG | unable to find current IP address of domain newest-cni-467894 in network mk-newest-cni-467894
	I1030 20:05:26.536291  453539 main.go:141] libmachine: (newest-cni-467894) DBG | I1030 20:05:26.536186  453562 retry.go:31] will retry after 1.415814716s: waiting for machine to come up
	I1030 20:05:27.953605  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:27.954040  453539 main.go:141] libmachine: (newest-cni-467894) DBG | unable to find current IP address of domain newest-cni-467894 in network mk-newest-cni-467894
	I1030 20:05:27.954063  453539 main.go:141] libmachine: (newest-cni-467894) DBG | I1030 20:05:27.953979  453562 retry.go:31] will retry after 2.14180379s: waiting for machine to come up
	I1030 20:05:30.097797  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:30.098243  453539 main.go:141] libmachine: (newest-cni-467894) DBG | unable to find current IP address of domain newest-cni-467894 in network mk-newest-cni-467894
	I1030 20:05:30.098277  453539 main.go:141] libmachine: (newest-cni-467894) DBG | I1030 20:05:30.098203  453562 retry.go:31] will retry after 1.853406874s: waiting for machine to come up
	I1030 20:05:31.952891  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:31.953385  453539 main.go:141] libmachine: (newest-cni-467894) DBG | unable to find current IP address of domain newest-cni-467894 in network mk-newest-cni-467894
	I1030 20:05:31.953417  453539 main.go:141] libmachine: (newest-cni-467894) DBG | I1030 20:05:31.953320  453562 retry.go:31] will retry after 2.325362203s: waiting for machine to come up
	I1030 20:05:34.280629  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:34.281057  453539 main.go:141] libmachine: (newest-cni-467894) DBG | unable to find current IP address of domain newest-cni-467894 in network mk-newest-cni-467894
	I1030 20:05:34.281084  453539 main.go:141] libmachine: (newest-cni-467894) DBG | I1030 20:05:34.280978  453562 retry.go:31] will retry after 2.998007322s: waiting for machine to come up
	I1030 20:05:37.281201  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:37.281701  453539 main.go:141] libmachine: (newest-cni-467894) DBG | unable to find current IP address of domain newest-cni-467894 in network mk-newest-cni-467894
	I1030 20:05:37.281729  453539 main.go:141] libmachine: (newest-cni-467894) DBG | I1030 20:05:37.281667  453562 retry.go:31] will retry after 5.274221452s: waiting for machine to come up
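
The "will retry after ..." lines above come from a retry helper that sleeps for a growing, jittered interval between attempts while waiting for the VM's DHCP lease to appear. A generic sketch of that pattern (not the retry.go used in the minikube tree):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retry runs fn until it succeeds or attempts are exhausted, sleeping a
	// little longer (plus jitter) before each new attempt.
	func retry(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		tries := 0
		err := retry(5, 200*time.Millisecond, func() error {
			tries++
			if tries < 4 {
				return errors.New("waiting for machine to come up") // e.g. no DHCP lease yet
			}
			return nil
		})
		fmt.Println("result:", err)
	}
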
	I1030 20:05:42.558742  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:42.559232  453539 main.go:141] libmachine: (newest-cni-467894) Found IP for machine: 192.168.50.214
	I1030 20:05:42.559255  453539 main.go:141] libmachine: (newest-cni-467894) Reserving static IP address...
	I1030 20:05:42.559292  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has current primary IP address 192.168.50.214 and MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:42.559624  453539 main.go:141] libmachine: (newest-cni-467894) DBG | unable to find host DHCP lease matching {name: "newest-cni-467894", mac: "52:54:00:7b:de:75", ip: "192.168.50.214"} in network mk-newest-cni-467894
	I1030 20:05:42.635440  453539 main.go:141] libmachine: (newest-cni-467894) DBG | Getting to WaitForSSH function...
	I1030 20:05:42.635471  453539 main.go:141] libmachine: (newest-cni-467894) Reserved static IP address: 192.168.50.214
	I1030 20:05:42.635484  453539 main.go:141] libmachine: (newest-cni-467894) Waiting for SSH to be available...
	I1030 20:05:42.638262  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:42.638750  453539 main.go:141] libmachine: (newest-cni-467894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:de:75", ip: ""} in network mk-newest-cni-467894: {Iface:virbr2 ExpiryTime:2024-10-30 21:05:35 +0000 UTC Type:0 Mac:52:54:00:7b:de:75 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7b:de:75}
	I1030 20:05:42.638779  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined IP address 192.168.50.214 and MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:42.638912  453539 main.go:141] libmachine: (newest-cni-467894) DBG | Using SSH client type: external
	I1030 20:05:42.638942  453539 main.go:141] libmachine: (newest-cni-467894) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/newest-cni-467894/id_rsa (-rw-------)
	I1030 20:05:42.638973  453539 main.go:141] libmachine: (newest-cni-467894) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/newest-cni-467894/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 20:05:42.638989  453539 main.go:141] libmachine: (newest-cni-467894) DBG | About to run SSH command:
	I1030 20:05:42.639001  453539 main.go:141] libmachine: (newest-cni-467894) DBG | exit 0
	I1030 20:05:42.770626  453539 main.go:141] libmachine: (newest-cni-467894) DBG | SSH cmd err, output: <nil>: 
	I1030 20:05:42.770859  453539 main.go:141] libmachine: (newest-cni-467894) KVM machine creation complete!
	I1030 20:05:42.771281  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetConfigRaw
	I1030 20:05:42.771856  453539 main.go:141] libmachine: (newest-cni-467894) Calling .DriverName
	I1030 20:05:42.772078  453539 main.go:141] libmachine: (newest-cni-467894) Calling .DriverName
	I1030 20:05:42.772279  453539 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1030 20:05:42.772294  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetState
	I1030 20:05:42.773683  453539 main.go:141] libmachine: Detecting operating system of created instance...
	I1030 20:05:42.773698  453539 main.go:141] libmachine: Waiting for SSH to be available...
	I1030 20:05:42.773704  453539 main.go:141] libmachine: Getting to WaitForSSH function...
	I1030 20:05:42.773712  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHHostname
	I1030 20:05:42.776385  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:42.776827  453539 main.go:141] libmachine: (newest-cni-467894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:de:75", ip: ""} in network mk-newest-cni-467894: {Iface:virbr2 ExpiryTime:2024-10-30 21:05:35 +0000 UTC Type:0 Mac:52:54:00:7b:de:75 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:newest-cni-467894 Clientid:01:52:54:00:7b:de:75}
	I1030 20:05:42.776861  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined IP address 192.168.50.214 and MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:42.776961  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHPort
	I1030 20:05:42.777189  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHKeyPath
	I1030 20:05:42.777399  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHKeyPath
	I1030 20:05:42.777556  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHUsername
	I1030 20:05:42.777768  453539 main.go:141] libmachine: Using SSH client type: native
	I1030 20:05:42.778011  453539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I1030 20:05:42.778030  453539 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1030 20:05:42.897911  453539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 20:05:42.897940  453539 main.go:141] libmachine: Detecting the provisioner...
	I1030 20:05:42.897951  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHHostname
	I1030 20:05:42.901269  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:42.901771  453539 main.go:141] libmachine: (newest-cni-467894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:de:75", ip: ""} in network mk-newest-cni-467894: {Iface:virbr2 ExpiryTime:2024-10-30 21:05:35 +0000 UTC Type:0 Mac:52:54:00:7b:de:75 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:newest-cni-467894 Clientid:01:52:54:00:7b:de:75}
	I1030 20:05:42.901804  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined IP address 192.168.50.214 and MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:42.902010  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHPort
	I1030 20:05:42.902234  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHKeyPath
	I1030 20:05:42.902427  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHKeyPath
	I1030 20:05:42.902615  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHUsername
	I1030 20:05:42.902782  453539 main.go:141] libmachine: Using SSH client type: native
	I1030 20:05:42.902959  453539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I1030 20:05:42.902970  453539 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1030 20:05:43.019351  453539 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1030 20:05:43.019546  453539 main.go:141] libmachine: found compatible host: buildroot
	I1030 20:05:43.019566  453539 main.go:141] libmachine: Provisioning with buildroot...
	I1030 20:05:43.019578  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetMachineName
	I1030 20:05:43.019893  453539 buildroot.go:166] provisioning hostname "newest-cni-467894"
	I1030 20:05:43.019926  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetMachineName
	I1030 20:05:43.020144  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHHostname
	I1030 20:05:43.022941  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:43.023411  453539 main.go:141] libmachine: (newest-cni-467894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:de:75", ip: ""} in network mk-newest-cni-467894: {Iface:virbr2 ExpiryTime:2024-10-30 21:05:35 +0000 UTC Type:0 Mac:52:54:00:7b:de:75 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:newest-cni-467894 Clientid:01:52:54:00:7b:de:75}
	I1030 20:05:43.023440  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined IP address 192.168.50.214 and MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:43.023579  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHPort
	I1030 20:05:43.023774  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHKeyPath
	I1030 20:05:43.023929  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHKeyPath
	I1030 20:05:43.024077  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHUsername
	I1030 20:05:43.024265  453539 main.go:141] libmachine: Using SSH client type: native
	I1030 20:05:43.024452  453539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I1030 20:05:43.024467  453539 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-467894 && echo "newest-cni-467894" | sudo tee /etc/hostname
	I1030 20:05:43.154593  453539 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-467894
	
	I1030 20:05:43.154620  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHHostname
	I1030 20:05:43.157587  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:43.157916  453539 main.go:141] libmachine: (newest-cni-467894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:de:75", ip: ""} in network mk-newest-cni-467894: {Iface:virbr2 ExpiryTime:2024-10-30 21:05:35 +0000 UTC Type:0 Mac:52:54:00:7b:de:75 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:newest-cni-467894 Clientid:01:52:54:00:7b:de:75}
	I1030 20:05:43.157944  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined IP address 192.168.50.214 and MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:43.158123  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHPort
	I1030 20:05:43.158330  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHKeyPath
	I1030 20:05:43.158504  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHKeyPath
	I1030 20:05:43.158635  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHUsername
	I1030 20:05:43.158784  453539 main.go:141] libmachine: Using SSH client type: native
	I1030 20:05:43.158956  453539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I1030 20:05:43.158971  453539 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-467894' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-467894/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-467894' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 20:05:43.288356  453539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 20:05:43.288390  453539 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 20:05:43.288440  453539 buildroot.go:174] setting up certificates
	I1030 20:05:43.288465  453539 provision.go:84] configureAuth start
	I1030 20:05:43.288484  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetMachineName
	I1030 20:05:43.288786  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetIP
	I1030 20:05:43.291632  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:43.292030  453539 main.go:141] libmachine: (newest-cni-467894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:de:75", ip: ""} in network mk-newest-cni-467894: {Iface:virbr2 ExpiryTime:2024-10-30 21:05:35 +0000 UTC Type:0 Mac:52:54:00:7b:de:75 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:newest-cni-467894 Clientid:01:52:54:00:7b:de:75}
	I1030 20:05:43.292061  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined IP address 192.168.50.214 and MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:43.292187  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHHostname
	I1030 20:05:43.294677  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:43.295003  453539 main.go:141] libmachine: (newest-cni-467894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:de:75", ip: ""} in network mk-newest-cni-467894: {Iface:virbr2 ExpiryTime:2024-10-30 21:05:35 +0000 UTC Type:0 Mac:52:54:00:7b:de:75 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:newest-cni-467894 Clientid:01:52:54:00:7b:de:75}
	I1030 20:05:43.295026  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined IP address 192.168.50.214 and MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:43.295134  453539 provision.go:143] copyHostCerts
	I1030 20:05:43.295235  453539 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem, removing ...
	I1030 20:05:43.295264  453539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 20:05:43.295353  453539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 20:05:43.295476  453539 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem, removing ...
	I1030 20:05:43.295486  453539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 20:05:43.295526  453539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 20:05:43.295607  453539 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem, removing ...
	I1030 20:05:43.295616  453539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 20:05:43.295650  453539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 20:05:43.295720  453539 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.newest-cni-467894 san=[127.0.0.1 192.168.50.214 localhost minikube newest-cni-467894]
	I1030 20:05:43.410708  453539 provision.go:177] copyRemoteCerts
	I1030 20:05:43.410813  453539 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 20:05:43.410851  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHHostname
	I1030 20:05:43.413531  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:43.413843  453539 main.go:141] libmachine: (newest-cni-467894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:de:75", ip: ""} in network mk-newest-cni-467894: {Iface:virbr2 ExpiryTime:2024-10-30 21:05:35 +0000 UTC Type:0 Mac:52:54:00:7b:de:75 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:newest-cni-467894 Clientid:01:52:54:00:7b:de:75}
	I1030 20:05:43.413868  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined IP address 192.168.50.214 and MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:43.414043  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHPort
	I1030 20:05:43.414223  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHKeyPath
	I1030 20:05:43.414403  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHUsername
	I1030 20:05:43.414595  453539 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/newest-cni-467894/id_rsa Username:docker}
	I1030 20:05:43.504888  453539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1030 20:05:43.528995  453539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 20:05:43.555680  453539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1030 20:05:43.579973  453539 provision.go:87] duration metric: took 291.485159ms to configureAuth
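configureAuth, as logged above, issues a server certificate whose subject alternative names cover the loopback address, the guest IP, localhost, minikube, and the machine name, then copies ca.pem, server.pem, and server-key.pem into /etc/docker on the guest. The Go sketch below only illustrates producing a certificate with that SAN set using crypto/x509; it is self-signed for brevity, whereas the real step signs with the profile's CA key.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Illustrative only: self-sign a server cert whose SANs match the ones
	// listed in the provision log (IPs plus DNS names).
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-467894"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.214")},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-467894"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}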
	I1030 20:05:43.580017  453539 buildroot.go:189] setting minikube options for container-runtime
	I1030 20:05:43.580216  453539 config.go:182] Loaded profile config "newest-cni-467894": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 20:05:43.580325  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHHostname
	I1030 20:05:43.583120  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:43.583533  453539 main.go:141] libmachine: (newest-cni-467894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:de:75", ip: ""} in network mk-newest-cni-467894: {Iface:virbr2 ExpiryTime:2024-10-30 21:05:35 +0000 UTC Type:0 Mac:52:54:00:7b:de:75 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:newest-cni-467894 Clientid:01:52:54:00:7b:de:75}
	I1030 20:05:43.583574  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined IP address 192.168.50.214 and MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:43.583711  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHPort
	I1030 20:05:43.583902  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHKeyPath
	I1030 20:05:43.584073  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHKeyPath
	I1030 20:05:43.584212  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHUsername
	I1030 20:05:43.584440  453539 main.go:141] libmachine: Using SSH client type: native
	I1030 20:05:43.584610  453539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I1030 20:05:43.584626  453539 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 20:05:43.819653  453539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
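The drop-in written above hands the cluster service CIDR to CRI-O as an --insecure-registry option via /etc/sysconfig/crio.minikube and then restarts the service. A small Go sketch of assembling that command; crioSysconfigCommand and its single parameter are illustrative assumptions.

package main

import "fmt"

// crioSysconfigCommand builds a shell command that writes the sysconfig
// drop-in shown in the log above and restarts CRI-O. Sketch only.
func crioSysconfigCommand(serviceCIDR string) string {
	content := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", serviceCIDR)
	return fmt.Sprintf(`sudo mkdir -p /etc/sysconfig && printf %%s "
%s" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`, content)
}

func main() {
	fmt.Println(crioSysconfigCommand("10.96.0.0/12"))
}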
	I1030 20:05:43.819680  453539 main.go:141] libmachine: Checking connection to Docker...
	I1030 20:05:43.819691  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetURL
	I1030 20:05:43.820899  453539 main.go:141] libmachine: (newest-cni-467894) DBG | Using libvirt version 6000000
	I1030 20:05:43.823260  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:43.823683  453539 main.go:141] libmachine: (newest-cni-467894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:de:75", ip: ""} in network mk-newest-cni-467894: {Iface:virbr2 ExpiryTime:2024-10-30 21:05:35 +0000 UTC Type:0 Mac:52:54:00:7b:de:75 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:newest-cni-467894 Clientid:01:52:54:00:7b:de:75}
	I1030 20:05:43.823713  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined IP address 192.168.50.214 and MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:43.823875  453539 main.go:141] libmachine: Docker is up and running!
	I1030 20:05:43.823889  453539 main.go:141] libmachine: Reticulating splines...
	I1030 20:05:43.823897  453539 client.go:171] duration metric: took 23.635962373s to LocalClient.Create
	I1030 20:05:43.823926  453539 start.go:167] duration metric: took 23.636053827s to libmachine.API.Create "newest-cni-467894"
	I1030 20:05:43.823940  453539 start.go:293] postStartSetup for "newest-cni-467894" (driver="kvm2")
	I1030 20:05:43.823954  453539 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 20:05:43.823989  453539 main.go:141] libmachine: (newest-cni-467894) Calling .DriverName
	I1030 20:05:43.824283  453539 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 20:05:43.824328  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHHostname
	I1030 20:05:43.826703  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:43.827081  453539 main.go:141] libmachine: (newest-cni-467894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:de:75", ip: ""} in network mk-newest-cni-467894: {Iface:virbr2 ExpiryTime:2024-10-30 21:05:35 +0000 UTC Type:0 Mac:52:54:00:7b:de:75 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:newest-cni-467894 Clientid:01:52:54:00:7b:de:75}
	I1030 20:05:43.827101  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined IP address 192.168.50.214 and MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:43.827395  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHPort
	I1030 20:05:43.827615  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHKeyPath
	I1030 20:05:43.827768  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHUsername
	I1030 20:05:43.827922  453539 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/newest-cni-467894/id_rsa Username:docker}
	I1030 20:05:43.917062  453539 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 20:05:43.921285  453539 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 20:05:43.921309  453539 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 20:05:43.921362  453539 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 20:05:43.921470  453539 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 20:05:43.921605  453539 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 20:05:43.931059  453539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 20:05:43.956534  453539 start.go:296] duration metric: took 132.576264ms for postStartSetup
	I1030 20:05:43.956586  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetConfigRaw
	I1030 20:05:43.957274  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetIP
	I1030 20:05:43.960065  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:43.960500  453539 main.go:141] libmachine: (newest-cni-467894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:de:75", ip: ""} in network mk-newest-cni-467894: {Iface:virbr2 ExpiryTime:2024-10-30 21:05:35 +0000 UTC Type:0 Mac:52:54:00:7b:de:75 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:newest-cni-467894 Clientid:01:52:54:00:7b:de:75}
	I1030 20:05:43.960528  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined IP address 192.168.50.214 and MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:43.960790  453539 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/newest-cni-467894/config.json ...
	I1030 20:05:43.961026  453539 start.go:128] duration metric: took 23.791484506s to createHost
	I1030 20:05:43.961062  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHHostname
	I1030 20:05:43.964020  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:43.964415  453539 main.go:141] libmachine: (newest-cni-467894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:de:75", ip: ""} in network mk-newest-cni-467894: {Iface:virbr2 ExpiryTime:2024-10-30 21:05:35 +0000 UTC Type:0 Mac:52:54:00:7b:de:75 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:newest-cni-467894 Clientid:01:52:54:00:7b:de:75}
	I1030 20:05:43.964442  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined IP address 192.168.50.214 and MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:43.964541  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHPort
	I1030 20:05:43.964721  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHKeyPath
	I1030 20:05:43.964875  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHKeyPath
	I1030 20:05:43.965015  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHUsername
	I1030 20:05:43.965195  453539 main.go:141] libmachine: Using SSH client type: native
	I1030 20:05:43.965411  453539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I1030 20:05:43.965423  453539 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 20:05:44.078988  453539 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730318744.046556132
	
	I1030 20:05:44.079012  453539 fix.go:216] guest clock: 1730318744.046556132
	I1030 20:05:44.079020  453539 fix.go:229] Guest: 2024-10-30 20:05:44.046556132 +0000 UTC Remote: 2024-10-30 20:05:43.961043969 +0000 UTC m=+23.912058043 (delta=85.512163ms)
	I1030 20:05:44.079050  453539 fix.go:200] guest clock delta is within tolerance: 85.512163ms
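The guest clock check above runs `date +%s.%N` on the VM and compares the result against the host's wall clock, accepting the drift when it stays within a tolerance. The Go sketch below reproduces that comparison and the ~85ms delta from the log; the one-second tolerance constant is an assumption for illustration, not minikube's actual threshold.

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the output of `date +%s.%N` run on the guest and returns
// how far the guest clock is from the supplied local timestamp.
func clockDelta(guestDate string, local time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestDate), 64)
	if err != nil {
		return 0, err
	}
	sec := int64(secs)
	nsec := int64((secs - float64(sec)) * 1e9)
	return time.Unix(sec, nsec).Sub(local), nil
}

func main() {
	const tolerance = time.Second // assumed threshold for this sketch
	delta, err := clockDelta("1730318744.046556132\n", time.Unix(1730318743, 961043969))
	if err != nil {
		panic(err)
	}
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
	}
}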
	I1030 20:05:44.079058  453539 start.go:83] releasing machines lock for "newest-cni-467894", held for 23.909639284s
	I1030 20:05:44.079099  453539 main.go:141] libmachine: (newest-cni-467894) Calling .DriverName
	I1030 20:05:44.079419  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetIP
	I1030 20:05:44.081984  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:44.082319  453539 main.go:141] libmachine: (newest-cni-467894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:de:75", ip: ""} in network mk-newest-cni-467894: {Iface:virbr2 ExpiryTime:2024-10-30 21:05:35 +0000 UTC Type:0 Mac:52:54:00:7b:de:75 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:newest-cni-467894 Clientid:01:52:54:00:7b:de:75}
	I1030 20:05:44.082348  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined IP address 192.168.50.214 and MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:44.082463  453539 main.go:141] libmachine: (newest-cni-467894) Calling .DriverName
	I1030 20:05:44.083000  453539 main.go:141] libmachine: (newest-cni-467894) Calling .DriverName
	I1030 20:05:44.083214  453539 main.go:141] libmachine: (newest-cni-467894) Calling .DriverName
	I1030 20:05:44.083316  453539 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 20:05:44.083360  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHHostname
	I1030 20:05:44.083461  453539 ssh_runner.go:195] Run: cat /version.json
	I1030 20:05:44.083483  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHHostname
	I1030 20:05:44.085899  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:44.086046  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:44.086277  453539 main.go:141] libmachine: (newest-cni-467894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:de:75", ip: ""} in network mk-newest-cni-467894: {Iface:virbr2 ExpiryTime:2024-10-30 21:05:35 +0000 UTC Type:0 Mac:52:54:00:7b:de:75 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:newest-cni-467894 Clientid:01:52:54:00:7b:de:75}
	I1030 20:05:44.086308  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined IP address 192.168.50.214 and MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:44.086375  453539 main.go:141] libmachine: (newest-cni-467894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:de:75", ip: ""} in network mk-newest-cni-467894: {Iface:virbr2 ExpiryTime:2024-10-30 21:05:35 +0000 UTC Type:0 Mac:52:54:00:7b:de:75 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:newest-cni-467894 Clientid:01:52:54:00:7b:de:75}
	I1030 20:05:44.086397  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined IP address 192.168.50.214 and MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:44.086428  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHPort
	I1030 20:05:44.086604  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHPort
	I1030 20:05:44.086674  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHKeyPath
	I1030 20:05:44.086808  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHKeyPath
	I1030 20:05:44.086884  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHUsername
	I1030 20:05:44.086951  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetSSHUsername
	I1030 20:05:44.087011  453539 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/newest-cni-467894/id_rsa Username:docker}
	I1030 20:05:44.087053  453539 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/newest-cni-467894/id_rsa Username:docker}
	I1030 20:05:44.193644  453539 ssh_runner.go:195] Run: systemctl --version
	I1030 20:05:44.200029  453539 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 20:05:44.363197  453539 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 20:05:44.370063  453539 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 20:05:44.370134  453539 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 20:05:44.386215  453539 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
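Before configuring its own CNI, the run above disables any pre-existing bridge or podman configurations by renaming them to *.mk_disabled so the runtime ignores them. A local-filesystem Go sketch of the same idea; the real step performs this over SSH with find and mv, so the direct os.Rename here is an illustrative simplification.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Rename bridge/podman CNI configs so they are no longer picked up.
	matches, err := filepath.Glob("/etc/cni/net.d/*")
	if err != nil {
		panic(err)
	}
	for _, p := range matches {
		base := filepath.Base(p)
		if strings.HasSuffix(base, ".mk_disabled") {
			continue
		}
		if strings.Contains(base, "bridge") || strings.Contains(base, "podman") {
			if err := os.Rename(p, p+".mk_disabled"); err != nil {
				fmt.Println("rename failed:", err)
				continue
			}
			fmt.Println("disabled", p)
		}
	}
}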
	I1030 20:05:44.386240  453539 start.go:495] detecting cgroup driver to use...
	I1030 20:05:44.386316  453539 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 20:05:44.403378  453539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 20:05:44.417282  453539 docker.go:217] disabling cri-docker service (if available) ...
	I1030 20:05:44.417337  453539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 20:05:44.430334  453539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 20:05:44.443926  453539 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 20:05:44.560878  453539 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 20:05:44.706110  453539 docker.go:233] disabling docker service ...
	I1030 20:05:44.706186  453539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 20:05:44.723068  453539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 20:05:44.736144  453539 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 20:05:44.877075  453539 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 20:05:45.004860  453539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 20:05:45.019875  453539 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 20:05:45.039353  453539 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1030 20:05:45.039441  453539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 20:05:45.050543  453539 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 20:05:45.050602  453539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 20:05:45.061610  453539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 20:05:45.072334  453539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 20:05:45.083199  453539 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 20:05:45.094419  453539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 20:05:45.105368  453539 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 20:05:45.124123  453539 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
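The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10, force the cgroupfs cgroup manager, and re-add conmon_cgroup = "pod" next to it. The Go sketch below applies equivalent edits to an in-memory sample; the starting file contents are an assumption, not the VM's real drop-in.

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Pin the pause image, as done by the first sed command above.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	// Drop any existing conmon_cgroup line.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).
		ReplaceAllString(conf, "")
	// Switch to cgroupfs and re-add conmon_cgroup = "pod" right after it.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}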
	I1030 20:05:45.135934  453539 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 20:05:45.145485  453539 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 20:05:45.145542  453539 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 20:05:45.160124  453539 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
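The netfilter check above tolerates a missing bridge sysctl: when /proc/sys/net/bridge/bridge-nf-call-iptables cannot be read it loads br_netfilter and then enables IPv4 forwarding. A Go sketch of that fallback; it requires root to run for real, and the direct exec/WriteFile calls are illustrative rather than minikube's ssh_runner.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(key); err != nil {
		// Sysctl not present yet, so load the kernel module first.
		fmt.Println("bridge netfilter sysctl missing, loading br_netfilter:", err)
		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe failed: %v\n%s", err, out)
			return
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward` in the log.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
		fmt.Println("could not enable ip_forward (needs root):", err)
	}
}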
	I1030 20:05:45.170745  453539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 20:05:45.301948  453539 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1030 20:05:45.397017  453539 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 20:05:45.397096  453539 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 20:05:45.402390  453539 start.go:563] Will wait 60s for crictl version
	I1030 20:05:45.402460  453539 ssh_runner.go:195] Run: which crictl
	I1030 20:05:45.406885  453539 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 20:05:45.444290  453539 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1030 20:05:45.444361  453539 ssh_runner.go:195] Run: crio --version
	I1030 20:05:45.473332  453539 ssh_runner.go:195] Run: crio --version
	I1030 20:05:45.503254  453539 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1030 20:05:45.504412  453539 main.go:141] libmachine: (newest-cni-467894) Calling .GetIP
	I1030 20:05:45.507262  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:45.507663  453539 main.go:141] libmachine: (newest-cni-467894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:de:75", ip: ""} in network mk-newest-cni-467894: {Iface:virbr2 ExpiryTime:2024-10-30 21:05:35 +0000 UTC Type:0 Mac:52:54:00:7b:de:75 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:newest-cni-467894 Clientid:01:52:54:00:7b:de:75}
	I1030 20:05:45.507686  453539 main.go:141] libmachine: (newest-cni-467894) DBG | domain newest-cni-467894 has defined IP address 192.168.50.214 and MAC address 52:54:00:7b:de:75 in network mk-newest-cni-467894
	I1030 20:05:45.507910  453539 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1030 20:05:45.512091  453539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 20:05:45.527836  453539 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	
	
	==> CRI-O <==
	Oct 30 20:05:50 no-preload-960512 crio[721]: time="2024-10-30 20:05:50.203509317Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318750203476259,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=121d51cd-0339-4753-b860-592afd2aaa4d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:05:50 no-preload-960512 crio[721]: time="2024-10-30 20:05:50.204101653Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=87b8dc68-476d-45d5-b0e0-3945d0977f53 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:05:50 no-preload-960512 crio[721]: time="2024-10-30 20:05:50.204183012Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=87b8dc68-476d-45d5-b0e0-3945d0977f53 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:05:50 no-preload-960512 crio[721]: time="2024-10-30 20:05:50.204696654Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4,PodSandboxId:240ad66de29e4da82a84da6ceac85db98899cf65d7bfcb0cac535b180954104e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730317644209067552,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4637a77-26ab-4013-a705-08317c00dd3b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd,PodSandboxId:905e2e4bccb1efef5ab5b4a2e815bcb836bfb252394e291d861a65b3c9e1ebb9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730317629088186270,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6cdl4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95a0a01a-b0ea-4e0c-ac44-b7f7a486e4e0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a35c00abc76a906329e047101fdcfc6322255f2522936436a04a61c5a437350,PodSandboxId:bc2686b87bfb034ff55f75849a047c20ecc67ca72fa6a9036d7aca8a9531b108,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730317626731561870,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernet
es.pod.uid: b4c64bf2-4452-4ab5-b98b-1dd7d09f7593,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9,PodSandboxId:11e7842c569eb6ff13ab2aca09a0dade99b55777b8d28f7dcfbb39a287cfce51,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730317613382715861,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fxqqc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58db3fab-21e3-41b7-99
f9-46ba3081db97,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef,PodSandboxId:240ad66de29e4da82a84da6ceac85db98899cf65d7bfcb0cac535b180954104e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730317613373109772,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4637a77-26ab-4013-a705-08317c00dd
3b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889,PodSandboxId:7b5242e0110ef727856ada68f90533bcf0d36efb3c69afdb31057081dbdea6c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730317609751228269,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-960512,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b115e77c417744c5175c6c827124174,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c,PodSandboxId:407dfa6f98c77d3b75e1dcb534d2096dd76c536a19b35603677f12723621d91b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730317609733080230,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-960512,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1867e4cc33229323cfa5fd13d0f2a
3ac,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67,PodSandboxId:e8f26a4bb41da19d71bd88ca95b9a4400748dfd5aef733683df233b2a8ba77c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730317609720004398,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-960512,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26b3e1e18c6591f91c36d057e7108ea3,},Annotations:map[string]string{io.kube
rnetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4,PodSandboxId:7e4eee7aa27bc1d777ffc0cf5b853ef7c30c9bf1554f1a8b7cc75ed1bfa21bf8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730317609628548238,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-960512,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22d62ca0f1f2e6061b066d46b0ce266c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=87b8dc68-476d-45d5-b0e0-3945d0977f53 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:05:50 no-preload-960512 crio[721]: time="2024-10-30 20:05:50.246607910Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2231b971-fb76-4d7e-b12b-7ed1acff07fb name=/runtime.v1.RuntimeService/Version
	Oct 30 20:05:50 no-preload-960512 crio[721]: time="2024-10-30 20:05:50.246684628Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2231b971-fb76-4d7e-b12b-7ed1acff07fb name=/runtime.v1.RuntimeService/Version
	Oct 30 20:05:50 no-preload-960512 crio[721]: time="2024-10-30 20:05:50.248173153Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4de6a9e1-43ce-4312-b23f-8d62c07d8644 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:05:50 no-preload-960512 crio[721]: time="2024-10-30 20:05:50.248646907Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318750248620494,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4de6a9e1-43ce-4312-b23f-8d62c07d8644 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:05:50 no-preload-960512 crio[721]: time="2024-10-30 20:05:50.249185458Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7aa2894e-9a68-4fba-bfc2-d83cb8dad476 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:05:50 no-preload-960512 crio[721]: time="2024-10-30 20:05:50.249289823Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7aa2894e-9a68-4fba-bfc2-d83cb8dad476 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:05:50 no-preload-960512 crio[721]: time="2024-10-30 20:05:50.249508923Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4,PodSandboxId:240ad66de29e4da82a84da6ceac85db98899cf65d7bfcb0cac535b180954104e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730317644209067552,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4637a77-26ab-4013-a705-08317c00dd3b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd,PodSandboxId:905e2e4bccb1efef5ab5b4a2e815bcb836bfb252394e291d861a65b3c9e1ebb9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730317629088186270,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6cdl4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95a0a01a-b0ea-4e0c-ac44-b7f7a486e4e0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a35c00abc76a906329e047101fdcfc6322255f2522936436a04a61c5a437350,PodSandboxId:bc2686b87bfb034ff55f75849a047c20ecc67ca72fa6a9036d7aca8a9531b108,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730317626731561870,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernet
es.pod.uid: b4c64bf2-4452-4ab5-b98b-1dd7d09f7593,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9,PodSandboxId:11e7842c569eb6ff13ab2aca09a0dade99b55777b8d28f7dcfbb39a287cfce51,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730317613382715861,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fxqqc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58db3fab-21e3-41b7-99
f9-46ba3081db97,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef,PodSandboxId:240ad66de29e4da82a84da6ceac85db98899cf65d7bfcb0cac535b180954104e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730317613373109772,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4637a77-26ab-4013-a705-08317c00dd
3b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889,PodSandboxId:7b5242e0110ef727856ada68f90533bcf0d36efb3c69afdb31057081dbdea6c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730317609751228269,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-960512,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b115e77c417744c5175c6c827124174,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c,PodSandboxId:407dfa6f98c77d3b75e1dcb534d2096dd76c536a19b35603677f12723621d91b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730317609733080230,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-960512,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1867e4cc33229323cfa5fd13d0f2a
3ac,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67,PodSandboxId:e8f26a4bb41da19d71bd88ca95b9a4400748dfd5aef733683df233b2a8ba77c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730317609720004398,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-960512,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26b3e1e18c6591f91c36d057e7108ea3,},Annotations:map[string]string{io.kube
rnetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4,PodSandboxId:7e4eee7aa27bc1d777ffc0cf5b853ef7c30c9bf1554f1a8b7cc75ed1bfa21bf8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730317609628548238,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-960512,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22d62ca0f1f2e6061b066d46b0ce266c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7aa2894e-9a68-4fba-bfc2-d83cb8dad476 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:05:50 no-preload-960512 crio[721]: time="2024-10-30 20:05:50.293597474Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bf30aac9-aacb-43ad-97b4-b92ca66e1742 name=/runtime.v1.RuntimeService/Version
	Oct 30 20:05:50 no-preload-960512 crio[721]: time="2024-10-30 20:05:50.293720419Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bf30aac9-aacb-43ad-97b4-b92ca66e1742 name=/runtime.v1.RuntimeService/Version
	Oct 30 20:05:50 no-preload-960512 crio[721]: time="2024-10-30 20:05:50.294942535Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=81a5828d-a8dd-446e-815f-fe646ebdc315 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:05:50 no-preload-960512 crio[721]: time="2024-10-30 20:05:50.295705247Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318750295680244,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=81a5828d-a8dd-446e-815f-fe646ebdc315 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:05:50 no-preload-960512 crio[721]: time="2024-10-30 20:05:50.296444378Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e5965187-6bc5-43d5-a457-d2068e6f8ad5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:05:50 no-preload-960512 crio[721]: time="2024-10-30 20:05:50.296527061Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e5965187-6bc5-43d5-a457-d2068e6f8ad5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:05:50 no-preload-960512 crio[721]: time="2024-10-30 20:05:50.296826762Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4,PodSandboxId:240ad66de29e4da82a84da6ceac85db98899cf65d7bfcb0cac535b180954104e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730317644209067552,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4637a77-26ab-4013-a705-08317c00dd3b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd,PodSandboxId:905e2e4bccb1efef5ab5b4a2e815bcb836bfb252394e291d861a65b3c9e1ebb9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730317629088186270,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6cdl4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95a0a01a-b0ea-4e0c-ac44-b7f7a486e4e0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a35c00abc76a906329e047101fdcfc6322255f2522936436a04a61c5a437350,PodSandboxId:bc2686b87bfb034ff55f75849a047c20ecc67ca72fa6a9036d7aca8a9531b108,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730317626731561870,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernet
es.pod.uid: b4c64bf2-4452-4ab5-b98b-1dd7d09f7593,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9,PodSandboxId:11e7842c569eb6ff13ab2aca09a0dade99b55777b8d28f7dcfbb39a287cfce51,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730317613382715861,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fxqqc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58db3fab-21e3-41b7-99
f9-46ba3081db97,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef,PodSandboxId:240ad66de29e4da82a84da6ceac85db98899cf65d7bfcb0cac535b180954104e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730317613373109772,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4637a77-26ab-4013-a705-08317c00dd
3b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889,PodSandboxId:7b5242e0110ef727856ada68f90533bcf0d36efb3c69afdb31057081dbdea6c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730317609751228269,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-960512,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b115e77c417744c5175c6c827124174,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c,PodSandboxId:407dfa6f98c77d3b75e1dcb534d2096dd76c536a19b35603677f12723621d91b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730317609733080230,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-960512,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1867e4cc33229323cfa5fd13d0f2a
3ac,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67,PodSandboxId:e8f26a4bb41da19d71bd88ca95b9a4400748dfd5aef733683df233b2a8ba77c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730317609720004398,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-960512,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26b3e1e18c6591f91c36d057e7108ea3,},Annotations:map[string]string{io.kube
rnetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4,PodSandboxId:7e4eee7aa27bc1d777ffc0cf5b853ef7c30c9bf1554f1a8b7cc75ed1bfa21bf8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730317609628548238,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-960512,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22d62ca0f1f2e6061b066d46b0ce266c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e5965187-6bc5-43d5-a457-d2068e6f8ad5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:05:50 no-preload-960512 crio[721]: time="2024-10-30 20:05:50.339226431Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=206b8b35-f189-453c-97dc-f7ad3e6e5f96 name=/runtime.v1.RuntimeService/Version
	Oct 30 20:05:50 no-preload-960512 crio[721]: time="2024-10-30 20:05:50.339367637Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=206b8b35-f189-453c-97dc-f7ad3e6e5f96 name=/runtime.v1.RuntimeService/Version
	Oct 30 20:05:50 no-preload-960512 crio[721]: time="2024-10-30 20:05:50.340719470Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=178ba372-63bc-4ab4-ac7b-f198a334115b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:05:50 no-preload-960512 crio[721]: time="2024-10-30 20:05:50.341098398Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318750341078214,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=178ba372-63bc-4ab4-ac7b-f198a334115b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:05:50 no-preload-960512 crio[721]: time="2024-10-30 20:05:50.341826026Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e8e057eb-8045-43b9-a36f-c6a00ed0cb69 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:05:50 no-preload-960512 crio[721]: time="2024-10-30 20:05:50.341903590Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e8e057eb-8045-43b9-a36f-c6a00ed0cb69 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:05:50 no-preload-960512 crio[721]: time="2024-10-30 20:05:50.342200872Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4,PodSandboxId:240ad66de29e4da82a84da6ceac85db98899cf65d7bfcb0cac535b180954104e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730317644209067552,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4637a77-26ab-4013-a705-08317c00dd3b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd,PodSandboxId:905e2e4bccb1efef5ab5b4a2e815bcb836bfb252394e291d861a65b3c9e1ebb9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730317629088186270,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6cdl4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95a0a01a-b0ea-4e0c-ac44-b7f7a486e4e0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a35c00abc76a906329e047101fdcfc6322255f2522936436a04a61c5a437350,PodSandboxId:bc2686b87bfb034ff55f75849a047c20ecc67ca72fa6a9036d7aca8a9531b108,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730317626731561870,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernet
es.pod.uid: b4c64bf2-4452-4ab5-b98b-1dd7d09f7593,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9,PodSandboxId:11e7842c569eb6ff13ab2aca09a0dade99b55777b8d28f7dcfbb39a287cfce51,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730317613382715861,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fxqqc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58db3fab-21e3-41b7-99
f9-46ba3081db97,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef,PodSandboxId:240ad66de29e4da82a84da6ceac85db98899cf65d7bfcb0cac535b180954104e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730317613373109772,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4637a77-26ab-4013-a705-08317c00dd
3b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889,PodSandboxId:7b5242e0110ef727856ada68f90533bcf0d36efb3c69afdb31057081dbdea6c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730317609751228269,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-960512,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b115e77c417744c5175c6c827124174,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c,PodSandboxId:407dfa6f98c77d3b75e1dcb534d2096dd76c536a19b35603677f12723621d91b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730317609733080230,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-960512,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1867e4cc33229323cfa5fd13d0f2a
3ac,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67,PodSandboxId:e8f26a4bb41da19d71bd88ca95b9a4400748dfd5aef733683df233b2a8ba77c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730317609720004398,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-960512,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26b3e1e18c6591f91c36d057e7108ea3,},Annotations:map[string]string{io.kube
rnetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4,PodSandboxId:7e4eee7aa27bc1d777ffc0cf5b853ef7c30c9bf1554f1a8b7cc75ed1bfa21bf8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730317609628548238,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-960512,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22d62ca0f1f2e6061b066d46b0ce266c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e8e057eb-8045-43b9-a36f-c6a00ed0cb69 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	822348d485756       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Running             storage-provisioner       3                   240ad66de29e4       storage-provisioner
	1b9bfc1573170       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      18 minutes ago      Running             coredns                   1                   905e2e4bccb1e       coredns-7c65d6cfc9-6cdl4
	0a35c00abc76a       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   18 minutes ago      Running             busybox                   1                   bc2686b87bfb0       busybox
	0621c8e7bb77b       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      18 minutes ago      Running             kube-proxy                1                   11e7842c569eb       kube-proxy-fxqqc
	de9271f5ab996       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Exited              storage-provisioner       2                   240ad66de29e4       storage-provisioner
	2873bfc8ed2a7       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      19 minutes ago      Running             kube-scheduler            1                   7b5242e0110ef       kube-scheduler-no-preload-960512
	cf0541a4e5844       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      19 minutes ago      Running             kube-controller-manager   1                   407dfa6f98c77       kube-controller-manager-no-preload-960512
	ace7f40d51794       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      19 minutes ago      Running             etcd                      1                   e8f26a4bb41da       etcd-no-preload-960512
	990c5503542eb       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      19 minutes ago      Running             kube-apiserver            1                   7e4eee7aa27bc       kube-apiserver-no-preload-960512
	
	
	==> coredns [1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:51944 - 62700 "HINFO IN 7475402381862816469.7922778664981946274. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009720795s
	
	
	==> describe nodes <==
	Name:               no-preload-960512
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-960512
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0
	                    minikube.k8s.io/name=no-preload-960512
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_30T19_37_00_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Oct 2024 19:36:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-960512
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Oct 2024 20:05:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Oct 2024 20:02:41 +0000   Wed, 30 Oct 2024 19:36:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Oct 2024 20:02:41 +0000   Wed, 30 Oct 2024 19:36:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Oct 2024 20:02:41 +0000   Wed, 30 Oct 2024 19:36:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Oct 2024 20:02:41 +0000   Wed, 30 Oct 2024 19:47:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.132
	  Hostname:    no-preload-960512
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fe7534de72464b218fe452cd800b546e
	  System UUID:                fe7534de-7246-4b21-8fe4-52cd800b546e
	  Boot ID:                    d13e56f1-b6ef-459e-b3be-c1a3c1051072
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7c65d6cfc9-6cdl4                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-no-preload-960512                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-no-preload-960512             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-no-preload-960512    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-fxqqc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-no-preload-960512             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-6867b74b74-72bb5              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 18m                kube-proxy       
	  Normal  NodeHasSufficientPID     28m                kubelet          Node no-preload-960512 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node no-preload-960512 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node no-preload-960512 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeReady                28m                kubelet          Node no-preload-960512 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-960512 event: Registered Node no-preload-960512 in Controller
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node no-preload-960512 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node no-preload-960512 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node no-preload-960512 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18m                node-controller  Node no-preload-960512 event: Registered Node no-preload-960512 in Controller
	
	
	==> dmesg <==
	[Oct30 19:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054950] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.048523] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.163341] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.696109] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.607546] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.883497] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.066386] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062674] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.177760] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.132878] systemd-fstab-generator[679]: Ignoring "noauto" option for root device
	[  +0.288241] systemd-fstab-generator[712]: Ignoring "noauto" option for root device
	[ +16.140969] systemd-fstab-generator[1320]: Ignoring "noauto" option for root device
	[  +0.061533] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.461584] systemd-fstab-generator[1445]: Ignoring "noauto" option for root device
	[  +4.562829] kauditd_printk_skb: 94 callbacks suppressed
	[  +4.437269] systemd-fstab-generator[2069]: Ignoring "noauto" option for root device
	[Oct30 19:47] kauditd_printk_skb: 58 callbacks suppressed
	[  +5.807478] kauditd_printk_skb: 18 callbacks suppressed
	[ +17.476641] kauditd_printk_skb: 18 callbacks suppressed
	
	
	==> etcd [ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67] <==
	{"level":"info","ts":"2024-10-30T19:46:51.369466Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a7da7c7e26779cb7 received MsgVoteResp from a7da7c7e26779cb7 at term 3"}
	{"level":"info","ts":"2024-10-30T19:46:51.369478Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a7da7c7e26779cb7 became leader at term 3"}
	{"level":"info","ts":"2024-10-30T19:46:51.369516Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a7da7c7e26779cb7 elected leader a7da7c7e26779cb7 at term 3"}
	{"level":"info","ts":"2024-10-30T19:46:51.381499Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"a7da7c7e26779cb7","local-member-attributes":"{Name:no-preload-960512 ClientURLs:[https://192.168.72.132:2379]}","request-path":"/0/members/a7da7c7e26779cb7/attributes","cluster-id":"146bd9643c3d2907","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-30T19:46:51.381703Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-30T19:46:51.382205Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-30T19:46:51.383458Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-30T19:46:51.384685Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.132:2379"}
	{"level":"info","ts":"2024-10-30T19:46:51.385620Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-30T19:46:51.386850Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-30T19:46:51.386956Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-30T19:46:51.387000Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-30T19:56:51.417802Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":849}
	{"level":"info","ts":"2024-10-30T19:56:51.429602Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":849,"took":"10.913951ms","hash":1616802537,"current-db-size-bytes":2805760,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":2805760,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2024-10-30T19:56:51.429726Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1616802537,"revision":849,"compact-revision":-1}
	{"level":"info","ts":"2024-10-30T20:01:51.424568Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1091}
	{"level":"info","ts":"2024-10-30T20:01:51.428890Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1091,"took":"3.759744ms","hash":2305068846,"current-db-size-bytes":2805760,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":1630208,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-10-30T20:01:51.428981Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2305068846,"revision":1091,"compact-revision":849}
	{"level":"warn","ts":"2024-10-30T20:05:51.489934Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"347.436199ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-30T20:05:51.490171Z","caller":"traceutil/trace.go:171","msg":"trace[285321523] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1529; }","duration":"347.758815ms","start":"2024-10-30T20:05:51.142383Z","end":"2024-10-30T20:05:51.490142Z","steps":["trace[285321523] 'range keys from in-memory index tree'  (duration: 347.365924ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-30T20:05:51.490280Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-30T20:05:51.142339Z","time spent":"347.879848ms","remote":"127.0.0.1:52910","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-10-30T20:05:51.619800Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.837663ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11292656076986359785 > lease_revoke:<id:1cb792def76d9f8c>","response":"size:28"}
	{"level":"info","ts":"2024-10-30T20:05:51.619913Z","caller":"traceutil/trace.go:171","msg":"trace[271156922] linearizableReadLoop","detail":"{readStateIndex:1801; appliedIndex:1800; }","duration":"128.136101ms","start":"2024-10-30T20:05:51.491750Z","end":"2024-10-30T20:05:51.619886Z","steps":["trace[271156922] 'read index received'  (duration: 21.676µs)","trace[271156922] 'applied index is now lower than readState.Index'  (duration: 128.113215ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-30T20:05:51.619976Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.216066ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-30T20:05:51.620001Z","caller":"traceutil/trace.go:171","msg":"trace[16494866] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1529; }","duration":"128.241864ms","start":"2024-10-30T20:05:51.491746Z","end":"2024-10-30T20:05:51.619988Z","steps":["trace[16494866] 'agreement among raft nodes before linearized reading'  (duration: 128.194029ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:05:52 up 19 min,  0 users,  load average: 0.88, 0.39, 0.25
	Linux no-preload-960512 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4] <==
	W1030 20:01:53.840145       1 handler_proxy.go:99] no RequestInfo found in the context
	E1030 20:01:53.840206       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1030 20:01:53.841217       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1030 20:01:53.842416       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1030 20:02:53.841565       1 handler_proxy.go:99] no RequestInfo found in the context
	E1030 20:02:53.841946       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1030 20:02:53.842613       1 handler_proxy.go:99] no RequestInfo found in the context
	E1030 20:02:53.842694       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1030 20:02:53.843726       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1030 20:02:53.843760       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1030 20:04:53.844229       1 handler_proxy.go:99] no RequestInfo found in the context
	W1030 20:04:53.844330       1 handler_proxy.go:99] no RequestInfo found in the context
	E1030 20:04:53.844899       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1030 20:04:53.844896       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1030 20:04:53.846059       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1030 20:04:53.846144       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c] <==
	E1030 20:00:28.460466       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 20:00:28.971420       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 20:00:58.467968       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 20:00:58.979044       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 20:01:28.474186       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 20:01:28.987230       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 20:01:58.480470       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 20:01:58.995122       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 20:02:28.487048       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 20:02:29.003501       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1030 20:02:41.984803       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-960512"
	E1030 20:02:58.494017       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 20:02:59.011470       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1030 20:03:14.048325       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="302.159µs"
	I1030 20:03:28.056624       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="113.075µs"
	E1030 20:03:28.500418       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 20:03:29.020199       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 20:03:58.506294       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 20:03:59.027120       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 20:04:28.513015       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 20:04:29.037427       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 20:04:58.520107       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 20:04:59.045745       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1030 20:05:28.526116       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1030 20:05:29.053554       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1030 19:46:53.633943       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1030 19:46:53.654945       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.132"]
	E1030 19:46:53.655073       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1030 19:46:53.698928       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1030 19:46:53.699162       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1030 19:46:53.699320       1 server_linux.go:169] "Using iptables Proxier"
	I1030 19:46:53.704740       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1030 19:46:53.705370       1 server.go:483] "Version info" version="v1.31.2"
	I1030 19:46:53.705466       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1030 19:46:53.710776       1 config.go:199] "Starting service config controller"
	I1030 19:46:53.712231       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1030 19:46:53.712650       1 config.go:105] "Starting endpoint slice config controller"
	I1030 19:46:53.714320       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1030 19:46:53.716750       1 config.go:328] "Starting node config controller"
	I1030 19:46:53.716802       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1030 19:46:53.812714       1 shared_informer.go:320] Caches are synced for service config
	I1030 19:46:53.815063       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1030 19:46:53.817200       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889] <==
	I1030 19:46:50.740498       1 serving.go:386] Generated self-signed cert in-memory
	W1030 19:46:52.807783       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1030 19:46:52.807825       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1030 19:46:52.807835       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1030 19:46:52.807842       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1030 19:46:52.848299       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1030 19:46:52.848346       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1030 19:46:52.853790       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1030 19:46:52.853829       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1030 19:46:52.858413       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1030 19:46:52.858482       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1030 19:46:52.961345       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 30 20:04:49 no-preload-960512 kubelet[1452]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 30 20:04:49 no-preload-960512 kubelet[1452]: E1030 20:04:49.303688    1452 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318689303160848,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 20:04:49 no-preload-960512 kubelet[1452]: E1030 20:04:49.303728    1452 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318689303160848,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 20:04:51 no-preload-960512 kubelet[1452]: E1030 20:04:51.032745    1452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-72bb5" podUID="7734d879-b974-42fd-9610-7e81ee6cbc13"
	Oct 30 20:04:59 no-preload-960512 kubelet[1452]: E1030 20:04:59.305674    1452 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318699305329255,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 20:04:59 no-preload-960512 kubelet[1452]: E1030 20:04:59.305707    1452 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318699305329255,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 20:05:02 no-preload-960512 kubelet[1452]: E1030 20:05:02.033416    1452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-72bb5" podUID="7734d879-b974-42fd-9610-7e81ee6cbc13"
	Oct 30 20:05:09 no-preload-960512 kubelet[1452]: E1030 20:05:09.307466    1452 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318709307170166,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 20:05:09 no-preload-960512 kubelet[1452]: E1030 20:05:09.307520    1452 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318709307170166,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 20:05:16 no-preload-960512 kubelet[1452]: E1030 20:05:16.032981    1452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-72bb5" podUID="7734d879-b974-42fd-9610-7e81ee6cbc13"
	Oct 30 20:05:19 no-preload-960512 kubelet[1452]: E1030 20:05:19.311164    1452 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318719308681916,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 20:05:19 no-preload-960512 kubelet[1452]: E1030 20:05:19.311198    1452 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318719308681916,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 20:05:29 no-preload-960512 kubelet[1452]: E1030 20:05:29.314396    1452 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318729313520728,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 20:05:29 no-preload-960512 kubelet[1452]: E1030 20:05:29.314721    1452 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318729313520728,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 20:05:30 no-preload-960512 kubelet[1452]: E1030 20:05:30.033701    1452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-72bb5" podUID="7734d879-b974-42fd-9610-7e81ee6cbc13"
	Oct 30 20:05:39 no-preload-960512 kubelet[1452]: E1030 20:05:39.316415    1452 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318739316085011,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 20:05:39 no-preload-960512 kubelet[1452]: E1030 20:05:39.316464    1452 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318739316085011,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 20:05:43 no-preload-960512 kubelet[1452]: E1030 20:05:43.035121    1452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-72bb5" podUID="7734d879-b974-42fd-9610-7e81ee6cbc13"
	Oct 30 20:05:49 no-preload-960512 kubelet[1452]: E1030 20:05:49.055099    1452 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 30 20:05:49 no-preload-960512 kubelet[1452]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 30 20:05:49 no-preload-960512 kubelet[1452]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 30 20:05:49 no-preload-960512 kubelet[1452]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 30 20:05:49 no-preload-960512 kubelet[1452]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 30 20:05:49 no-preload-960512 kubelet[1452]: E1030 20:05:49.318228    1452 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318749317814725,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 30 20:05:49 no-preload-960512 kubelet[1452]: E1030 20:05:49.318309    1452 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318749317814725,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4] <==
	I1030 19:47:24.299554       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1030 19:47:24.309844       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1030 19:47:24.310013       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1030 19:47:41.710140       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1030 19:47:41.711163       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-960512_570a8d87-8418-49a7-89ff-429e5c4b3784!
	I1030 19:47:41.712008       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"95c2a27c-9451-419b-a29d-15ba5e8662e0", APIVersion:"v1", ResourceVersion:"629", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-960512_570a8d87-8418-49a7-89ff-429e5c4b3784 became leader
	I1030 19:47:41.812167       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-960512_570a8d87-8418-49a7-89ff-429e5c4b3784!
	
	
	==> storage-provisioner [de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef] <==
	I1030 19:46:53.475752       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1030 19:47:23.481594       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-960512 -n no-preload-960512
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-960512 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-72bb5
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-960512 describe pod metrics-server-6867b74b74-72bb5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-960512 describe pod metrics-server-6867b74b74-72bb5: exit status 1 (74.565118ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-72bb5" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-960512 describe pod metrics-server-6867b74b74-72bb5: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (327.02s)
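The two kubectl commands in this post-mortem can be re-run by hand against the same profile. A minimal sketch, reusing the context, field selector, and pod name from the log above (all of which are specific to this run and will differ on another run):

	# list pods that are not Running in any namespace (same field selector the helper uses)
	kubectl --context no-preload-960512 get po -A --field-selector=status.phase!=Running
	# describe the reported pod; NotFound here means the pod no longer exists under that
	# name in the namespace being queried (e.g. it was deleted or recreated under a new hash)
	kubectl --context no-preload-960512 describe pod metrics-server-6867b74b74-72bb5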

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (118.97s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
[the preceding helpers_test.go:329 warning was emitted 32 more times; every connection to 192.168.50.250:8443 was refused]
E1030 20:03:52.514608  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/calico-534248/client.crt: no such file or directory" logger="UnhandledError"
[the same helpers_test.go:329 warning was emitted 53 more times; 192.168.50.250:8443 kept refusing connections]
E1030 20:04:45.773045  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/custom-flannel-534248/client.crt: no such file or directory" logger="UnhandledError"
[the same helpers_test.go:329 warning was emitted 30 more times, up to the 9m0s deadline; 192.168.50.250:8443 never accepted a connection]
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-516975 -n old-k8s-version-516975
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-516975 -n old-k8s-version-516975: exit status 2 (243.244087ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-516975" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-516975 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-516975 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.472µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-516975 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
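The 9m0s poll that produced the warnings above is, in effect, a wait on the dashboard pods by label. A rough manual equivalent (context name and label selector taken from the log; the timeout value is illustrative) for when the API server is reachable:

	# wait for the dashboard pods to become Ready, mirroring the helper's poll
	kubectl --context old-k8s-version-516975 -n kubernetes-dashboard \
	  wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=540s
	# when every attempt ends in "connection refused", check the profile itself first
	out/minikube-linux-amd64 status -p old-k8s-version-516975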
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-516975 -n old-k8s-version-516975
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-516975 -n old-k8s-version-516975: exit status 2 (233.708776ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-516975 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-516975 logs -n 25: (1.567631791s)
E1030 20:05:18.708790  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-534248 sudo cat                              | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-534248 sudo                                  | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-534248 sudo                                  | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-534248 sudo                                  | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-534248 sudo find                             | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-534248 sudo crio                             | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-534248                                       | bridge-534248                | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	| delete  | -p                                                     | disable-driver-mounts-113740 | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:36 UTC |
	|         | disable-driver-mounts-113740                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-768989 | jenkins | v1.34.0 | 30 Oct 24 19:36 UTC | 30 Oct 24 19:37 UTC |
	|         | default-k8s-diff-port-768989                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-960512             | no-preload-960512            | jenkins | v1.34.0 | 30 Oct 24 19:37 UTC | 30 Oct 24 19:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-960512                                   | no-preload-960512            | jenkins | v1.34.0 | 30 Oct 24 19:37 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-768989  | default-k8s-diff-port-768989 | jenkins | v1.34.0 | 30 Oct 24 19:38 UTC | 30 Oct 24 19:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-768989 | jenkins | v1.34.0 | 30 Oct 24 19:38 UTC |                     |
	|         | default-k8s-diff-port-768989                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-042402            | embed-certs-042402           | jenkins | v1.34.0 | 30 Oct 24 19:38 UTC | 30 Oct 24 19:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-042402                                  | embed-certs-042402           | jenkins | v1.34.0 | 30 Oct 24 19:38 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-516975        | old-k8s-version-516975       | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-960512                  | no-preload-960512            | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-768989       | default-k8s-diff-port-768989 | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-960512                                   | no-preload-960512            | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC | 30 Oct 24 19:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-042402                 | embed-certs-042402           | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-768989 | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC | 30 Oct 24 19:50 UTC |
	|         | default-k8s-diff-port-768989                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-042402                                  | embed-certs-042402           | jenkins | v1.34.0 | 30 Oct 24 19:40 UTC | 30 Oct 24 19:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-516975                              | old-k8s-version-516975       | jenkins | v1.34.0 | 30 Oct 24 19:42 UTC | 30 Oct 24 19:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-516975             | old-k8s-version-516975       | jenkins | v1.34.0 | 30 Oct 24 19:42 UTC | 30 Oct 24 19:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-516975                              | old-k8s-version-516975       | jenkins | v1.34.0 | 30 Oct 24 19:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/30 19:42:11
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1030 19:42:11.799298  447486 out.go:345] Setting OutFile to fd 1 ...
	I1030 19:42:11.799434  447486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 19:42:11.799444  447486 out.go:358] Setting ErrFile to fd 2...
	I1030 19:42:11.799448  447486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 19:42:11.799628  447486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19883-381834/.minikube/bin
	I1030 19:42:11.800193  447486 out.go:352] Setting JSON to false
	I1030 19:42:11.801205  447486 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":12275,"bootTime":1730305057,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1030 19:42:11.801318  447486 start.go:139] virtualization: kvm guest
	I1030 19:42:11.803677  447486 out.go:177] * [old-k8s-version-516975] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1030 19:42:11.805274  447486 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 19:42:11.805300  447486 notify.go:220] Checking for updates...
	I1030 19:42:11.808043  447486 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 19:42:11.809440  447486 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 19:42:11.810604  447486 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 19:42:11.811774  447486 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1030 19:42:11.812958  447486 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 19:42:11.814552  447486 config.go:182] Loaded profile config "old-k8s-version-516975": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1030 19:42:11.814994  447486 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:42:11.815077  447486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:42:11.830315  447486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42249
	I1030 19:42:11.830795  447486 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:42:11.831345  447486 main.go:141] libmachine: Using API Version  1
	I1030 19:42:11.831365  447486 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:42:11.831692  447486 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:42:11.831869  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:42:11.833718  447486 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1030 19:42:11.835019  447486 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 19:42:11.835371  447486 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:42:11.835416  447486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:42:11.850097  447486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33199
	I1030 19:42:11.850532  447486 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:42:11.850964  447486 main.go:141] libmachine: Using API Version  1
	I1030 19:42:11.850978  447486 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:42:11.851321  447486 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:42:11.851541  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:42:11.886920  447486 out.go:177] * Using the kvm2 driver based on existing profile
	I1030 19:42:11.888376  447486 start.go:297] selected driver: kvm2
	I1030 19:42:11.888392  447486 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-516975 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-516975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:42:11.888538  447486 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 19:42:11.889472  447486 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 19:42:11.889560  447486 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19883-381834/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1030 19:42:11.904007  447486 install.go:137] /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1030 19:42:11.904405  447486 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 19:42:11.904443  447486 cni.go:84] Creating CNI manager for ""
	I1030 19:42:11.904494  447486 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:42:11.904549  447486 start.go:340] cluster config:
	{Name:old-k8s-version-516975 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-516975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:42:11.904661  447486 iso.go:125] acquiring lock: {Name:mk4a5115b605a7a5e9069193daa664d721189792 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 19:42:11.907302  447486 out.go:177] * Starting "old-k8s-version-516975" primary control-plane node in "old-k8s-version-516975" cluster
	I1030 19:42:10.622770  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:11.908430  447486 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1030 19:42:11.908474  447486 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1030 19:42:11.908485  447486 cache.go:56] Caching tarball of preloaded images
	I1030 19:42:11.908564  447486 preload.go:172] Found /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1030 19:42:11.908575  447486 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1030 19:42:11.908666  447486 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/config.json ...
	I1030 19:42:11.908832  447486 start.go:360] acquireMachinesLock for old-k8s-version-516975: {Name:mk35e25f53fa8cfadb39ca0ecdccfc2b3fbe845b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 19:42:16.702732  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:19.774825  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:25.854777  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:28.926846  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:35.006934  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:38.078752  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:44.158848  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:47.230843  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:53.310763  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:42:56.382772  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:02.462818  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:05.534754  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:11.614801  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:14.686762  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:20.766767  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:23.838853  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:29.918782  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:32.990752  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:39.070771  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:42.142716  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:48.222814  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:51.294775  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:43:57.374780  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:00.446825  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:06.526810  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:09.598813  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:15.678770  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:18.750751  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:24.830814  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:27.902810  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:33.982759  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:37.054791  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:43.134706  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:46.206802  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:52.286830  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:44:55.358809  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:45:01.438753  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:45:04.510854  446736 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.132:22: connect: no route to host
	I1030 19:45:07.515699  446887 start.go:364] duration metric: took 4m29.000646378s to acquireMachinesLock for "default-k8s-diff-port-768989"
	I1030 19:45:07.515764  446887 start.go:96] Skipping create...Using existing machine configuration
	I1030 19:45:07.515773  446887 fix.go:54] fixHost starting: 
	I1030 19:45:07.516191  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:07.516238  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:07.532374  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35177
	I1030 19:45:07.532907  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:07.533433  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:07.533459  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:07.533790  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:07.534016  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:07.534220  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetState
	I1030 19:45:07.535802  446887 fix.go:112] recreateIfNeeded on default-k8s-diff-port-768989: state=Stopped err=<nil>
	I1030 19:45:07.535842  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	W1030 19:45:07.536016  446887 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 19:45:07.537809  446887 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-768989" ...
	I1030 19:45:07.539184  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Start
	I1030 19:45:07.539361  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Ensuring networks are active...
	I1030 19:45:07.540025  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Ensuring network default is active
	I1030 19:45:07.540408  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Ensuring network mk-default-k8s-diff-port-768989 is active
	I1030 19:45:07.540867  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Getting domain xml...
	I1030 19:45:07.541489  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Creating domain...
	I1030 19:45:07.512810  446736 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 19:45:07.512848  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetMachineName
	I1030 19:45:07.513191  446736 buildroot.go:166] provisioning hostname "no-preload-960512"
	I1030 19:45:07.513223  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetMachineName
	I1030 19:45:07.513458  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:45:07.515538  446736 machine.go:96] duration metric: took 4m37.420773403s to provisionDockerMachine
	I1030 19:45:07.515594  446736 fix.go:56] duration metric: took 4m37.443968478s for fixHost
	I1030 19:45:07.515600  446736 start.go:83] releasing machines lock for "no-preload-960512", held for 4m37.443992524s
	W1030 19:45:07.515625  446736 start.go:714] error starting host: provision: host is not running
	W1030 19:45:07.515753  446736 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1030 19:45:07.515763  446736 start.go:729] Will try again in 5 seconds ...
	I1030 19:45:08.756310  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting to get IP...
	I1030 19:45:08.757242  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:08.757624  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:08.757747  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:08.757629  448092 retry.go:31] will retry after 202.103853ms: waiting for machine to come up
	I1030 19:45:08.961147  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:08.961660  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:08.961685  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:08.961606  448092 retry.go:31] will retry after 243.456761ms: waiting for machine to come up
	I1030 19:45:09.207134  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:09.207539  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:09.207582  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:09.207493  448092 retry.go:31] will retry after 375.017051ms: waiting for machine to come up
	I1030 19:45:09.584058  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:09.584428  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:09.584462  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:09.584373  448092 retry.go:31] will retry after 552.476692ms: waiting for machine to come up
	I1030 19:45:10.137989  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:10.138421  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:10.138449  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:10.138358  448092 retry.go:31] will retry after 560.865483ms: waiting for machine to come up
	I1030 19:45:10.700603  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:10.700968  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:10.700996  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:10.700920  448092 retry.go:31] will retry after 680.400693ms: waiting for machine to come up
	I1030 19:45:11.382861  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:11.383336  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:11.383362  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:11.383274  448092 retry.go:31] will retry after 787.136113ms: waiting for machine to come up
	I1030 19:45:12.171550  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:12.171910  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:12.171938  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:12.171853  448092 retry.go:31] will retry after 1.176474969s: waiting for machine to come up
	I1030 19:45:13.349617  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:13.350080  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:13.350114  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:13.350042  448092 retry.go:31] will retry after 1.211573437s: waiting for machine to come up
	I1030 19:45:12.517265  446736 start.go:360] acquireMachinesLock for no-preload-960512: {Name:mk35e25f53fa8cfadb39ca0ecdccfc2b3fbe845b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 19:45:14.563397  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:14.563782  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:14.563805  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:14.563749  448092 retry.go:31] will retry after 1.625938777s: waiting for machine to come up
	I1030 19:45:16.191798  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:16.192226  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:16.192255  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:16.192188  448092 retry.go:31] will retry after 2.442949682s: waiting for machine to come up
	I1030 19:45:18.636342  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:18.636768  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:18.636812  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:18.636748  448092 retry.go:31] will retry after 2.48415211s: waiting for machine to come up
	I1030 19:45:21.124407  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:21.124892  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | unable to find current IP address of domain default-k8s-diff-port-768989 in network mk-default-k8s-diff-port-768989
	I1030 19:45:21.124919  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | I1030 19:45:21.124843  448092 retry.go:31] will retry after 3.392637796s: waiting for machine to come up
	I1030 19:45:25.815539  446965 start.go:364] duration metric: took 4m42.694254153s to acquireMachinesLock for "embed-certs-042402"
	I1030 19:45:25.815623  446965 start.go:96] Skipping create...Using existing machine configuration
	I1030 19:45:25.815635  446965 fix.go:54] fixHost starting: 
	I1030 19:45:25.816068  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:25.816232  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:25.833218  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37241
	I1030 19:45:25.833610  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:25.834159  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:45:25.834191  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:25.834567  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:25.834777  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:45:25.834920  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetState
	I1030 19:45:25.836507  446965 fix.go:112] recreateIfNeeded on embed-certs-042402: state=Stopped err=<nil>
	I1030 19:45:25.836532  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	W1030 19:45:25.836711  446965 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 19:45:25.839078  446965 out.go:177] * Restarting existing kvm2 VM for "embed-certs-042402" ...
	I1030 19:45:24.519725  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.520072  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Found IP for machine: 192.168.39.92
	I1030 19:45:24.520091  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Reserving static IP address...
	I1030 19:45:24.520113  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has current primary IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.520507  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-768989", mac: "52:54:00:98:b1:55", ip: "192.168.39.92"} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:24.520521  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Reserved static IP address: 192.168.39.92
	I1030 19:45:24.520535  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | skip adding static IP to network mk-default-k8s-diff-port-768989 - found existing host DHCP lease matching {name: "default-k8s-diff-port-768989", mac: "52:54:00:98:b1:55", ip: "192.168.39.92"}
	I1030 19:45:24.520545  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Waiting for SSH to be available...
	I1030 19:45:24.520560  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | Getting to WaitForSSH function...
	I1030 19:45:24.522776  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.523095  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:24.523127  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.523209  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | Using SSH client type: external
	I1030 19:45:24.523229  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa (-rw-------)
	I1030 19:45:24.523262  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.92 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 19:45:24.523283  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | About to run SSH command:
	I1030 19:45:24.523298  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | exit 0
	I1030 19:45:24.646297  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | SSH cmd err, output: <nil>: 
	I1030 19:45:24.646826  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetConfigRaw
	I1030 19:45:24.647589  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetIP
	I1030 19:45:24.650093  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.650532  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:24.650564  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.650790  446887 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/config.json ...
	I1030 19:45:24.650984  446887 machine.go:93] provisionDockerMachine start ...
	I1030 19:45:24.651005  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:24.651232  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:24.653396  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.653751  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:24.653781  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.653889  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:24.654084  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:24.654263  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:24.654449  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:24.654677  446887 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:24.654922  446887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1030 19:45:24.654935  446887 main.go:141] libmachine: About to run SSH command:
	hostname
	I1030 19:45:24.762586  446887 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1030 19:45:24.762621  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetMachineName
	I1030 19:45:24.762898  446887 buildroot.go:166] provisioning hostname "default-k8s-diff-port-768989"
	I1030 19:45:24.762936  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetMachineName
	I1030 19:45:24.763250  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:24.765937  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.766265  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:24.766289  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.766398  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:24.766599  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:24.766762  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:24.766920  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:24.767087  446887 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:24.767257  446887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1030 19:45:24.767269  446887 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-768989 && echo "default-k8s-diff-port-768989" | sudo tee /etc/hostname
	I1030 19:45:24.888742  446887 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-768989
	
	I1030 19:45:24.888771  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:24.891326  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.891638  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:24.891691  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:24.891804  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:24.892018  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:24.892154  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:24.892281  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:24.892498  446887 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:24.892692  446887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1030 19:45:24.892716  446887 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-768989' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-768989/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-768989' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 19:45:25.012173  446887 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 19:45:25.012214  446887 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 19:45:25.012240  446887 buildroot.go:174] setting up certificates
	I1030 19:45:25.012250  446887 provision.go:84] configureAuth start
	I1030 19:45:25.012280  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetMachineName
	I1030 19:45:25.012598  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetIP
	I1030 19:45:25.015106  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.015430  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.015458  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.015629  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:25.017810  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.018099  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.018136  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.018230  446887 provision.go:143] copyHostCerts
	I1030 19:45:25.018322  446887 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem, removing ...
	I1030 19:45:25.018334  446887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 19:45:25.018401  446887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 19:45:25.018553  446887 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem, removing ...
	I1030 19:45:25.018566  446887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 19:45:25.018634  446887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 19:45:25.018716  446887 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem, removing ...
	I1030 19:45:25.018724  446887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 19:45:25.018748  446887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 19:45:25.018798  446887 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-768989 san=[127.0.0.1 192.168.39.92 default-k8s-diff-port-768989 localhost minikube]
	I1030 19:45:25.188186  446887 provision.go:177] copyRemoteCerts
	I1030 19:45:25.188246  446887 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 19:45:25.188285  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:25.190995  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.191320  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.191344  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.191525  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:25.191718  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.191875  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:25.191991  446887 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa Username:docker}
	I1030 19:45:25.277273  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1030 19:45:25.300302  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1030 19:45:25.322919  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 19:45:25.347214  446887 provision.go:87] duration metric: took 334.947897ms to configureAuth
	I1030 19:45:25.347246  446887 buildroot.go:189] setting minikube options for container-runtime
	I1030 19:45:25.347432  446887 config.go:182] Loaded profile config "default-k8s-diff-port-768989": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:45:25.347510  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:25.349988  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.350294  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.350324  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.350500  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:25.350704  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.350836  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.351015  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:25.351210  446887 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:25.351421  446887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1030 19:45:25.351436  446887 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 19:45:25.576481  446887 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 19:45:25.576509  446887 machine.go:96] duration metric: took 925.509257ms to provisionDockerMachine
	I1030 19:45:25.576525  446887 start.go:293] postStartSetup for "default-k8s-diff-port-768989" (driver="kvm2")
	I1030 19:45:25.576562  446887 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 19:45:25.576589  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:25.576923  446887 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 19:45:25.576951  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:25.579498  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.579825  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.579841  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.579980  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:25.580151  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.580320  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:25.580453  446887 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa Username:docker}
	I1030 19:45:25.665032  446887 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 19:45:25.669402  446887 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 19:45:25.669430  446887 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 19:45:25.669500  446887 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 19:45:25.669573  446887 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 19:45:25.669665  446887 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 19:45:25.679070  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:45:25.703131  446887 start.go:296] duration metric: took 126.586543ms for postStartSetup
	I1030 19:45:25.703194  446887 fix.go:56] duration metric: took 18.187420989s for fixHost
	I1030 19:45:25.703217  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:25.705911  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.706365  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.706396  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.706609  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:25.706800  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.706944  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.707052  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:25.707188  446887 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:25.707428  446887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1030 19:45:25.707443  446887 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 19:45:25.815370  446887 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730317525.786848764
	
	I1030 19:45:25.815406  446887 fix.go:216] guest clock: 1730317525.786848764
	I1030 19:45:25.815414  446887 fix.go:229] Guest: 2024-10-30 19:45:25.786848764 +0000 UTC Remote: 2024-10-30 19:45:25.703198163 +0000 UTC m=+287.327380555 (delta=83.650601ms)
	I1030 19:45:25.815439  446887 fix.go:200] guest clock delta is within tolerance: 83.650601ms
	I1030 19:45:25.815445  446887 start.go:83] releasing machines lock for "default-k8s-diff-port-768989", held for 18.299702226s
	I1030 19:45:25.815467  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:25.815737  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetIP
	I1030 19:45:25.818508  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.818851  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.818889  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.818987  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:25.819477  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:25.819671  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:25.819808  446887 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 19:45:25.819862  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:25.819900  446887 ssh_runner.go:195] Run: cat /version.json
	I1030 19:45:25.819930  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:25.822372  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.822725  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.822754  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.822774  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.822887  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:25.823109  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.823145  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:25.823168  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:25.823330  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:25.823429  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:25.823506  446887 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa Username:docker}
	I1030 19:45:25.823605  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:25.823758  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:25.823880  446887 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa Username:docker}
	I1030 19:45:25.903488  446887 ssh_runner.go:195] Run: systemctl --version
	I1030 19:45:25.931046  446887 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 19:45:26.077178  446887 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 19:45:26.084282  446887 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 19:45:26.084358  446887 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 19:45:26.100869  446887 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 19:45:26.100893  446887 start.go:495] detecting cgroup driver to use...
	I1030 19:45:26.100984  446887 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 19:45:26.117006  446887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 19:45:26.130102  446887 docker.go:217] disabling cri-docker service (if available) ...
	I1030 19:45:26.130184  446887 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 19:45:26.148540  446887 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 19:45:26.163003  446887 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 19:45:26.286433  446887 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 19:45:26.444862  446887 docker.go:233] disabling docker service ...
	I1030 19:45:26.444931  446887 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 19:45:26.460606  446887 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 19:45:26.477159  446887 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 19:45:26.600212  446887 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 19:45:26.725587  446887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 19:45:26.741934  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 19:45:26.761815  446887 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1030 19:45:26.761872  446887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:26.772368  446887 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 19:45:26.772422  446887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:26.784279  446887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:26.795403  446887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:26.806323  446887 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 19:45:26.821929  446887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:26.836574  446887 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:26.857305  446887 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:26.868135  446887 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 19:45:26.878058  446887 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 19:45:26.878138  446887 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 19:45:26.891979  446887 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 19:45:26.902181  446887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:45:27.021858  446887 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1030 19:45:27.118890  446887 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 19:45:27.118985  446887 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 19:45:27.125407  446887 start.go:563] Will wait 60s for crictl version
	I1030 19:45:27.125472  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:45:27.129507  446887 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 19:45:27.176630  446887 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1030 19:45:27.176739  446887 ssh_runner.go:195] Run: crio --version
	I1030 19:45:27.205818  446887 ssh_runner.go:195] Run: crio --version
	I1030 19:45:27.236431  446887 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
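The lines above show minikube rewriting /etc/crio/crio.conf.d/02-crio.conf over SSH (pause image, cgroupfs manager, unprivileged-port sysctl), restarting crio, waiting up to 60s for its socket, and then confirming the runtime with crictl. Below is a minimal standalone Go sketch of that restart-then-verify pattern; it assumes the commands run locally via os/exec purely for illustration (minikube itself drives them through its ssh_runner), and the run helper is hypothetical.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // run executes a command and returns its combined output.
    // Illustrative helper only; not minikube's actual API.
    func run(name string, args ...string) (string, error) {
        out, err := exec.Command(name, args...).CombinedOutput()
        return string(out), err
    }

    func main() {
        // Restart the runtime after the config edits.
        if _, err := run("sudo", "systemctl", "restart", "crio"); err != nil {
            panic(err)
        }
        // Wait up to 60s for the CRI socket to appear, as the log does.
        deadline := time.Now().Add(60 * time.Second)
        for {
            if _, err := run("stat", "/var/run/crio/crio.sock"); err == nil {
                break
            }
            if time.Now().After(deadline) {
                panic("crio.sock did not appear within 60s")
            }
            time.Sleep(time.Second)
        }
        // Confirm the runtime name and version over the CRI socket.
        out, err := run("sudo", "crictl", "version")
        if err != nil {
            panic(err)
        }
        fmt.Print(out)
    }
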
	I1030 19:45:25.840689  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Start
	I1030 19:45:25.840860  446965 main.go:141] libmachine: (embed-certs-042402) Ensuring networks are active...
	I1030 19:45:25.841604  446965 main.go:141] libmachine: (embed-certs-042402) Ensuring network default is active
	I1030 19:45:25.841928  446965 main.go:141] libmachine: (embed-certs-042402) Ensuring network mk-embed-certs-042402 is active
	I1030 19:45:25.842443  446965 main.go:141] libmachine: (embed-certs-042402) Getting domain xml...
	I1030 19:45:25.843267  446965 main.go:141] libmachine: (embed-certs-042402) Creating domain...
	I1030 19:45:27.094878  446965 main.go:141] libmachine: (embed-certs-042402) Waiting to get IP...
	I1030 19:45:27.095705  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:27.096101  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:27.096166  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:27.096079  448226 retry.go:31] will retry after 190.217394ms: waiting for machine to come up
	I1030 19:45:27.287473  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:27.287940  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:27.287966  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:27.287899  448226 retry.go:31] will retry after 365.943545ms: waiting for machine to come up
	I1030 19:45:27.655952  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:27.656374  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:27.656425  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:27.656343  448226 retry.go:31] will retry after 345.369581ms: waiting for machine to come up
	I1030 19:45:28.003856  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:28.004367  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:28.004398  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:28.004319  448226 retry.go:31] will retry after 609.6218ms: waiting for machine to come up
	I1030 19:45:27.237629  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetIP
	I1030 19:45:27.240387  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:27.240733  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:27.240779  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:27.240995  446887 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1030 19:45:27.245263  446887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:45:27.261305  446887 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-768989 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:default-k8s-diff-port-768989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1030 19:45:27.261440  446887 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 19:45:27.261489  446887 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:45:27.301593  446887 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1030 19:45:27.301650  446887 ssh_runner.go:195] Run: which lz4
	I1030 19:45:27.305829  446887 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1030 19:45:27.310384  446887 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1030 19:45:27.310413  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1030 19:45:28.615219  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:28.615769  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:28.615795  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:28.615716  448226 retry.go:31] will retry after 672.090411ms: waiting for machine to come up
	I1030 19:45:29.289646  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:29.290179  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:29.290216  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:29.290105  448226 retry.go:31] will retry after 865.239242ms: waiting for machine to come up
	I1030 19:45:30.157223  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:30.157650  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:30.157679  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:30.157616  448226 retry.go:31] will retry after 833.557181ms: waiting for machine to come up
	I1030 19:45:30.993139  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:30.993663  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:30.993720  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:30.993625  448226 retry.go:31] will retry after 989.333841ms: waiting for machine to come up
	I1030 19:45:31.983978  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:31.984498  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:31.984546  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:31.984443  448226 retry.go:31] will retry after 1.534311856s: waiting for machine to come up
	I1030 19:45:28.730765  446887 crio.go:462] duration metric: took 1.424975563s to copy over tarball
	I1030 19:45:28.730868  446887 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1030 19:45:30.907494  446887 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.1765829s)
	I1030 19:45:30.907536  446887 crio.go:469] duration metric: took 2.176738354s to extract the tarball
	I1030 19:45:30.907546  446887 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1030 19:45:30.944242  446887 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:45:30.986812  446887 crio.go:514] all images are preloaded for cri-o runtime.
	I1030 19:45:30.986839  446887 cache_images.go:84] Images are preloaded, skipping loading
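The preload flow above works as follows: crictl images --output json reports the kube-apiserver image missing, the preloaded tarball is copied to the node and unpacked into /var with tar -I lz4, and a second crictl images call confirms all images are present. A small Go sketch of that presence check is below; it assumes the usual crictl JSON shape ({"images":[{"repoTags":[...]}]}) and hardcodes the image name from the log purely for illustration.

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // crictlImages mirrors the fields of `crictl images --output json`
    // that this check needs (assumed shape).
    type crictlImages struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            panic(err)
        }
        var imgs crictlImages
        if err := json.Unmarshal(out, &imgs); err != nil {
            panic(err)
        }
        want := "registry.k8s.io/kube-apiserver:v1.31.2" // image the log checks for
        for _, img := range imgs.Images {
            for _, tag := range img.RepoTags {
                if tag == want {
                    fmt.Println("preloaded images present, skipping load")
                    return
                }
            }
        }
        fmt.Println("preload missing, would copy and extract preloaded.tar.lz4")
    }
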
	I1030 19:45:30.986872  446887 kubeadm.go:934] updating node { 192.168.39.92 8444 v1.31.2 crio true true} ...
	I1030 19:45:30.987042  446887 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-768989 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.92
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-768989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1030 19:45:30.987145  446887 ssh_runner.go:195] Run: crio config
	I1030 19:45:31.037466  446887 cni.go:84] Creating CNI manager for ""
	I1030 19:45:31.037496  446887 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:45:31.037511  446887 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1030 19:45:31.037544  446887 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.92 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-768989 NodeName:default-k8s-diff-port-768989 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.92"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.92 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1030 19:45:31.037735  446887 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.92
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-768989"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.92"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.92"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1030 19:45:31.037815  446887 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1030 19:45:31.047808  446887 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 19:45:31.047885  446887 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1030 19:45:31.057074  446887 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1030 19:45:31.073022  446887 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 19:45:31.088919  446887 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
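The kubeadm.yaml written above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. A stdlib-only Go sketch that splits such a file and lists the kind of each document can be handy when inspecting what was actually written to /var/tmp/minikube/kubeadm.yaml (the path is taken from the log; the snippet itself is illustrative only).

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            panic(err)
        }
        // Split the stream on document separators and report each kind.
        for i, doc := range strings.Split(string(data), "\n---") {
            kind := "unknown"
            for _, line := range strings.Split(doc, "\n") {
                trimmed := strings.TrimSpace(line)
                if strings.HasPrefix(trimmed, "kind:") {
                    kind = strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:"))
                    break
                }
            }
            fmt.Printf("document %d: %s\n", i+1, kind)
        }
    }
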
	I1030 19:45:31.105357  446887 ssh_runner.go:195] Run: grep 192.168.39.92	control-plane.minikube.internal$ /etc/hosts
	I1030 19:45:31.109207  446887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.92	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:45:31.121329  446887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:45:31.234078  446887 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:45:31.251028  446887 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989 for IP: 192.168.39.92
	I1030 19:45:31.251057  446887 certs.go:194] generating shared ca certs ...
	I1030 19:45:31.251080  446887 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:45:31.251287  446887 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 19:45:31.251342  446887 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 19:45:31.251354  446887 certs.go:256] generating profile certs ...
	I1030 19:45:31.251480  446887 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/client.key
	I1030 19:45:31.251567  446887 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/apiserver.key.eeeafde8
	I1030 19:45:31.251620  446887 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/proxy-client.key
	I1030 19:45:31.251788  446887 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem (1338 bytes)
	W1030 19:45:31.251834  446887 certs.go:480] ignoring /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144_empty.pem, impossibly tiny 0 bytes
	I1030 19:45:31.251848  446887 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 19:45:31.251888  446887 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 19:45:31.251931  446887 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 19:45:31.251963  446887 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 19:45:31.252024  446887 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:45:31.253127  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 19:45:31.293822  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 19:45:31.334804  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 19:45:31.366955  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 19:45:31.396042  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1030 19:45:31.428748  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I1030 19:45:31.452866  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 19:45:31.476407  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/default-k8s-diff-port-768989/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1030 19:45:31.500375  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem --> /usr/share/ca-certificates/389144.pem (1338 bytes)
	I1030 19:45:31.523909  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /usr/share/ca-certificates/3891442.pem (1708 bytes)
	I1030 19:45:31.547532  446887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 19:45:31.571163  446887 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1030 19:45:31.587969  446887 ssh_runner.go:195] Run: openssl version
	I1030 19:45:31.593866  446887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 19:45:31.604538  446887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:45:31.609348  446887 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:45:31.609419  446887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:45:31.615446  446887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 19:45:31.626640  446887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/389144.pem && ln -fs /usr/share/ca-certificates/389144.pem /etc/ssl/certs/389144.pem"
	I1030 19:45:31.640948  446887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/389144.pem
	I1030 19:45:31.646702  446887 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 19:45:31.646751  446887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/389144.pem
	I1030 19:45:31.654365  446887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/389144.pem /etc/ssl/certs/51391683.0"
	I1030 19:45:31.668538  446887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3891442.pem && ln -fs /usr/share/ca-certificates/3891442.pem /etc/ssl/certs/3891442.pem"
	I1030 19:45:31.679201  446887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3891442.pem
	I1030 19:45:31.683631  446887 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 19:45:31.683693  446887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3891442.pem
	I1030 19:45:31.689362  446887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3891442.pem /etc/ssl/certs/3ec20f2e.0"
	I1030 19:45:31.699804  446887 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 19:45:31.704445  446887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1030 19:45:31.710558  446887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1030 19:45:31.718563  446887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1030 19:45:31.724745  446887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1030 19:45:31.731125  446887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1030 19:45:31.736828  446887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
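The openssl x509 -checkend 86400 calls above verify that each control-plane certificate is still valid for at least 24 hours before a cluster restart is attempted instead of full certificate regeneration. An equivalent check in Go, assuming the certificate path from the log and using only the standard library, might look like this:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Path seen in the log; substitute any of the checked certs.
        path := "/var/lib/minikube/certs/apiserver-kubelet-client.crt"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found in " + path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Same condition as `openssl x509 -checkend 86400`:
        // is the certificate still valid 24 hours from now?
        if time.Now().Add(24 * time.Hour).Before(cert.NotAfter) {
            fmt.Println("certificate will not expire within 86400 seconds")
        } else {
            fmt.Println("certificate will expire within 86400 seconds")
        }
    }
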
	I1030 19:45:31.742434  446887 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-768989 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.2 ClusterName:default-k8s-diff-port-768989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:45:31.742604  446887 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1030 19:45:31.742654  446887 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:45:31.779319  446887 cri.go:89] found id: ""
	I1030 19:45:31.779416  446887 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1030 19:45:31.789556  446887 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1030 19:45:31.789576  446887 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1030 19:45:31.789622  446887 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1030 19:45:31.799817  446887 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1030 19:45:31.800824  446887 kubeconfig.go:125] found "default-k8s-diff-port-768989" server: "https://192.168.39.92:8444"
	I1030 19:45:31.803207  446887 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1030 19:45:31.812876  446887 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.92
	I1030 19:45:31.812909  446887 kubeadm.go:1160] stopping kube-system containers ...
	I1030 19:45:31.812924  446887 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1030 19:45:31.812984  446887 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:45:31.858070  446887 cri.go:89] found id: ""
	I1030 19:45:31.858174  446887 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1030 19:45:31.874923  446887 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:45:31.885243  446887 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:45:31.885275  446887 kubeadm.go:157] found existing configuration files:
	
	I1030 19:45:31.885321  446887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1030 19:45:31.894394  446887 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:45:31.894453  446887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:45:31.903760  446887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1030 19:45:31.912344  446887 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:45:31.912410  446887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:45:31.921458  446887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1030 19:45:31.930426  446887 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:45:31.930499  446887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:45:31.940008  446887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1030 19:45:31.949578  446887 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:45:31.949645  446887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1030 19:45:31.959022  446887 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 19:45:31.968457  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:32.069017  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:32.985574  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:33.191887  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:33.273266  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:33.400584  446887 api_server.go:52] waiting for apiserver process to appear ...
	I1030 19:45:33.400686  446887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:33.520596  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:33.521020  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:33.521041  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:33.520992  448226 retry.go:31] will retry after 1.787777673s: waiting for machine to come up
	I1030 19:45:35.310399  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:35.310878  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:35.310906  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:35.310833  448226 retry.go:31] will retry after 2.264310439s: waiting for machine to come up
	I1030 19:45:37.577787  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:37.578276  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:37.578310  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:37.578214  448226 retry.go:31] will retry after 2.384410161s: waiting for machine to come up
	I1030 19:45:33.901397  446887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:34.400978  446887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:34.901476  446887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:35.401772  446887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:35.420824  446887 api_server.go:72] duration metric: took 2.020238714s to wait for apiserver process to appear ...
	I1030 19:45:35.420862  446887 api_server.go:88] waiting for apiserver healthz status ...
	I1030 19:45:35.420889  446887 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8444/healthz ...
	I1030 19:45:37.795897  446887 api_server.go:279] https://192.168.39.92:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1030 19:45:37.795931  446887 api_server.go:103] status: https://192.168.39.92:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1030 19:45:37.795948  446887 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8444/healthz ...
	I1030 19:45:37.848032  446887 api_server.go:279] https://192.168.39.92:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1030 19:45:37.848069  446887 api_server.go:103] status: https://192.168.39.92:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1030 19:45:37.921286  446887 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8444/healthz ...
	I1030 19:45:37.930778  446887 api_server.go:279] https://192.168.39.92:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:45:37.930822  446887 api_server.go:103] status: https://192.168.39.92:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
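The 403 responses (anonymous user before RBAC bootstrap) followed by 500s with failed post-start hooks above are the normal progression while the restarted apiserver finishes coming up; the check simply re-polls /healthz until it returns 200. A minimal Go sketch of such a polling loop is below, assuming TLS verification is skipped for brevity in this standalone snippet (the real client would trust the cluster CA instead).

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Skip verification only for this illustration; use the cluster CA in practice.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.39.92:8444/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy:", string(body))
                    return
                }
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("apiserver did not become healthy before the deadline")
    }
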
	I1030 19:45:38.421866  446887 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8444/healthz ...
	I1030 19:45:38.429247  446887 api_server.go:279] https://192.168.39.92:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:45:38.429291  446887 api_server.go:103] status: https://192.168.39.92:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:45:38.921655  446887 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8444/healthz ...
	I1030 19:45:38.928650  446887 api_server.go:279] https://192.168.39.92:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:45:38.928680  446887 api_server.go:103] status: https://192.168.39.92:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:45:39.421195  446887 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8444/healthz ...
	I1030 19:45:39.425565  446887 api_server.go:279] https://192.168.39.92:8444/healthz returned 200:
	ok
	I1030 19:45:39.433509  446887 api_server.go:141] control plane version: v1.31.2
	I1030 19:45:39.433543  446887 api_server.go:131] duration metric: took 4.01267362s to wait for apiserver health ...
	I1030 19:45:39.433555  446887 cni.go:84] Creating CNI manager for ""
	I1030 19:45:39.433564  446887 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:45:39.435645  446887 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1030 19:45:39.437042  446887 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1030 19:45:39.456091  446887 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1030 19:45:39.477617  446887 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 19:45:39.485998  446887 system_pods.go:59] 8 kube-system pods found
	I1030 19:45:39.486041  446887 system_pods.go:61] "coredns-7c65d6cfc9-9w8m8" [d285a845-e17e-4b87-837b-167dbd29f090] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1030 19:45:39.486051  446887 system_pods.go:61] "etcd-default-k8s-diff-port-768989" [400eedbb-ea1d-47a7-b342-ad29c8851552] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1030 19:45:39.486061  446887 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-768989" [40389878-99e8-4ae1-899b-a61fc3b7100b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1030 19:45:39.486071  446887 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-768989" [4d4ca5b4-d36d-4f69-9712-b7544c4c9a43] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1030 19:45:39.486082  446887 system_pods.go:61] "kube-proxy-tsr5q" [60ad5830-1638-4209-a2b3-ef78b1df3d34] Running
	I1030 19:45:39.486087  446887 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-768989" [af4c921c-9ea2-4bf3-84c2-8748c3c210bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1030 19:45:39.486092  446887 system_pods.go:61] "metrics-server-6867b74b74-t85rd" [8e162c99-2a94-4340-abe9-f1b312980444] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:45:39.486095  446887 system_pods.go:61] "storage-provisioner" [76805df9-1fbf-468d-a909-3b7c4f889e11] Running
	I1030 19:45:39.486101  446887 system_pods.go:74] duration metric: took 8.467537ms to wait for pod list to return data ...
	I1030 19:45:39.486110  446887 node_conditions.go:102] verifying NodePressure condition ...
	I1030 19:45:39.490771  446887 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 19:45:39.490793  446887 node_conditions.go:123] node cpu capacity is 2
	I1030 19:45:39.490805  446887 node_conditions.go:105] duration metric: took 4.690594ms to run NodePressure ...
	I1030 19:45:39.490821  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:39.752369  446887 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1030 19:45:39.757080  446887 kubeadm.go:739] kubelet initialised
	I1030 19:45:39.757105  446887 kubeadm.go:740] duration metric: took 4.707251ms waiting for restarted kubelet to initialise ...
	I1030 19:45:39.757114  446887 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:45:39.762374  446887 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-9w8m8" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:39.766904  446887 pod_ready.go:98] node "default-k8s-diff-port-768989" hosting pod "coredns-7c65d6cfc9-9w8m8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.766934  446887 pod_ready.go:82] duration metric: took 4.529466ms for pod "coredns-7c65d6cfc9-9w8m8" in "kube-system" namespace to be "Ready" ...
	E1030 19:45:39.766948  446887 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-768989" hosting pod "coredns-7c65d6cfc9-9w8m8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.766958  446887 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:39.771681  446887 pod_ready.go:98] node "default-k8s-diff-port-768989" hosting pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.771705  446887 pod_ready.go:82] duration metric: took 4.73772ms for pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	E1030 19:45:39.771715  446887 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-768989" hosting pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.771722  446887 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:39.776170  446887 pod_ready.go:98] node "default-k8s-diff-port-768989" hosting pod "kube-apiserver-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.776199  446887 pod_ready.go:82] duration metric: took 4.470353ms for pod "kube-apiserver-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	E1030 19:45:39.776211  446887 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-768989" hosting pod "kube-apiserver-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.776220  446887 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:39.881949  446887 pod_ready.go:98] node "default-k8s-diff-port-768989" hosting pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.881988  446887 pod_ready.go:82] duration metric: took 105.756203ms for pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	E1030 19:45:39.882027  446887 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-768989" hosting pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:39.882042  446887 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-tsr5q" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:40.281665  446887 pod_ready.go:98] node "default-k8s-diff-port-768989" hosting pod "kube-proxy-tsr5q" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:40.281703  446887 pod_ready.go:82] duration metric: took 399.651747ms for pod "kube-proxy-tsr5q" in "kube-system" namespace to be "Ready" ...
	E1030 19:45:40.281716  446887 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-768989" hosting pod "kube-proxy-tsr5q" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:40.281725  446887 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:40.680827  446887 pod_ready.go:98] node "default-k8s-diff-port-768989" hosting pod "kube-scheduler-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:40.680861  446887 pod_ready.go:82] duration metric: took 399.128654ms for pod "kube-scheduler-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	E1030 19:45:40.680873  446887 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-768989" hosting pod "kube-scheduler-default-k8s-diff-port-768989" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:40.680883  446887 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:41.086176  446887 pod_ready.go:98] node "default-k8s-diff-port-768989" hosting pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:41.086203  446887 pod_ready.go:82] duration metric: took 405.311117ms for pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace to be "Ready" ...
	E1030 19:45:41.086216  446887 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-768989" hosting pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:41.086225  446887 pod_ready.go:39] duration metric: took 1.32910228s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:45:41.086246  446887 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1030 19:45:41.100836  446887 ops.go:34] apiserver oom_adj: -16
	I1030 19:45:41.100871  446887 kubeadm.go:597] duration metric: took 9.31128777s to restartPrimaryControlPlane
	I1030 19:45:41.100887  446887 kubeadm.go:394] duration metric: took 9.358460424s to StartCluster
	I1030 19:45:41.100915  446887 settings.go:142] acquiring lock: {Name:mk3778f95b8c1775512fd8244c173f81fde2f8a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:45:41.101046  446887 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 19:45:41.103578  446887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/kubeconfig: {Name:mkad9ac9beb9742fc1a420e6d4412db8b65962e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:45:41.103910  446887 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.92 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 19:45:41.103995  446887 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1030 19:45:41.104111  446887 config.go:182] Loaded profile config "default-k8s-diff-port-768989": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:45:41.104131  446887 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-768989"
	I1030 19:45:41.104151  446887 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-768989"
	W1030 19:45:41.104159  446887 addons.go:243] addon storage-provisioner should already be in state true
	I1030 19:45:41.104175  446887 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-768989"
	I1030 19:45:41.104198  446887 host.go:66] Checking if "default-k8s-diff-port-768989" exists ...
	I1030 19:45:41.104207  446887 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-768989"
	W1030 19:45:41.104218  446887 addons.go:243] addon metrics-server should already be in state true
	I1030 19:45:41.104153  446887 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-768989"
	I1030 19:45:41.104255  446887 host.go:66] Checking if "default-k8s-diff-port-768989" exists ...
	I1030 19:45:41.104258  446887 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-768989"
	I1030 19:45:41.104672  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:41.104683  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:41.104694  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:41.104718  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:41.104728  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:41.104730  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:41.105606  446887 out.go:177] * Verifying Kubernetes components...
	I1030 19:45:41.107136  446887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:45:41.121415  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45903
	I1030 19:45:41.122053  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:41.122694  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:41.122721  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:41.123073  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:41.123682  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:41.123733  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:41.125497  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36933
	I1030 19:45:41.125546  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43659
	I1030 19:45:41.125878  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:41.125962  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:41.126425  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:41.126445  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:41.126465  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:41.126507  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:41.126840  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:41.126897  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:41.127362  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:41.127392  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:41.127590  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetState
	I1030 19:45:41.131397  446887 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-768989"
	W1030 19:45:41.131424  446887 addons.go:243] addon default-storageclass should already be in state true
	I1030 19:45:41.131457  446887 host.go:66] Checking if "default-k8s-diff-port-768989" exists ...
	I1030 19:45:41.131834  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:41.131877  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:41.143183  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34953
	I1030 19:45:41.143221  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35029
	I1030 19:45:41.143628  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:41.143765  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:41.144231  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:41.144249  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:41.144369  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:41.144392  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:41.144657  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:41.144766  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:41.144879  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetState
	I1030 19:45:41.144926  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetState
	I1030 19:45:41.146739  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:41.146913  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:41.148740  446887 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1030 19:45:41.148794  446887 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:45:41.149853  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33425
	I1030 19:45:41.150250  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:41.150397  446887 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1030 19:45:41.150435  446887 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1030 19:45:41.150462  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:41.150525  446887 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 19:45:41.150545  446887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1030 19:45:41.150562  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:41.150763  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:41.150781  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:41.151168  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:41.152135  446887 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:41.152184  446887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:41.154133  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:41.154425  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:41.154625  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:41.154654  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:41.154811  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:41.154996  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:41.155033  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:41.155059  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:41.155145  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:41.155214  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:41.155310  446887 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa Username:docker}
	I1030 19:45:41.155345  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:41.155464  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:41.155548  446887 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa Username:docker}
	I1030 19:45:41.168971  446887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43017
	I1030 19:45:41.169445  446887 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:41.169946  446887 main.go:141] libmachine: Using API Version  1
	I1030 19:45:41.169969  446887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:41.170335  446887 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:41.170508  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetState
	I1030 19:45:41.172162  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .DriverName
	I1030 19:45:41.172378  446887 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1030 19:45:41.172394  446887 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1030 19:45:41.172410  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHHostname
	I1030 19:45:41.175214  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:41.175617  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:b1:55", ip: ""} in network mk-default-k8s-diff-port-768989: {Iface:virbr1 ExpiryTime:2024-10-30 20:45:18 +0000 UTC Type:0 Mac:52:54:00:98:b1:55 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:default-k8s-diff-port-768989 Clientid:01:52:54:00:98:b1:55}
	I1030 19:45:41.175643  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | domain default-k8s-diff-port-768989 has defined IP address 192.168.39.92 and MAC address 52:54:00:98:b1:55 in network mk-default-k8s-diff-port-768989
	I1030 19:45:41.175795  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHPort
	I1030 19:45:41.175978  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHKeyPath
	I1030 19:45:41.176133  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .GetSSHUsername
	I1030 19:45:41.176301  446887 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/default-k8s-diff-port-768989/id_rsa Username:docker}
	I1030 19:45:41.324093  446887 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:45:41.381986  446887 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-768989" to be "Ready" ...
	I1030 19:45:41.439497  446887 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1030 19:45:41.439522  446887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1030 19:45:41.448751  446887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1030 19:45:41.486707  446887 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1030 19:45:41.486736  446887 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1030 19:45:41.514478  446887 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1030 19:45:41.514513  446887 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1030 19:45:41.546821  446887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1030 19:45:41.590509  446887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 19:45:41.879189  446887 main.go:141] libmachine: Making call to close driver server
	I1030 19:45:41.879224  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Close
	I1030 19:45:41.879548  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | Closing plugin on server side
	I1030 19:45:41.879597  446887 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:45:41.879608  446887 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:45:41.879622  446887 main.go:141] libmachine: Making call to close driver server
	I1030 19:45:41.879632  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Close
	I1030 19:45:41.879868  446887 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:45:41.879886  446887 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:45:41.889008  446887 main.go:141] libmachine: Making call to close driver server
	I1030 19:45:41.889024  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Close
	I1030 19:45:41.889273  446887 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:45:41.889290  446887 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:45:42.499223  446887 main.go:141] libmachine: Making call to close driver server
	I1030 19:45:42.499250  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Close
	I1030 19:45:42.499597  446887 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:45:42.499621  446887 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:45:42.499632  446887 main.go:141] libmachine: Making call to close driver server
	I1030 19:45:42.499689  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Close
	I1030 19:45:42.499969  446887 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:45:42.499984  446887 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:45:42.499996  446887 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-768989"
	I1030 19:45:42.598713  446887 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.008157275s)
	I1030 19:45:42.598770  446887 main.go:141] libmachine: Making call to close driver server
	I1030 19:45:42.598782  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Close
	I1030 19:45:42.599088  446887 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:45:42.599109  446887 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:45:42.599117  446887 main.go:141] libmachine: Making call to close driver server
	I1030 19:45:42.599143  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) DBG | Closing plugin on server side
	I1030 19:45:42.599201  446887 main.go:141] libmachine: (default-k8s-diff-port-768989) Calling .Close
	I1030 19:45:42.599447  446887 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:45:42.599461  446887 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:45:42.601840  446887 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I1030 19:45:39.963885  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:39.964308  446965 main.go:141] libmachine: (embed-certs-042402) DBG | unable to find current IP address of domain embed-certs-042402 in network mk-embed-certs-042402
	I1030 19:45:39.964346  446965 main.go:141] libmachine: (embed-certs-042402) DBG | I1030 19:45:39.964250  448226 retry.go:31] will retry after 4.32150593s: waiting for machine to come up
	I1030 19:45:42.603197  446887 addons.go:510] duration metric: took 1.499214294s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I1030 19:45:43.386074  446887 node_ready.go:53] node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:45.631177  447486 start.go:364] duration metric: took 3m33.722307877s to acquireMachinesLock for "old-k8s-version-516975"
	I1030 19:45:45.631272  447486 start.go:96] Skipping create...Using existing machine configuration
	I1030 19:45:45.631284  447486 fix.go:54] fixHost starting: 
	I1030 19:45:45.631708  447486 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:45:45.631767  447486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:45:45.648654  447486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40707
	I1030 19:45:45.649098  447486 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:45:45.649552  447486 main.go:141] libmachine: Using API Version  1
	I1030 19:45:45.649574  447486 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:45:45.649848  447486 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:45:45.650005  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:45:45.650153  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetState
	I1030 19:45:45.651624  447486 fix.go:112] recreateIfNeeded on old-k8s-version-516975: state=Stopped err=<nil>
	I1030 19:45:45.651661  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	W1030 19:45:45.651805  447486 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 19:45:45.654065  447486 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-516975" ...
	I1030 19:45:45.655382  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .Start
	I1030 19:45:45.655554  447486 main.go:141] libmachine: (old-k8s-version-516975) Ensuring networks are active...
	I1030 19:45:45.656134  447486 main.go:141] libmachine: (old-k8s-version-516975) Ensuring network default is active
	I1030 19:45:45.656518  447486 main.go:141] libmachine: (old-k8s-version-516975) Ensuring network mk-old-k8s-version-516975 is active
	I1030 19:45:45.656885  447486 main.go:141] libmachine: (old-k8s-version-516975) Getting domain xml...
	I1030 19:45:45.657501  447486 main.go:141] libmachine: (old-k8s-version-516975) Creating domain...
	I1030 19:45:44.289530  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.289944  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has current primary IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.289965  446965 main.go:141] libmachine: (embed-certs-042402) Found IP for machine: 192.168.61.235
	I1030 19:45:44.289978  446965 main.go:141] libmachine: (embed-certs-042402) Reserving static IP address...
	I1030 19:45:44.290419  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "embed-certs-042402", mac: "52:54:00:61:aa:58", ip: "192.168.61.235"} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.290450  446965 main.go:141] libmachine: (embed-certs-042402) Reserved static IP address: 192.168.61.235
	I1030 19:45:44.290469  446965 main.go:141] libmachine: (embed-certs-042402) DBG | skip adding static IP to network mk-embed-certs-042402 - found existing host DHCP lease matching {name: "embed-certs-042402", mac: "52:54:00:61:aa:58", ip: "192.168.61.235"}
	I1030 19:45:44.290502  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Getting to WaitForSSH function...
	I1030 19:45:44.290519  446965 main.go:141] libmachine: (embed-certs-042402) Waiting for SSH to be available...
	I1030 19:45:44.292418  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.292684  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.292727  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.292750  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Using SSH client type: external
	I1030 19:45:44.292785  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa (-rw-------)
	I1030 19:45:44.292839  446965 main.go:141] libmachine: (embed-certs-042402) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.235 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 19:45:44.292856  446965 main.go:141] libmachine: (embed-certs-042402) DBG | About to run SSH command:
	I1030 19:45:44.292873  446965 main.go:141] libmachine: (embed-certs-042402) DBG | exit 0
	I1030 19:45:44.414810  446965 main.go:141] libmachine: (embed-certs-042402) DBG | SSH cmd err, output: <nil>: 
	I1030 19:45:44.415211  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetConfigRaw
	I1030 19:45:44.416039  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetIP
	I1030 19:45:44.418830  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.419269  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.419303  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.419529  446965 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/config.json ...
	I1030 19:45:44.419832  446965 machine.go:93] provisionDockerMachine start ...
	I1030 19:45:44.419859  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:45:44.420102  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:44.422359  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.422704  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.422729  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.422878  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:44.423072  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:44.423217  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:44.423355  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:44.423493  446965 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:44.423677  446965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.235 22 <nil> <nil>}
	I1030 19:45:44.423685  446965 main.go:141] libmachine: About to run SSH command:
	hostname
	I1030 19:45:44.527214  446965 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1030 19:45:44.527248  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetMachineName
	I1030 19:45:44.527526  446965 buildroot.go:166] provisioning hostname "embed-certs-042402"
	I1030 19:45:44.527562  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetMachineName
	I1030 19:45:44.527793  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:44.530474  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.530830  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.530856  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.531041  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:44.531243  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:44.531432  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:44.531563  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:44.531736  446965 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:44.531958  446965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.235 22 <nil> <nil>}
	I1030 19:45:44.531979  446965 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-042402 && echo "embed-certs-042402" | sudo tee /etc/hostname
	I1030 19:45:44.656963  446965 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-042402
	
	I1030 19:45:44.656996  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:44.659958  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.660361  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.660397  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.660643  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:44.660842  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:44.661025  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:44.661122  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:44.661295  446965 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:44.661469  446965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.235 22 <nil> <nil>}
	I1030 19:45:44.661484  446965 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-042402' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-042402/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-042402' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 19:45:44.771688  446965 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 19:45:44.771728  446965 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 19:45:44.771755  446965 buildroot.go:174] setting up certificates
	I1030 19:45:44.771766  446965 provision.go:84] configureAuth start
	I1030 19:45:44.771780  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetMachineName
	I1030 19:45:44.772120  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetIP
	I1030 19:45:44.774838  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.775271  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.775298  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.775424  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:44.777432  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.777765  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:44.777793  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:44.777910  446965 provision.go:143] copyHostCerts
	I1030 19:45:44.777990  446965 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem, removing ...
	I1030 19:45:44.778006  446965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 19:45:44.778057  446965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 19:45:44.778147  446965 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem, removing ...
	I1030 19:45:44.778155  446965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 19:45:44.778174  446965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 19:45:44.778229  446965 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem, removing ...
	I1030 19:45:44.778237  446965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 19:45:44.778253  446965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 19:45:44.778360  446965 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.embed-certs-042402 san=[127.0.0.1 192.168.61.235 embed-certs-042402 localhost minikube]
	I1030 19:45:45.019172  446965 provision.go:177] copyRemoteCerts
	I1030 19:45:45.019234  446965 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 19:45:45.019265  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:45.022052  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.022402  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:45.022435  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.022590  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:45.022788  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.022969  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:45.023123  446965 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa Username:docker}
	I1030 19:45:45.104733  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 19:45:45.128256  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1030 19:45:45.150758  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1030 19:45:45.173233  446965 provision.go:87] duration metric: took 401.450922ms to configureAuth
	I1030 19:45:45.173268  446965 buildroot.go:189] setting minikube options for container-runtime
	I1030 19:45:45.173465  446965 config.go:182] Loaded profile config "embed-certs-042402": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:45:45.173562  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:45.176259  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.176663  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:45.176698  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.176826  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:45.177025  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.177190  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.177364  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:45.177554  446965 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:45.177724  446965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.235 22 <nil> <nil>}
	I1030 19:45:45.177737  446965 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 19:45:45.396562  446965 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 19:45:45.396593  446965 machine.go:96] duration metric: took 976.740759ms to provisionDockerMachine
	I1030 19:45:45.396606  446965 start.go:293] postStartSetup for "embed-certs-042402" (driver="kvm2")
	I1030 19:45:45.396616  446965 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 19:45:45.396644  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:45:45.397007  446965 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 19:45:45.397048  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:45.399581  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.399930  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:45.399955  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.400045  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:45.400219  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.400373  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:45.400483  446965 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa Username:docker}
	I1030 19:45:45.481722  446965 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 19:45:45.487207  446965 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 19:45:45.487231  446965 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 19:45:45.487304  446965 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 19:45:45.487398  446965 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 19:45:45.487516  446965 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 19:45:45.500340  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:45:45.524930  446965 start.go:296] duration metric: took 128.310254ms for postStartSetup
	I1030 19:45:45.524972  446965 fix.go:56] duration metric: took 19.709339085s for fixHost
	I1030 19:45:45.524993  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:45.527426  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.527751  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:45.527775  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.527931  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:45.528145  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.528326  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.528450  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:45.528591  446965 main.go:141] libmachine: Using SSH client type: native
	I1030 19:45:45.528804  446965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.235 22 <nil> <nil>}
	I1030 19:45:45.528815  446965 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 19:45:45.630961  446965 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730317545.604586107
	
	I1030 19:45:45.630997  446965 fix.go:216] guest clock: 1730317545.604586107
	I1030 19:45:45.631020  446965 fix.go:229] Guest: 2024-10-30 19:45:45.604586107 +0000 UTC Remote: 2024-10-30 19:45:45.524975841 +0000 UTC m=+302.540999350 (delta=79.610266ms)
	I1030 19:45:45.631054  446965 fix.go:200] guest clock delta is within tolerance: 79.610266ms
	I1030 19:45:45.631062  446965 start.go:83] releasing machines lock for "embed-certs-042402", held for 19.81546348s
	I1030 19:45:45.631109  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:45:45.631396  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetIP
	I1030 19:45:45.634114  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.634524  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:45.634558  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.634739  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:45:45.635353  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:45:45.635532  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:45:45.635646  446965 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 19:45:45.635692  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:45.635746  446965 ssh_runner.go:195] Run: cat /version.json
	I1030 19:45:45.635775  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:45:45.638260  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.638639  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:45.638694  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.638718  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.638931  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:45.639108  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:45.639128  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:45.639160  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.639260  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:45:45.639371  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:45.639440  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:45:45.639509  446965 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa Username:docker}
	I1030 19:45:45.639581  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:45:45.639723  446965 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa Username:docker}
	I1030 19:45:45.747515  446965 ssh_runner.go:195] Run: systemctl --version
	I1030 19:45:45.754851  446965 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 19:45:45.904471  446965 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 19:45:45.911348  446965 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 19:45:45.911428  446965 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 19:45:45.928273  446965 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 19:45:45.928299  446965 start.go:495] detecting cgroup driver to use...
	I1030 19:45:45.928381  446965 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 19:45:45.949100  446965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 19:45:45.963284  446965 docker.go:217] disabling cri-docker service (if available) ...
	I1030 19:45:45.963362  446965 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 19:45:45.976952  446965 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 19:45:45.991367  446965 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 19:45:46.104670  446965 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 19:45:46.254049  446965 docker.go:233] disabling docker service ...
	I1030 19:45:46.254130  446965 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 19:45:46.273226  446965 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 19:45:46.290211  446965 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 19:45:46.491658  446965 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 19:45:46.637447  446965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 19:45:46.654517  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 19:45:46.679786  446965 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1030 19:45:46.679879  446965 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:46.695487  446965 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 19:45:46.695570  446965 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:46.708974  446965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:46.724847  446965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:46.736912  446965 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 19:45:46.749015  446965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:46.761190  446965 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:46.780198  446965 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:45:46.790865  446965 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 19:45:46.800950  446965 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 19:45:46.801029  446965 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 19:45:46.814792  446965 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 19:45:46.825490  446965 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:45:46.952367  446965 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1030 19:45:47.054874  446965 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 19:45:47.054962  446965 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 19:45:47.061036  446965 start.go:563] Will wait 60s for crictl version
	I1030 19:45:47.061105  446965 ssh_runner.go:195] Run: which crictl
	I1030 19:45:47.064917  446965 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 19:45:47.101690  446965 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1030 19:45:47.101796  446965 ssh_runner.go:195] Run: crio --version
	I1030 19:45:47.131286  446965 ssh_runner.go:195] Run: crio --version
	I1030 19:45:47.166314  446965 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1030 19:45:47.167861  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetIP
	I1030 19:45:47.171097  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:47.171438  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:45:47.171466  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:45:47.171737  446965 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1030 19:45:47.177796  446965 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:45:47.191930  446965 kubeadm.go:883] updating cluster {Name:embed-certs-042402 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-042402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.235 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1030 19:45:47.192090  446965 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 19:45:47.192149  446965 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:45:47.231586  446965 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1030 19:45:47.231672  446965 ssh_runner.go:195] Run: which lz4
	I1030 19:45:47.236190  446965 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1030 19:45:47.240803  446965 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1030 19:45:47.240888  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1030 19:45:45.386683  446887 node_ready.go:53] node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:47.386771  446887 node_ready.go:53] node "default-k8s-diff-port-768989" has status "Ready":"False"
	I1030 19:45:48.387313  446887 node_ready.go:49] node "default-k8s-diff-port-768989" has status "Ready":"True"
	I1030 19:45:48.387344  446887 node_ready.go:38] duration metric: took 7.005318984s for node "default-k8s-diff-port-768989" to be "Ready" ...
	I1030 19:45:48.387359  446887 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:45:48.395198  446887 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9w8m8" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:48.401276  446887 pod_ready.go:93] pod "coredns-7c65d6cfc9-9w8m8" in "kube-system" namespace has status "Ready":"True"
	I1030 19:45:48.401306  446887 pod_ready.go:82] duration metric: took 6.071305ms for pod "coredns-7c65d6cfc9-9w8m8" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:48.401321  446887 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:47.003397  447486 main.go:141] libmachine: (old-k8s-version-516975) Waiting to get IP...
	I1030 19:45:47.004281  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:47.004710  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:47.004787  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:47.004695  448432 retry.go:31] will retry after 234.659459ms: waiting for machine to come up
	I1030 19:45:47.241308  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:47.241838  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:47.241863  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:47.241802  448432 retry.go:31] will retry after 350.804975ms: waiting for machine to come up
	I1030 19:45:47.594533  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:47.595106  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:47.595139  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:47.595044  448432 retry.go:31] will retry after 448.637889ms: waiting for machine to come up
	I1030 19:45:48.045858  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:48.046358  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:48.046386  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:48.046315  448432 retry.go:31] will retry after 543.947609ms: waiting for machine to come up
	I1030 19:45:48.592474  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:48.592908  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:48.592937  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:48.592875  448432 retry.go:31] will retry after 744.106735ms: waiting for machine to come up
	I1030 19:45:49.338345  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:49.338833  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:49.338857  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:49.338795  448432 retry.go:31] will retry after 927.743369ms: waiting for machine to come up
	I1030 19:45:50.267844  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:50.268359  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:50.268390  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:50.268324  448432 retry.go:31] will retry after 829.540351ms: waiting for machine to come up
	I1030 19:45:51.099379  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:51.099863  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:51.099893  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:51.099820  448432 retry.go:31] will retry after 898.768304ms: waiting for machine to come up
	I1030 19:45:48.672337  446965 crio.go:462] duration metric: took 1.436158626s to copy over tarball
	I1030 19:45:48.672439  446965 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1030 19:45:50.859055  446965 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.186572123s)
	I1030 19:45:50.859101  446965 crio.go:469] duration metric: took 2.186725028s to extract the tarball
	I1030 19:45:50.859113  446965 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1030 19:45:50.896570  446965 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:45:50.946526  446965 crio.go:514] all images are preloaded for cri-o runtime.
	I1030 19:45:50.946558  446965 cache_images.go:84] Images are preloaded, skipping loading
	I1030 19:45:50.946567  446965 kubeadm.go:934] updating node { 192.168.61.235 8443 v1.31.2 crio true true} ...
	I1030 19:45:50.946668  446965 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-042402 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.235
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-042402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1030 19:45:50.946748  446965 ssh_runner.go:195] Run: crio config
	I1030 19:45:50.992305  446965 cni.go:84] Creating CNI manager for ""
	I1030 19:45:50.992337  446965 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:45:50.992348  446965 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1030 19:45:50.992374  446965 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.235 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-042402 NodeName:embed-certs-042402 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.235"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.235 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1030 19:45:50.992530  446965 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.235
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-042402"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.235"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.235"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
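Note: the block printed above (from the "kubeadm config:" line down to here) is the kubeadm.yaml that gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. If a generated config like this needs to be checked by hand, a minimal sketch, assuming the bundled kubeadm binary provides the "config validate" subcommand:

  # validate the generated config against the kubeadm v1beta4 API types
  sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
  # compare with the defaults kubeadm would otherwise apply
  sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config print init-defaults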
	I1030 19:45:50.992616  446965 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1030 19:45:51.002586  446965 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 19:45:51.002668  446965 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1030 19:45:51.012058  446965 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1030 19:45:51.028645  446965 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 19:45:51.044912  446965 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
	I1030 19:45:51.060991  446965 ssh_runner.go:195] Run: grep 192.168.61.235	control-plane.minikube.internal$ /etc/hosts
	I1030 19:45:51.064808  446965 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.235	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:45:51.076790  446965 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:45:51.205861  446965 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:45:51.224763  446965 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402 for IP: 192.168.61.235
	I1030 19:45:51.224791  446965 certs.go:194] generating shared ca certs ...
	I1030 19:45:51.224812  446965 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:45:51.224986  446965 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 19:45:51.225046  446965 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 19:45:51.225059  446965 certs.go:256] generating profile certs ...
	I1030 19:45:51.225175  446965 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/client.key
	I1030 19:45:51.225256  446965 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/apiserver.key.f6f7691e
	I1030 19:45:51.225314  446965 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/proxy-client.key
	I1030 19:45:51.225469  446965 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem (1338 bytes)
	W1030 19:45:51.225518  446965 certs.go:480] ignoring /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144_empty.pem, impossibly tiny 0 bytes
	I1030 19:45:51.225540  446965 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 19:45:51.225574  446965 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 19:45:51.225612  446965 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 19:45:51.225651  446965 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 19:45:51.225714  446965 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:45:51.226718  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 19:45:51.278345  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 19:45:51.308707  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 19:45:51.349986  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 19:45:51.382176  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1030 19:45:51.426538  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1030 19:45:51.457131  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 19:45:51.481165  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/embed-certs-042402/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1030 19:45:51.505285  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /usr/share/ca-certificates/3891442.pem (1708 bytes)
	I1030 19:45:51.533986  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 19:45:51.562660  446965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem --> /usr/share/ca-certificates/389144.pem (1338 bytes)
	I1030 19:45:51.586002  446965 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1030 19:45:51.602544  446965 ssh_runner.go:195] Run: openssl version
	I1030 19:45:51.608479  446965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 19:45:51.620650  446965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:45:51.625243  446965 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:45:51.625294  446965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:45:51.631138  446965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 19:45:51.643167  446965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/389144.pem && ln -fs /usr/share/ca-certificates/389144.pem /etc/ssl/certs/389144.pem"
	I1030 19:45:51.655128  446965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/389144.pem
	I1030 19:45:51.659528  446965 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 19:45:51.659600  446965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/389144.pem
	I1030 19:45:51.665370  446965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/389144.pem /etc/ssl/certs/51391683.0"
	I1030 19:45:51.676314  446965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3891442.pem && ln -fs /usr/share/ca-certificates/3891442.pem /etc/ssl/certs/3891442.pem"
	I1030 19:45:51.687386  446965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3891442.pem
	I1030 19:45:51.692170  446965 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 19:45:51.692228  446965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3891442.pem
	I1030 19:45:51.697897  446965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3891442.pem /etc/ssl/certs/3ec20f2e.0"
	I1030 19:45:51.709561  446965 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 19:45:51.715357  446965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1030 19:45:51.723291  446965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1030 19:45:51.731362  446965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1030 19:45:51.739724  446965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1030 19:45:51.747383  446965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1030 19:45:51.753472  446965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
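Note: the run of "openssl x509 ... -checkend 86400" commands above probes each existing control-plane certificate for expiry within the next 86400 seconds (24 hours); -checkend exits 0 only if the certificate is still valid that far out, presumably so stale certs can be flagged for regeneration before the restart continues. A standalone sketch of the same probe:

  # exit status 0 => still valid 24h from now; 1 => expiring soon (or already expired)
  openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
    && echo "cert valid for at least 24h" \
    || echo "cert expires within 24h"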
	I1030 19:45:51.759462  446965 kubeadm.go:392] StartCluster: {Name:embed-certs-042402 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-042402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.235 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:45:51.759605  446965 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1030 19:45:51.759702  446965 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:45:51.806863  446965 cri.go:89] found id: ""
	I1030 19:45:51.806956  446965 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1030 19:45:51.818195  446965 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1030 19:45:51.818218  446965 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1030 19:45:51.818274  446965 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1030 19:45:51.828762  446965 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1030 19:45:51.830149  446965 kubeconfig.go:125] found "embed-certs-042402" server: "https://192.168.61.235:8443"
	I1030 19:45:51.832269  446965 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1030 19:45:51.842769  446965 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.235
	I1030 19:45:51.842808  446965 kubeadm.go:1160] stopping kube-system containers ...
	I1030 19:45:51.842823  446965 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1030 19:45:51.842889  446965 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:45:51.887128  446965 cri.go:89] found id: ""
	I1030 19:45:51.887209  446965 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1030 19:45:51.911918  446965 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:45:51.922685  446965 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:45:51.922714  446965 kubeadm.go:157] found existing configuration files:
	
	I1030 19:45:51.922770  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 19:45:51.935548  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:45:51.935620  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:45:51.948635  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 19:45:51.961647  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:45:51.961745  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:45:51.975880  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 19:45:51.986852  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:45:51.986922  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:45:52.001290  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 19:45:52.015249  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:45:52.015333  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1030 19:45:52.026657  446965 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 19:45:52.038560  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:52.167697  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:50.408274  446887 pod_ready.go:103] pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace has status "Ready":"False"
	I1030 19:45:51.407818  446887 pod_ready.go:93] pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace has status "Ready":"True"
	I1030 19:45:51.407850  446887 pod_ready.go:82] duration metric: took 3.006520689s for pod "etcd-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:51.407865  446887 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:51.413452  446887 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-768989" in "kube-system" namespace has status "Ready":"True"
	I1030 19:45:51.413481  446887 pod_ready.go:82] duration metric: took 5.607077ms for pod "kube-apiserver-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:51.413495  446887 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:52.000678  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:52.001196  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:52.001235  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:52.001148  448432 retry.go:31] will retry after 1.750749509s: waiting for machine to come up
	I1030 19:45:53.753607  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:53.754013  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:53.754038  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:53.753950  448432 retry.go:31] will retry after 1.537350682s: waiting for machine to come up
	I1030 19:45:55.293910  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:55.294396  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:55.294427  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:55.294336  448432 retry.go:31] will retry after 2.151521323s: waiting for machine to come up
	I1030 19:45:53.477258  446965 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.309509141s)
	I1030 19:45:53.477309  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:53.696850  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:53.768419  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:53.863913  446965 api_server.go:52] waiting for apiserver process to appear ...
	I1030 19:45:53.864018  446965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:54.364235  446965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:54.864820  446965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:45:54.887333  446965 api_server.go:72] duration metric: took 1.023419155s to wait for apiserver process to appear ...
	I1030 19:45:54.887363  446965 api_server.go:88] waiting for apiserver healthz status ...
	I1030 19:45:54.887399  446965 api_server.go:253] Checking apiserver healthz at https://192.168.61.235:8443/healthz ...
	I1030 19:45:54.887929  446965 api_server.go:269] stopped: https://192.168.61.235:8443/healthz: Get "https://192.168.61.235:8443/healthz": dial tcp 192.168.61.235:8443: connect: connection refused
	I1030 19:45:55.388396  446965 api_server.go:253] Checking apiserver healthz at https://192.168.61.235:8443/healthz ...
	I1030 19:45:57.610916  446965 api_server.go:279] https://192.168.61.235:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1030 19:45:57.610951  446965 api_server.go:103] status: https://192.168.61.235:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1030 19:45:57.610972  446965 api_server.go:253] Checking apiserver healthz at https://192.168.61.235:8443/healthz ...
	I1030 19:45:57.745722  446965 api_server.go:279] https://192.168.61.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:45:57.745782  446965 api_server.go:103] status: https://192.168.61.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:45:57.887887  446965 api_server.go:253] Checking apiserver healthz at https://192.168.61.235:8443/healthz ...
	I1030 19:45:57.895296  446965 api_server.go:279] https://192.168.61.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:45:57.895352  446965 api_server.go:103] status: https://192.168.61.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:45:54.167893  446887 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace has status "Ready":"False"
	I1030 19:45:54.920921  446887 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace has status "Ready":"True"
	I1030 19:45:54.920954  446887 pod_ready.go:82] duration metric: took 3.507449937s for pod "kube-controller-manager-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:54.920974  446887 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tsr5q" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:54.927123  446887 pod_ready.go:93] pod "kube-proxy-tsr5q" in "kube-system" namespace has status "Ready":"True"
	I1030 19:45:54.927150  446887 pod_ready.go:82] duration metric: took 6.167749ms for pod "kube-proxy-tsr5q" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:54.927164  446887 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:54.932513  446887 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-768989" in "kube-system" namespace has status "Ready":"True"
	I1030 19:45:54.932540  446887 pod_ready.go:82] duration metric: took 5.367579ms for pod "kube-scheduler-default-k8s-diff-port-768989" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:54.932557  446887 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace to be "Ready" ...
	I1030 19:45:56.939174  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:45:58.388076  446965 api_server.go:253] Checking apiserver healthz at https://192.168.61.235:8443/healthz ...
	I1030 19:45:58.393192  446965 api_server.go:279] https://192.168.61.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:45:58.393235  446965 api_server.go:103] status: https://192.168.61.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:45:58.887710  446965 api_server.go:253] Checking apiserver healthz at https://192.168.61.235:8443/healthz ...
	I1030 19:45:58.891923  446965 api_server.go:279] https://192.168.61.235:8443/healthz returned 200:
	ok
	I1030 19:45:58.897783  446965 api_server.go:141] control plane version: v1.31.2
	I1030 19:45:58.897816  446965 api_server.go:131] duration metric: took 4.010443495s to wait for apiserver health ...
	I1030 19:45:58.897836  446965 cni.go:84] Creating CNI manager for ""
	I1030 19:45:58.897844  446965 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:45:58.899669  446965 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1030 19:45:57.447894  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:57.448365  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:57.448392  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:57.448320  448432 retry.go:31] will retry after 2.439938206s: waiting for machine to come up
	I1030 19:45:59.889685  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:45:59.890166  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | unable to find current IP address of domain old-k8s-version-516975 in network mk-old-k8s-version-516975
	I1030 19:45:59.890205  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | I1030 19:45:59.890113  448432 retry.go:31] will retry after 3.836080386s: waiting for machine to come up
	I1030 19:45:58.901122  446965 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1030 19:45:58.924765  446965 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1030 19:45:58.946342  446965 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 19:45:58.956378  446965 system_pods.go:59] 8 kube-system pods found
	I1030 19:45:58.956412  446965 system_pods.go:61] "coredns-7c65d6cfc9-tv6kc" [d752975e-e126-4d22-9b35-b9f57d1170b1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1030 19:45:58.956419  446965 system_pods.go:61] "etcd-embed-certs-042402" [fa9b90f6-82b2-448a-ad86-9cbba45a4c2d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1030 19:45:58.956427  446965 system_pods.go:61] "kube-apiserver-embed-certs-042402" [48af3136-74d9-4062-bb9a-e48dafd311a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1030 19:45:58.956436  446965 system_pods.go:61] "kube-controller-manager-embed-certs-042402" [0ae60724-6634-464a-af2f-e08148fb3eaf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1030 19:45:58.956445  446965 system_pods.go:61] "kube-proxy-qwjr9" [309ee447-8d52-49e7-a805-2b7c0af2a3bc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1030 19:45:58.956450  446965 system_pods.go:61] "kube-scheduler-embed-certs-042402" [f82ff11e-8305-4d05-b370-fd89693e5ad1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1030 19:45:58.956454  446965 system_pods.go:61] "metrics-server-6867b74b74-4x9t6" [1160789d-9462-4d1d-9f84-5ded8394bd4e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:45:58.956459  446965 system_pods.go:61] "storage-provisioner" [d1559440-b14a-4c2a-a52e-ba39afb01f94] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1030 19:45:58.956465  446965 system_pods.go:74] duration metric: took 10.103898ms to wait for pod list to return data ...
	I1030 19:45:58.956473  446965 node_conditions.go:102] verifying NodePressure condition ...
	I1030 19:45:58.960150  446965 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 19:45:58.960182  446965 node_conditions.go:123] node cpu capacity is 2
	I1030 19:45:58.960195  446965 node_conditions.go:105] duration metric: took 3.712942ms to run NodePressure ...
	I1030 19:45:58.960219  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:45:59.284558  446965 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1030 19:45:59.289073  446965 kubeadm.go:739] kubelet initialised
	I1030 19:45:59.289095  446965 kubeadm.go:740] duration metric: took 4.508144ms waiting for restarted kubelet to initialise ...
	I1030 19:45:59.289104  446965 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:45:59.293538  446965 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-tv6kc" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:01.298780  446965 pod_ready.go:103] pod "coredns-7c65d6cfc9-tv6kc" in "kube-system" namespace has status "Ready":"False"
	I1030 19:45:58.940597  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:01.439118  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:05.011617  446736 start.go:364] duration metric: took 52.494265895s to acquireMachinesLock for "no-preload-960512"
	I1030 19:46:05.011674  446736 start.go:96] Skipping create...Using existing machine configuration
	I1030 19:46:05.011683  446736 fix.go:54] fixHost starting: 
	I1030 19:46:05.012022  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:05.012087  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:05.029067  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46027
	I1030 19:46:05.029484  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:05.030010  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:05.030039  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:05.030461  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:05.030690  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:05.030854  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetState
	I1030 19:46:05.032380  446736 fix.go:112] recreateIfNeeded on no-preload-960512: state=Stopped err=<nil>
	I1030 19:46:05.032408  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	W1030 19:46:05.032566  446736 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 19:46:05.035693  446736 out.go:177] * Restarting existing kvm2 VM for "no-preload-960512" ...
	I1030 19:46:03.727617  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.728028  447486 main.go:141] libmachine: (old-k8s-version-516975) Found IP for machine: 192.168.50.250
	I1030 19:46:03.728046  447486 main.go:141] libmachine: (old-k8s-version-516975) Reserving static IP address...
	I1030 19:46:03.728062  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has current primary IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.728565  447486 main.go:141] libmachine: (old-k8s-version-516975) Reserved static IP address: 192.168.50.250
	I1030 19:46:03.728600  447486 main.go:141] libmachine: (old-k8s-version-516975) Waiting for SSH to be available...
	I1030 19:46:03.728616  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "old-k8s-version-516975", mac: "52:54:00:46:32:46", ip: "192.168.50.250"} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:03.728639  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | skip adding static IP to network mk-old-k8s-version-516975 - found existing host DHCP lease matching {name: "old-k8s-version-516975", mac: "52:54:00:46:32:46", ip: "192.168.50.250"}
	I1030 19:46:03.728657  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | Getting to WaitForSSH function...
	I1030 19:46:03.730754  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.731085  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:03.731121  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.731145  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | Using SSH client type: external
	I1030 19:46:03.731212  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa (-rw-------)
	I1030 19:46:03.731252  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.250 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 19:46:03.731275  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | About to run SSH command:
	I1030 19:46:03.731289  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | exit 0
	I1030 19:46:03.862423  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | SSH cmd err, output: <nil>: 
	I1030 19:46:03.862832  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetConfigRaw
	I1030 19:46:03.863519  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetIP
	I1030 19:46:03.865977  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.866262  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:03.866297  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.866512  447486 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/config.json ...
	I1030 19:46:03.866755  447486 machine.go:93] provisionDockerMachine start ...
	I1030 19:46:03.866783  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:46:03.866994  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:03.869079  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.869384  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:03.869410  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.869603  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:03.869787  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:03.869949  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:03.870102  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:03.870243  447486 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:03.870468  447486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I1030 19:46:03.870481  447486 main.go:141] libmachine: About to run SSH command:
	hostname
	I1030 19:46:03.982986  447486 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1030 19:46:03.983018  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetMachineName
	I1030 19:46:03.983285  447486 buildroot.go:166] provisioning hostname "old-k8s-version-516975"
	I1030 19:46:03.983319  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetMachineName
	I1030 19:46:03.983502  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:03.986203  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.986576  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:03.986615  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:03.986765  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:03.986983  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:03.987126  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:03.987258  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:03.987419  447486 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:03.987696  447486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I1030 19:46:03.987719  447486 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-516975 && echo "old-k8s-version-516975" | sudo tee /etc/hostname
	I1030 19:46:04.112692  447486 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-516975
	
	I1030 19:46:04.112719  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:04.115948  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.116283  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.116309  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.116482  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:04.116667  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.116842  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.116966  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:04.117104  447486 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:04.117275  447486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I1030 19:46:04.117290  447486 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-516975' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-516975/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-516975' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 19:46:04.235988  447486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 19:46:04.236032  447486 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 19:46:04.236098  447486 buildroot.go:174] setting up certificates
	I1030 19:46:04.236111  447486 provision.go:84] configureAuth start
	I1030 19:46:04.236124  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetMachineName
	I1030 19:46:04.236500  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetIP
	I1030 19:46:04.239328  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.239707  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.239739  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.240009  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:04.242118  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.242440  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.242505  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.242683  447486 provision.go:143] copyHostCerts
	I1030 19:46:04.242766  447486 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem, removing ...
	I1030 19:46:04.242787  447486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 19:46:04.242847  447486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 19:46:04.242972  447486 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem, removing ...
	I1030 19:46:04.242986  447486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 19:46:04.243011  447486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 19:46:04.243072  447486 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem, removing ...
	I1030 19:46:04.243079  447486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 19:46:04.243095  447486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 19:46:04.243153  447486 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-516975 san=[127.0.0.1 192.168.50.250 localhost minikube old-k8s-version-516975]
	I1030 19:46:04.355003  447486 provision.go:177] copyRemoteCerts
	I1030 19:46:04.355061  447486 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 19:46:04.355092  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:04.357788  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.358153  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.358191  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.358397  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:04.358630  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.358809  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:04.358970  447486 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa Username:docker}
	I1030 19:46:04.446614  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 19:46:04.473708  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1030 19:46:04.497721  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1030 19:46:04.521806  447486 provision.go:87] duration metric: took 285.682041ms to configureAuth
	I1030 19:46:04.521836  447486 buildroot.go:189] setting minikube options for container-runtime
	I1030 19:46:04.521999  447486 config.go:182] Loaded profile config "old-k8s-version-516975": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1030 19:46:04.522072  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:04.524616  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.525034  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.525065  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.525282  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:04.525452  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.525616  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.525745  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:04.525916  447486 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:04.526129  447486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I1030 19:46:04.526145  447486 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 19:46:04.766663  447486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 19:46:04.766697  447486 machine.go:96] duration metric: took 899.924211ms to provisionDockerMachine
	I1030 19:46:04.766709  447486 start.go:293] postStartSetup for "old-k8s-version-516975" (driver="kvm2")
	I1030 19:46:04.766720  447486 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 19:46:04.766745  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:46:04.767081  447486 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 19:46:04.767114  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:04.769995  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.770401  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.770428  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.770580  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:04.770762  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.770973  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:04.771132  447486 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa Username:docker}
	I1030 19:46:04.858006  447486 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 19:46:04.862295  447486 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 19:46:04.862324  447486 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 19:46:04.862387  447486 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 19:46:04.862475  447486 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 19:46:04.862612  447486 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 19:46:04.872541  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:46:04.896306  447486 start.go:296] duration metric: took 129.577956ms for postStartSetup
	I1030 19:46:04.896360  447486 fix.go:56] duration metric: took 19.265077419s for fixHost
	I1030 19:46:04.896383  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:04.899009  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.899397  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:04.899429  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:04.899538  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:04.899739  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.899906  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:04.900101  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:04.900271  447486 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:04.900510  447486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I1030 19:46:04.900525  447486 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 19:46:05.011439  447486 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730317564.967936408
	
	I1030 19:46:05.011464  447486 fix.go:216] guest clock: 1730317564.967936408
	I1030 19:46:05.011472  447486 fix.go:229] Guest: 2024-10-30 19:46:04.967936408 +0000 UTC Remote: 2024-10-30 19:46:04.896364572 +0000 UTC m=+233.135558535 (delta=71.571836ms)
	I1030 19:46:05.011516  447486 fix.go:200] guest clock delta is within tolerance: 71.571836ms
	I1030 19:46:05.011525  447486 start.go:83] releasing machines lock for "old-k8s-version-516975", held for 19.380292064s
	I1030 19:46:05.011552  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:46:05.011853  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetIP
	I1030 19:46:05.014722  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:05.015072  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:05.015100  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:05.015225  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:46:05.015808  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:46:05.016002  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .DriverName
	I1030 19:46:05.016107  447486 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 19:46:05.016155  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:05.016265  447486 ssh_runner.go:195] Run: cat /version.json
	I1030 19:46:05.016296  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHHostname
	I1030 19:46:05.018976  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:05.019189  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:05.019326  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:05.019370  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:05.019517  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:05.019604  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:05.019632  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:05.019708  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:05.019830  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHPort
	I1030 19:46:05.019918  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:05.019995  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHKeyPath
	I1030 19:46:05.020077  447486 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa Username:docker}
	I1030 19:46:05.020157  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetSSHUsername
	I1030 19:46:05.020295  447486 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/old-k8s-version-516975/id_rsa Username:docker}
	I1030 19:46:05.100852  447486 ssh_runner.go:195] Run: systemctl --version
	I1030 19:46:05.127673  447486 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 19:46:05.279889  447486 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 19:46:05.285900  447486 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 19:46:05.285976  447486 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 19:46:05.304763  447486 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 19:46:05.304791  447486 start.go:495] detecting cgroup driver to use...
	I1030 19:46:05.304862  447486 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 19:46:05.325729  447486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 19:46:05.343047  447486 docker.go:217] disabling cri-docker service (if available) ...
	I1030 19:46:05.343128  447486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 19:46:05.358748  447486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 19:46:05.374769  447486 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 19:46:05.492589  447486 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 19:46:05.639943  447486 docker.go:233] disabling docker service ...
	I1030 19:46:05.640039  447486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 19:46:05.655449  447486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 19:46:05.669688  447486 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 19:46:05.814658  447486 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 19:46:05.957944  447486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 19:46:05.972122  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 19:46:05.990577  447486 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1030 19:46:05.990653  447486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:06.000834  447486 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 19:46:06.000907  447486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:06.011678  447486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:06.022051  447486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:06.032515  447486 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 19:46:06.043296  447486 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 19:46:06.053123  447486 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 19:46:06.053170  447486 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 19:46:06.067625  447486 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 19:46:06.081306  447486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:46:06.221181  447486 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1030 19:46:06.321848  447486 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 19:46:06.321926  447486 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 19:46:06.329697  447486 start.go:563] Will wait 60s for crictl version
	I1030 19:46:06.329757  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:06.333980  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 19:46:06.381198  447486 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1030 19:46:06.381290  447486 ssh_runner.go:195] Run: crio --version
	I1030 19:46:06.410365  447486 ssh_runner.go:195] Run: crio --version
	I1030 19:46:06.442329  447486 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1030 19:46:06.443471  447486 main.go:141] libmachine: (old-k8s-version-516975) Calling .GetIP
	I1030 19:46:06.446233  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:06.446621  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:32:46", ip: ""} in network mk-old-k8s-version-516975: {Iface:virbr2 ExpiryTime:2024-10-30 20:45:57 +0000 UTC Type:0 Mac:52:54:00:46:32:46 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-516975 Clientid:01:52:54:00:46:32:46}
	I1030 19:46:06.446653  447486 main.go:141] libmachine: (old-k8s-version-516975) DBG | domain old-k8s-version-516975 has defined IP address 192.168.50.250 and MAC address 52:54:00:46:32:46 in network mk-old-k8s-version-516975
	I1030 19:46:06.446822  447486 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1030 19:46:06.451216  447486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:46:06.464477  447486 kubeadm.go:883] updating cluster {Name:old-k8s-version-516975 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-516975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1030 19:46:06.464607  447486 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1030 19:46:06.464668  447486 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:46:06.513123  447486 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1030 19:46:06.513205  447486 ssh_runner.go:195] Run: which lz4
	I1030 19:46:06.517252  447486 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1030 19:46:06.521358  447486 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1030 19:46:06.521384  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1030 19:46:03.300213  446965 pod_ready.go:103] pod "coredns-7c65d6cfc9-tv6kc" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:05.301139  446965 pod_ready.go:103] pod "coredns-7c65d6cfc9-tv6kc" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:07.303015  446965 pod_ready.go:103] pod "coredns-7c65d6cfc9-tv6kc" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:03.939240  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:05.940212  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:07.942062  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:05.037179  446736 main.go:141] libmachine: (no-preload-960512) Calling .Start
	I1030 19:46:05.037388  446736 main.go:141] libmachine: (no-preload-960512) Ensuring networks are active...
	I1030 19:46:05.038384  446736 main.go:141] libmachine: (no-preload-960512) Ensuring network default is active
	I1030 19:46:05.038793  446736 main.go:141] libmachine: (no-preload-960512) Ensuring network mk-no-preload-960512 is active
	I1030 19:46:05.039208  446736 main.go:141] libmachine: (no-preload-960512) Getting domain xml...
	I1030 19:46:05.040083  446736 main.go:141] libmachine: (no-preload-960512) Creating domain...
	I1030 19:46:06.366674  446736 main.go:141] libmachine: (no-preload-960512) Waiting to get IP...
	I1030 19:46:06.367568  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:06.368016  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:06.368083  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:06.367984  448568 retry.go:31] will retry after 216.900908ms: waiting for machine to come up
	I1030 19:46:06.586638  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:06.587182  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:06.587213  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:06.587121  448568 retry.go:31] will retry after 319.082011ms: waiting for machine to come up
	I1030 19:46:06.907974  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:06.908650  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:06.908683  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:06.908581  448568 retry.go:31] will retry after 418.339306ms: waiting for machine to come up
	I1030 19:46:07.328241  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:07.329035  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:07.329065  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:07.328988  448568 retry.go:31] will retry after 523.624135ms: waiting for machine to come up
	I1030 19:46:07.855234  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:07.855944  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:07.855970  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:07.855849  448568 retry.go:31] will retry after 556.06146ms: waiting for machine to come up
	I1030 19:46:08.413474  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:08.414059  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:08.414098  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:08.413947  448568 retry.go:31] will retry after 713.043389ms: waiting for machine to come up
	I1030 19:46:09.128274  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:09.128737  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:09.128762  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:09.128689  448568 retry.go:31] will retry after 1.096111238s: waiting for machine to come up
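The "will retry after ..." lines above come from a poll-with-growing-delay loop while the VM acquires a DHCP lease. A hedged sketch of that shape (lookupIP is a hypothetical stand-in; the real loop lives in minikube's retry helper):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP polls lookupIP with an increasing delay until it succeeds or the
// timeout elapses, roughly matching the retry cadence in the log above.
func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	delay := 200 * time.Millisecond
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the backoff between attempts
	}
	return "", errors.New("timed out waiting for an IP address")
}

func main() {
	_, err := waitForIP(func() (string, error) { return "", errors.New("no DHCP lease yet") }, time.Second)
	fmt.Println(err)
}
```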
	I1030 19:46:08.144772  447486 crio.go:462] duration metric: took 1.627547543s to copy over tarball
	I1030 19:46:08.144845  447486 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1030 19:46:11.104192  447486 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.959302647s)
	I1030 19:46:11.104228  447486 crio.go:469] duration metric: took 2.959426051s to extract the tarball
	I1030 19:46:11.104240  447486 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1030 19:46:11.146584  447486 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:46:11.183766  447486 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1030 19:46:11.183797  447486 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1030 19:46:11.183889  447486 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:11.183917  447486 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.183932  447486 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:11.183968  447486 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.184087  447486 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.183972  447486 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1030 19:46:11.183969  447486 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.183928  447486 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:11.185976  447486 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:11.186001  447486 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1030 19:46:11.186043  447486 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.186053  447486 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.186046  447486 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:11.185977  447486 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:11.186108  447486 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.186150  447486 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.348134  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.391191  447486 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1030 19:46:11.391327  447486 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.391399  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.396693  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.400062  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.406656  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1030 19:46:11.410534  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.410590  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.441896  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:11.460400  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:11.482465  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.554431  447486 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1030 19:46:11.554480  447486 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.554549  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.610376  447486 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1030 19:46:11.610424  447486 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1030 19:46:11.610471  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.616060  447486 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1030 19:46:11.616104  447486 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.616153  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.616177  447486 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1030 19:46:11.616217  447486 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.616282  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.617473  447486 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1030 19:46:11.617502  447486 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:11.617535  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.652124  447486 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1030 19:46:11.652185  447486 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:11.652228  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1030 19:46:11.652233  447486 ssh_runner.go:195] Run: which crictl
	I1030 19:46:11.652237  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.652331  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1030 19:46:11.652376  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.652433  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.652483  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:11.798844  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1030 19:46:11.798859  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1030 19:46:11.798873  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.798949  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:11.799075  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.799179  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.799182  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:08.303450  446965 pod_ready.go:93] pod "coredns-7c65d6cfc9-tv6kc" in "kube-system" namespace has status "Ready":"True"
	I1030 19:46:08.303482  446965 pod_ready.go:82] duration metric: took 9.009918893s for pod "coredns-7c65d6cfc9-tv6kc" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:08.303498  446965 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:08.312186  446965 pod_ready.go:93] pod "etcd-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:46:08.312213  446965 pod_ready.go:82] duration metric: took 8.706192ms for pod "etcd-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:08.312228  446965 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:10.320161  446965 pod_ready.go:103] pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:10.439107  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:12.439663  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:10.226842  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:10.227315  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:10.227346  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:10.227261  448568 retry.go:31] will retry after 1.165335625s: waiting for machine to come up
	I1030 19:46:11.394231  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:11.394817  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:11.394851  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:11.394763  448568 retry.go:31] will retry after 1.292571083s: waiting for machine to come up
	I1030 19:46:12.688486  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:12.688919  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:12.688965  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:12.688862  448568 retry.go:31] will retry after 1.97645889s: waiting for machine to come up
	I1030 19:46:14.667783  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:14.668245  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:14.668278  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:14.668200  448568 retry.go:31] will retry after 2.020488863s: waiting for machine to come up
	I1030 19:46:11.942258  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:11.942265  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1030 19:46:11.942365  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1030 19:46:11.942352  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1030 19:46:11.942421  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1030 19:46:11.946933  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1030 19:46:12.064951  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1030 19:46:12.067930  447486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1030 19:46:12.067990  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1030 19:46:12.068057  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1030 19:46:12.068078  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1030 19:46:12.083122  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1030 19:46:12.107265  447486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1030 19:46:13.402970  447486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:13.551979  447486 cache_images.go:92] duration metric: took 2.368158873s to LoadCachedImages
	W1030 19:46:13.552080  447486 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
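The "needs transfer" decisions above boil down to: ask the runtime for the image's ID and compare it with the expected digest; on a mismatch, or if the image is missing, the cached copy must be loaded. An illustrative sketch of that check, assuming podman is available on the host (this is not minikube's cache_images code):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer reports whether image must be (re)loaded into the container
// runtime: true when it is absent or its ID differs from expectedID.
func needsTransfer(image, expectedID string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // image not present in the runtime
	}
	return strings.TrimSpace(string(out)) != expectedID
}

func main() {
	fmt.Println(needsTransfer("registry.k8s.io/coredns:1.7.0",
		"bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16"))
}
```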
	I1030 19:46:13.552096  447486 kubeadm.go:934] updating node { 192.168.50.250 8443 v1.20.0 crio true true} ...
	I1030 19:46:13.552211  447486 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-516975 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-516975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1030 19:46:13.552276  447486 ssh_runner.go:195] Run: crio config
	I1030 19:46:13.605982  447486 cni.go:84] Creating CNI manager for ""
	I1030 19:46:13.606008  447486 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:46:13.606020  447486 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1030 19:46:13.606049  447486 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.250 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-516975 NodeName:old-k8s-version-516975 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.250"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.250 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1030 19:46:13.606223  447486 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.250
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-516975"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.250
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.250"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1030 19:46:13.606299  447486 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1030 19:46:13.616954  447486 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 19:46:13.617034  447486 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1030 19:46:13.627440  447486 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1030 19:46:13.644821  447486 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 19:46:13.662070  447486 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1030 19:46:13.679198  447486 ssh_runner.go:195] Run: grep 192.168.50.250	control-plane.minikube.internal$ /etc/hosts
	I1030 19:46:13.682992  447486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.250	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:46:13.697879  447486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:46:13.819975  447486 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:46:13.838669  447486 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975 for IP: 192.168.50.250
	I1030 19:46:13.838695  447486 certs.go:194] generating shared ca certs ...
	I1030 19:46:13.838716  447486 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:46:13.838888  447486 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 19:46:13.838946  447486 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 19:46:13.838962  447486 certs.go:256] generating profile certs ...
	I1030 19:46:13.839064  447486 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/client.key
	I1030 19:46:13.839149  447486 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/apiserver.key.685bdf3e
	I1030 19:46:13.839208  447486 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/proxy-client.key
	I1030 19:46:13.839375  447486 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem (1338 bytes)
	W1030 19:46:13.839429  447486 certs.go:480] ignoring /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144_empty.pem, impossibly tiny 0 bytes
	I1030 19:46:13.839442  447486 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 19:46:13.839476  447486 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 19:46:13.839509  447486 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 19:46:13.839545  447486 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 19:46:13.839609  447486 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:46:13.840381  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 19:46:13.868947  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 19:46:13.923848  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 19:46:13.973167  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 19:46:14.009333  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1030 19:46:14.042397  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1030 19:46:14.073927  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 19:46:14.109209  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/old-k8s-version-516975/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1030 19:46:14.135708  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /usr/share/ca-certificates/3891442.pem (1708 bytes)
	I1030 19:46:14.162145  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 19:46:14.186176  447486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem --> /usr/share/ca-certificates/389144.pem (1338 bytes)
	I1030 19:46:14.210362  447486 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1030 19:46:14.228727  447486 ssh_runner.go:195] Run: openssl version
	I1030 19:46:14.234436  447486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/389144.pem && ln -fs /usr/share/ca-certificates/389144.pem /etc/ssl/certs/389144.pem"
	I1030 19:46:14.245497  447486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/389144.pem
	I1030 19:46:14.250026  447486 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 19:46:14.250077  447486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/389144.pem
	I1030 19:46:14.255727  447486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/389144.pem /etc/ssl/certs/51391683.0"
	I1030 19:46:14.266674  447486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3891442.pem && ln -fs /usr/share/ca-certificates/3891442.pem /etc/ssl/certs/3891442.pem"
	I1030 19:46:14.277813  447486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3891442.pem
	I1030 19:46:14.282378  447486 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 19:46:14.282435  447486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3891442.pem
	I1030 19:46:14.288338  447486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3891442.pem /etc/ssl/certs/3ec20f2e.0"
	I1030 19:46:14.300057  447486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 19:46:14.312295  447486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:46:14.317488  447486 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:46:14.317555  447486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:46:14.323518  447486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
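The ls/openssl/ln sequence above installs each certificate under the name OpenSSL looks it up by: `openssl x509 -hash -noout` prints the subject hash, and /etc/ssl/certs/<hash>.0 is symlinked to the PEM file. A small sketch that computes the link target by shelling out to openssl (illustrative only):

```go
package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

// trustLinkFor returns the /etc/ssl/certs/<subject-hash>.0 path that the
// certificate at certPath would be linked from.
func trustLinkFor(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	return filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0"), nil
}

func main() {
	fmt.Println(trustLinkFor("/usr/share/ca-certificates/minikubeCA.pem"))
}
```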
	I1030 19:46:14.335182  447486 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 19:46:14.339998  447486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1030 19:46:14.346145  447486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1030 19:46:14.352474  447486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1030 19:46:14.358687  447486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1030 19:46:14.364275  447486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1030 19:46:14.370038  447486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
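`openssl x509 -checkend 86400` exits non-zero when a certificate expires within the next 86400 seconds; the runs above apply that check to each control-plane cert. The same test expressed in Go (a sketch; the path is just one of the files checked above):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	fmt.Println(expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour))
}
```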
	I1030 19:46:14.376051  447486 kubeadm.go:392] StartCluster: {Name:old-k8s-version-516975 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-516975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:46:14.376144  447486 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1030 19:46:14.376187  447486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:46:14.423395  447486 cri.go:89] found id: ""
	I1030 19:46:14.423477  447486 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1030 19:46:14.435404  447486 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1030 19:46:14.435485  447486 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1030 19:46:14.435558  447486 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1030 19:46:14.448035  447486 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1030 19:46:14.448911  447486 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-516975" does not appear in /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 19:46:14.449557  447486 kubeconfig.go:62] /home/jenkins/minikube-integration/19883-381834/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-516975" cluster setting kubeconfig missing "old-k8s-version-516975" context setting]
	I1030 19:46:14.450419  447486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/kubeconfig: {Name:mkad9ac9beb9742fc1a420e6d4412db8b65962e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:46:14.452252  447486 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1030 19:46:14.462634  447486 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.250
	I1030 19:46:14.462676  447486 kubeadm.go:1160] stopping kube-system containers ...
	I1030 19:46:14.462693  447486 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1030 19:46:14.462750  447486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:46:14.508286  447486 cri.go:89] found id: ""
	I1030 19:46:14.508380  447486 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1030 19:46:14.527996  447486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:46:14.539011  447486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:46:14.539037  447486 kubeadm.go:157] found existing configuration files:
	
	I1030 19:46:14.539094  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 19:46:14.550159  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:46:14.550243  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:46:14.561350  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 19:46:14.571353  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:46:14.571430  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:46:14.584480  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 19:46:14.598307  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:46:14.598400  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:46:14.611632  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 19:46:14.621644  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:46:14.621705  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
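The grep/rm cycles above implement a simple stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already references control-plane.minikube.internal:8443, otherwise it is removed so kubeadm can regenerate it. A local (non-SSH) sketch of the same idea:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleConfigs removes any config file that does not mention endpoint,
// mirroring the grep-then-rm pattern in the log above.
func cleanStaleConfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, p)
			os.Remove(p) // ignore the error; the file may simply not exist
		}
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
```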
	I1030 19:46:14.632161  447486 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 19:46:14.642295  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:14.783130  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:15.694839  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:15.923329  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:16.052124  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:16.143607  447486 api_server.go:52] waiting for apiserver process to appear ...
	I1030 19:46:16.143710  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:16.643943  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:13.245727  446965 pod_ready.go:103] pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:13.702440  446965 pod_ready.go:93] pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:46:13.702472  446965 pod_ready.go:82] duration metric: took 5.390235543s for pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:13.702497  446965 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:13.948519  446965 pod_ready.go:93] pod "kube-controller-manager-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:46:13.948549  446965 pod_ready.go:82] duration metric: took 246.042214ms for pod "kube-controller-manager-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:13.948565  446965 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-qwjr9" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:13.958077  446965 pod_ready.go:93] pod "kube-proxy-qwjr9" in "kube-system" namespace has status "Ready":"True"
	I1030 19:46:13.958108  446965 pod_ready.go:82] duration metric: took 9.534813ms for pod "kube-proxy-qwjr9" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:13.958122  446965 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:13.974906  446965 pod_ready.go:93] pod "kube-scheduler-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:46:13.974931  446965 pod_ready.go:82] duration metric: took 16.800547ms for pod "kube-scheduler-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:13.974944  446965 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:15.982433  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:17.983261  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
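The pod_ready waits above poll each pod until its PodReady condition turns True (or the 4m0s timeout fires). A hedged client-go sketch of that single check, assuming k8s.io/client-go is on the module path and using a pod name from the log purely as an example:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reads one pod and reports whether its PodReady condition is True.
func podIsReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ready, err := podIsReady(context.Background(), cs, "kube-system", "metrics-server-6867b74b74-4x9t6")
	fmt.Println(ready, err)
}
```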
	I1030 19:46:14.440176  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:16.939769  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:16.690435  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:16.690908  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:16.690997  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:16.690904  448568 retry.go:31] will retry after 2.729556206s: waiting for machine to come up
	I1030 19:46:19.423740  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:19.424246  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:19.424271  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:19.424195  448568 retry.go:31] will retry after 2.822049517s: waiting for machine to come up
	I1030 19:46:17.144678  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:17.644772  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:18.144037  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:18.644437  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:19.144273  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:19.643801  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:20.144200  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:20.644764  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:21.143898  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:21.643960  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
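The repeated pgrep runs above are a roughly 500ms poll waiting for the kube-apiserver process to appear after the kubeadm init phases bring up the static pods. A minimal sketch of that wait (illustrative; not minikube's api_server helper):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until a matching kube-apiserver process
// exists or the timeout elapses.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil // process found
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %v", timeout)
}

func main() {
	fmt.Println(waitForAPIServerProcess(2 * time.Second))
}
```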
	I1030 19:46:20.481213  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:22.981619  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:19.438946  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:21.938706  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:22.247395  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:22.247840  446736 main.go:141] libmachine: (no-preload-960512) DBG | unable to find current IP address of domain no-preload-960512 in network mk-no-preload-960512
	I1030 19:46:22.247869  446736 main.go:141] libmachine: (no-preload-960512) DBG | I1030 19:46:22.247813  448568 retry.go:31] will retry after 5.243633747s: waiting for machine to come up
	I1030 19:46:22.144625  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:22.644446  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:23.144207  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:23.644001  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:24.143787  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:24.644166  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:25.144397  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:25.644654  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:26.144214  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:26.644275  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:25.482032  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:27.981111  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:23.940402  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:26.439369  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:27.494630  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.495107  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has current primary IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.495146  446736 main.go:141] libmachine: (no-preload-960512) Found IP for machine: 192.168.72.132
	I1030 19:46:27.495159  446736 main.go:141] libmachine: (no-preload-960512) Reserving static IP address...
	I1030 19:46:27.495588  446736 main.go:141] libmachine: (no-preload-960512) Reserved static IP address: 192.168.72.132
	I1030 19:46:27.495612  446736 main.go:141] libmachine: (no-preload-960512) Waiting for SSH to be available...
	I1030 19:46:27.495634  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "no-preload-960512", mac: "52:54:00:71:5b:b2", ip: "192.168.72.132"} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:27.495664  446736 main.go:141] libmachine: (no-preload-960512) DBG | skip adding static IP to network mk-no-preload-960512 - found existing host DHCP lease matching {name: "no-preload-960512", mac: "52:54:00:71:5b:b2", ip: "192.168.72.132"}
	I1030 19:46:27.495678  446736 main.go:141] libmachine: (no-preload-960512) DBG | Getting to WaitForSSH function...
	I1030 19:46:27.497679  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.498051  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:27.498083  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.498231  446736 main.go:141] libmachine: (no-preload-960512) DBG | Using SSH client type: external
	I1030 19:46:27.498273  446736 main.go:141] libmachine: (no-preload-960512) DBG | Using SSH private key: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa (-rw-------)
	I1030 19:46:27.498316  446736 main.go:141] libmachine: (no-preload-960512) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.132 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 19:46:27.498344  446736 main.go:141] libmachine: (no-preload-960512) DBG | About to run SSH command:
	I1030 19:46:27.498355  446736 main.go:141] libmachine: (no-preload-960512) DBG | exit 0
	I1030 19:46:27.626476  446736 main.go:141] libmachine: (no-preload-960512) DBG | SSH cmd err, output: <nil>: 
	I1030 19:46:27.626850  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetConfigRaw
	I1030 19:46:27.627519  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetIP
	I1030 19:46:27.629913  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.630288  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:27.630326  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.630561  446736 profile.go:143] Saving config to /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/config.json ...
	I1030 19:46:27.630778  446736 machine.go:93] provisionDockerMachine start ...
	I1030 19:46:27.630801  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:27.631021  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:27.633457  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.633849  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:27.633880  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.634032  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:27.634200  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:27.634393  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:27.634564  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:27.634741  446736 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:27.634940  446736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I1030 19:46:27.634952  446736 main.go:141] libmachine: About to run SSH command:
	hostname
	I1030 19:46:27.743135  446736 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1030 19:46:27.743167  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetMachineName
	I1030 19:46:27.743475  446736 buildroot.go:166] provisioning hostname "no-preload-960512"
	I1030 19:46:27.743516  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetMachineName
	I1030 19:46:27.743717  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:27.746369  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.746726  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:27.746758  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.746928  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:27.747114  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:27.747266  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:27.747380  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:27.747509  446736 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:27.747740  446736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I1030 19:46:27.747759  446736 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-960512 && echo "no-preload-960512" | sudo tee /etc/hostname
	I1030 19:46:27.872871  446736 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-960512
	
	I1030 19:46:27.872899  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:27.875533  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.875867  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:27.875908  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:27.876072  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:27.876274  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:27.876546  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:27.876690  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:27.876851  446736 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:27.877082  446736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I1030 19:46:27.877099  446736 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-960512' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-960512/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-960512' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 19:46:27.999551  446736 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 19:46:27.999617  446736 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19883-381834/.minikube CaCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19883-381834/.minikube}
	I1030 19:46:27.999654  446736 buildroot.go:174] setting up certificates
	I1030 19:46:27.999667  446736 provision.go:84] configureAuth start
	I1030 19:46:27.999689  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetMachineName
	I1030 19:46:27.999998  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetIP
	I1030 19:46:28.002874  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.003285  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.003317  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.003474  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:28.005987  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.006376  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.006418  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.006545  446736 provision.go:143] copyHostCerts
	I1030 19:46:28.006620  446736 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem, removing ...
	I1030 19:46:28.006639  446736 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem
	I1030 19:46:28.006707  446736 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/ca.pem (1082 bytes)
	I1030 19:46:28.006846  446736 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem, removing ...
	I1030 19:46:28.006859  446736 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem
	I1030 19:46:28.006898  446736 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/cert.pem (1123 bytes)
	I1030 19:46:28.006983  446736 exec_runner.go:144] found /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem, removing ...
	I1030 19:46:28.006993  446736 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem
	I1030 19:46:28.007023  446736 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19883-381834/.minikube/key.pem (1679 bytes)
	I1030 19:46:28.007102  446736 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem org=jenkins.no-preload-960512 san=[127.0.0.1 192.168.72.132 localhost minikube no-preload-960512]
	I1030 19:46:28.317424  446736 provision.go:177] copyRemoteCerts
	I1030 19:46:28.317502  446736 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 19:46:28.317537  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:28.320089  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.320387  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.320419  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.320564  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:28.320776  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.320963  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:28.321116  446736 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa Username:docker}
	I1030 19:46:28.409344  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1030 19:46:28.434874  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 19:46:28.459903  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1030 19:46:28.486949  446736 provision.go:87] duration metric: took 487.261556ms to configureAuth
	I1030 19:46:28.486981  446736 buildroot.go:189] setting minikube options for container-runtime
	I1030 19:46:28.487219  446736 config.go:182] Loaded profile config "no-preload-960512": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:46:28.487322  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:28.489873  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.490180  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.490223  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.490349  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:28.490561  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.490719  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.490827  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:28.491003  446736 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:28.491199  446736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I1030 19:46:28.491216  446736 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 19:46:28.727045  446736 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 19:46:28.727081  446736 machine.go:96] duration metric: took 1.096287528s to provisionDockerMachine
	I1030 19:46:28.727095  446736 start.go:293] postStartSetup for "no-preload-960512" (driver="kvm2")
	I1030 19:46:28.727106  446736 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 19:46:28.727125  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:28.727460  446736 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 19:46:28.727490  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:28.730071  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.730445  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.730479  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.730652  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:28.730858  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.731010  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:28.731197  446736 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa Username:docker}
	I1030 19:46:28.817529  446736 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 19:46:28.822263  446736 info.go:137] Remote host: Buildroot 2023.02.9
	I1030 19:46:28.822299  446736 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/addons for local assets ...
	I1030 19:46:28.822394  446736 filesync.go:126] Scanning /home/jenkins/minikube-integration/19883-381834/.minikube/files for local assets ...
	I1030 19:46:28.822517  446736 filesync.go:149] local asset: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem -> 3891442.pem in /etc/ssl/certs
	I1030 19:46:28.822647  446736 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 19:46:28.832488  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:46:28.858165  446736 start.go:296] duration metric: took 131.055053ms for postStartSetup
	I1030 19:46:28.858211  446736 fix.go:56] duration metric: took 23.84652817s for fixHost
	I1030 19:46:28.858235  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:28.861136  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.861480  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.861513  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.861819  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:28.862059  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.862224  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.862373  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:28.862582  446736 main.go:141] libmachine: Using SSH client type: native
	I1030 19:46:28.862786  446736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I1030 19:46:28.862797  446736 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 19:46:28.975448  446736 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730317588.951806388
	
	I1030 19:46:28.975479  446736 fix.go:216] guest clock: 1730317588.951806388
	I1030 19:46:28.975489  446736 fix.go:229] Guest: 2024-10-30 19:46:28.951806388 +0000 UTC Remote: 2024-10-30 19:46:28.858215114 +0000 UTC m=+358.930371017 (delta=93.591274ms)
	I1030 19:46:28.975521  446736 fix.go:200] guest clock delta is within tolerance: 93.591274ms
	I1030 19:46:28.975529  446736 start.go:83] releasing machines lock for "no-preload-960512", held for 23.963879546s
	I1030 19:46:28.975555  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:28.975849  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetIP
	I1030 19:46:28.978813  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.979310  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.979341  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.979608  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:28.980197  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:28.980429  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:28.980522  446736 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 19:46:28.980567  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:28.980682  446736 ssh_runner.go:195] Run: cat /version.json
	I1030 19:46:28.980710  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:28.984058  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.984208  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.984410  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.984435  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.984582  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:28.984613  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:28.984636  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:28.984782  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.984798  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:28.984966  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:28.984974  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:28.985121  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:28.985187  446736 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa Username:docker}
	I1030 19:46:28.985260  446736 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa Username:docker}
	I1030 19:46:29.063734  446736 ssh_runner.go:195] Run: systemctl --version
	I1030 19:46:29.087821  446736 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 19:46:29.236289  446736 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 19:46:29.242997  446736 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 19:46:29.243088  446736 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 19:46:29.260802  446736 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 19:46:29.260836  446736 start.go:495] detecting cgroup driver to use...
	I1030 19:46:29.260930  446736 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 19:46:29.279572  446736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 19:46:29.293359  446736 docker.go:217] disabling cri-docker service (if available) ...
	I1030 19:46:29.293423  446736 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 19:46:29.306417  446736 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 19:46:29.319617  446736 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 19:46:29.440023  446736 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 19:46:29.585541  446736 docker.go:233] disabling docker service ...
	I1030 19:46:29.585630  446736 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 19:46:29.600459  446736 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 19:46:29.613611  446736 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 19:46:29.752666  446736 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 19:46:29.880152  446736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 19:46:29.893912  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 19:46:29.913099  446736 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1030 19:46:29.913160  446736 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:29.923800  446736 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 19:46:29.923882  446736 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:29.934880  446736 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:29.946088  446736 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:29.956644  446736 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 19:46:29.967199  446736 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:29.978863  446736 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:29.996225  446736 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 19:46:30.006604  446736 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 19:46:30.015954  446736 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 19:46:30.016017  446736 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 19:46:30.029194  446736 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 19:46:30.041316  446736 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:46:30.161438  446736 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1030 19:46:30.257137  446736 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 19:46:30.257209  446736 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 19:46:30.261981  446736 start.go:563] Will wait 60s for crictl version
	I1030 19:46:30.262052  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:30.266275  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 19:46:30.305128  446736 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1030 19:46:30.305228  446736 ssh_runner.go:195] Run: crio --version
	I1030 19:46:30.335445  446736 ssh_runner.go:195] Run: crio --version
	I1030 19:46:30.367026  446736 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1030 19:46:27.143768  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:27.644294  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:28.143819  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:28.643783  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:29.144405  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:29.643941  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:30.144680  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:30.644787  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:31.143873  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:31.643857  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:29.982162  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:32.480878  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:28.939126  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:30.939780  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:30.368355  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetIP
	I1030 19:46:30.371260  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:30.371651  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:30.371679  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:30.371922  446736 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1030 19:46:30.376282  446736 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:46:30.389078  446736 kubeadm.go:883] updating cluster {Name:no-preload-960512 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-960512 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.132 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1030 19:46:30.389193  446736 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 19:46:30.389228  446736 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 19:46:30.423375  446736 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1030 19:46:30.423402  446736 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1030 19:46:30.423508  446736 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:30.423511  446736 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1030 19:46:30.423562  446736 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1030 19:46:30.423578  446736 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1030 19:46:30.423595  446736 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1030 19:46:30.423536  446736 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1030 19:46:30.423511  446736 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1030 19:46:30.423634  446736 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1030 19:46:30.424979  446736 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1030 19:46:30.424988  446736 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1030 19:46:30.424996  446736 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1030 19:46:30.424987  446736 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:30.425021  446736 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1030 19:46:30.425036  446736 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1030 19:46:30.425029  446736 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1030 19:46:30.425061  446736 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1030 19:46:30.612665  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1030 19:46:30.618602  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1030 19:46:30.636563  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1030 19:46:30.680808  446736 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1030 19:46:30.680858  446736 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1030 19:46:30.680911  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:30.749318  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1030 19:46:30.750405  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1030 19:46:30.751514  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1030 19:46:30.752746  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1030 19:46:30.768614  446736 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1030 19:46:30.768663  446736 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1030 19:46:30.768714  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:30.768723  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1030 19:46:30.881778  446736 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1030 19:46:30.881811  446736 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1030 19:46:30.881821  446736 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1030 19:46:30.881844  446736 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1030 19:46:30.881862  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:30.881883  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:30.884827  446736 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1030 19:46:30.884861  446736 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1030 19:46:30.884901  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:30.891812  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1030 19:46:30.891882  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1030 19:46:30.891907  446736 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1030 19:46:30.891940  446736 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1030 19:46:30.891981  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:30.891986  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1030 19:46:30.892142  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1030 19:46:30.893781  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1030 19:46:30.992346  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1030 19:46:30.992372  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1030 19:46:30.992404  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1030 19:46:30.995602  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1030 19:46:30.995730  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1030 19:46:30.995786  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1030 19:46:31.123892  446736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1030 19:46:31.123996  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1030 19:46:31.124018  446736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1030 19:46:31.132177  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1030 19:46:31.132209  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1030 19:46:31.132311  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1030 19:46:31.132335  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1030 19:46:31.220011  446736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1030 19:46:31.220043  446736 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1030 19:46:31.220100  446736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1030 19:46:31.220224  446736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1030 19:46:31.220329  446736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1030 19:46:31.262583  446736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1030 19:46:31.262685  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1030 19:46:31.262698  446736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1030 19:46:31.269015  446736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1030 19:46:31.269117  446736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1030 19:46:31.269710  446736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1030 19:46:31.269793  446736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1030 19:46:32.667341  446736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:33.216743  446736 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (1.99661544s)
	I1030 19:46:33.216787  446736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1030 19:46:33.216787  446736 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.996433716s)
	I1030 19:46:33.216820  446736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1030 19:46:33.216829  446736 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1030 19:46:33.216840  446736 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (1.95412356s)
	I1030 19:46:33.216872  446736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1030 19:46:33.216884  446736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1030 19:46:33.216925  446736 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2: (1.954216284s)
	I1030 19:46:33.216964  446736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1030 19:46:33.216989  446736 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (1.947854262s)
	I1030 19:46:33.217014  446736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1030 19:46:33.217027  446736 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2: (1.947220506s)
	I1030 19:46:33.217040  446736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1030 19:46:33.217059  446736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1030 19:46:33.217140  446736 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1030 19:46:33.217178  446736 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:33.217222  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:46:32.144229  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:32.644079  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:33.144028  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:33.643950  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:34.143888  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:34.643861  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:35.144210  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:35.644677  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:36.144712  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:36.644549  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:34.481488  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:36.982429  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:33.438659  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:35.439072  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:37.440028  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:35.577178  446736 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (2.360267806s)
	I1030 19:46:35.577219  446736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1030 19:46:35.577227  446736 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2: (2.360144583s)
	I1030 19:46:35.577243  446736 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1030 19:46:35.577252  446736 ssh_runner.go:235] Completed: which crictl: (2.360017291s)
	I1030 19:46:35.577259  446736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1030 19:46:35.577305  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:35.577309  446736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1030 19:46:35.615490  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:39.492071  446736 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.914649003s)
	I1030 19:46:39.492116  446736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1030 19:46:39.492142  446736 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.876615301s)
	I1030 19:46:39.492211  446736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:39.492148  446736 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1030 19:46:39.492295  446736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1030 19:46:39.535258  446736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1030 19:46:39.535417  446736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1030 19:46:37.144681  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:37.643833  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:38.143783  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:38.644359  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:39.144745  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:39.644625  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:40.144535  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:40.643881  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:41.144754  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:41.644070  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:39.302627  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:41.480981  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:39.940272  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:42.439827  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:41.566095  446736 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.073767908s)
	I1030 19:46:41.566140  446736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1030 19:46:41.566167  446736 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1030 19:46:41.566169  446736 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.030723752s)
	I1030 19:46:41.566210  446736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1030 19:46:41.566224  446736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1030 19:46:43.628473  446736 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.06223599s)
	I1030 19:46:43.628500  446736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1030 19:46:43.628525  446736 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1030 19:46:43.628570  446736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1030 19:46:42.144672  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:42.644533  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:43.144320  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:43.644574  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:44.144465  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:44.644428  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:45.143785  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:45.643767  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:46.144467  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:46.644496  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:43.481495  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:45.481844  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:47.982318  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:44.940061  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:47.439131  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:45.079808  446736 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.451207821s)
	I1030 19:46:45.079843  446736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1030 19:46:45.079870  446736 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1030 19:46:45.079918  446736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1030 19:46:46.026472  446736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19883-381834/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1030 19:46:46.026538  446736 cache_images.go:123] Successfully loaded all cached images
	I1030 19:46:46.026547  446736 cache_images.go:92] duration metric: took 15.603128567s to LoadCachedImages
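[editor's note] The lines above show the no-preload image step: each cached tarball under /var/lib/minikube/images is pushed to the node and loaded into CRI-O's store with "sudo podman load -i <tar>", and the whole step is timed as LoadCachedImages. A minimal Go sketch of that loop; runSSH is a hypothetical stand-in for minikube's ssh_runner, not the real implementation.

package main

import (
	"fmt"
	"log"
	"time"
)

// runSSH stands in for executing a command on the node over SSH; it is
// stubbed here purely for illustration.
func runSSH(cmd string) error {
	log.Printf("Run: %s", cmd)
	return nil
}

// loadCachedImages mirrors the repeated "podman load -i" calls in the log.
func loadCachedImages(tarballs []string) error {
	start := time.Now()
	for _, tb := range tarballs {
		if err := runSSH("sudo podman load -i " + tb); err != nil {
			return fmt.Errorf("loading %s: %w", tb, err)
		}
	}
	log.Printf("duration metric: took %s to LoadCachedImages", time.Since(start))
	return nil
}

func main() {
	_ = loadCachedImages([]string{
		"/var/lib/minikube/images/coredns_v1.11.3",
		"/var/lib/minikube/images/kube-apiserver_v1.31.2",
		"/var/lib/minikube/images/kube-scheduler_v1.31.2",
	})
}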
	I1030 19:46:46.026562  446736 kubeadm.go:934] updating node { 192.168.72.132 8443 v1.31.2 crio true true} ...
	I1030 19:46:46.026722  446736 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-960512 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.132
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-960512 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1030 19:46:46.026819  446736 ssh_runner.go:195] Run: crio config
	I1030 19:46:46.080342  446736 cni.go:84] Creating CNI manager for ""
	I1030 19:46:46.080367  446736 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:46:46.080376  446736 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1030 19:46:46.080399  446736 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.132 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-960512 NodeName:no-preload-960512 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.132"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.132 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1030 19:46:46.080574  446736 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.132
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-960512"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.132"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.132"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1030 19:46:46.080645  446736 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1030 19:46:46.091323  446736 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 19:46:46.091400  446736 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1030 19:46:46.100320  446736 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1030 19:46:46.117369  446736 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 19:46:46.133667  446736 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1030 19:46:46.157251  446736 ssh_runner.go:195] Run: grep 192.168.72.132	control-plane.minikube.internal$ /etc/hosts
	I1030 19:46:46.161543  446736 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.132	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 19:46:46.173451  446736 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:46:46.303532  446736 ssh_runner.go:195] Run: sudo systemctl start kubelet
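[editor's note] Just above, the generated kubelet drop-in, the kubelet.service unit, and kubeadm.yaml.new are copied onto the node, the control-plane.minikube.internal entry in /etc/hosts is replaced with the node IP via a grep -v / echo / cp pipeline, and systemd is reloaded before starting the kubelet. A sketch of the same idempotent hosts-file update done locally in Go (the log does it remotely through bash); the function name is illustrative only.

package main

import (
	"os"
	"strings"
)

// ensureHostsEntry rewrites hostsPath so it contains exactly one line mapping
// host to ip, dropping any stale line for the same host first.
func ensureHostsEntry(hostsPath, ip, host string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	_ = ensureHostsEntry("/etc/hosts", "192.168.72.132", "control-plane.minikube.internal")
}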
	I1030 19:46:46.321855  446736 certs.go:68] Setting up /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512 for IP: 192.168.72.132
	I1030 19:46:46.321883  446736 certs.go:194] generating shared ca certs ...
	I1030 19:46:46.321905  446736 certs.go:226] acquiring lock for ca certs: {Name:mk7846d2584162eb06783c46944563970e4e21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:46:46.322108  446736 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key
	I1030 19:46:46.322171  446736 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key
	I1030 19:46:46.322189  446736 certs.go:256] generating profile certs ...
	I1030 19:46:46.322294  446736 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/client.key
	I1030 19:46:46.322381  446736 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/apiserver.key.378d6029
	I1030 19:46:46.322436  446736 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/proxy-client.key
	I1030 19:46:46.322609  446736 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem (1338 bytes)
	W1030 19:46:46.322649  446736 certs.go:480] ignoring /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144_empty.pem, impossibly tiny 0 bytes
	I1030 19:46:46.322661  446736 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 19:46:46.322692  446736 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/ca.pem (1082 bytes)
	I1030 19:46:46.322727  446736 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/cert.pem (1123 bytes)
	I1030 19:46:46.322756  446736 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/certs/key.pem (1679 bytes)
	I1030 19:46:46.322812  446736 certs.go:484] found cert: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem (1708 bytes)
	I1030 19:46:46.323679  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 19:46:46.362339  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 19:46:46.396270  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 19:46:46.443482  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 19:46:46.468142  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1030 19:46:46.507418  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1030 19:46:46.534091  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 19:46:46.557105  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/no-preload-960512/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1030 19:46:46.579880  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 19:46:46.602665  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/certs/389144.pem --> /usr/share/ca-certificates/389144.pem (1338 bytes)
	I1030 19:46:46.625853  446736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/ssl/certs/3891442.pem --> /usr/share/ca-certificates/3891442.pem (1708 bytes)
	I1030 19:46:46.651685  446736 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1030 19:46:46.670898  446736 ssh_runner.go:195] Run: openssl version
	I1030 19:46:46.677083  446736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3891442.pem && ln -fs /usr/share/ca-certificates/3891442.pem /etc/ssl/certs/3891442.pem"
	I1030 19:46:46.688814  446736 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3891442.pem
	I1030 19:46:46.693349  446736 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:35 /usr/share/ca-certificates/3891442.pem
	I1030 19:46:46.693399  446736 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3891442.pem
	I1030 19:46:46.699221  446736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3891442.pem /etc/ssl/certs/3ec20f2e.0"
	I1030 19:46:46.710200  446736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 19:46:46.721001  446736 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:46:46.725283  446736 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:46:46.725343  446736 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 19:46:46.730798  446736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 19:46:46.741915  446736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/389144.pem && ln -fs /usr/share/ca-certificates/389144.pem /etc/ssl/certs/389144.pem"
	I1030 19:46:46.752767  446736 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/389144.pem
	I1030 19:46:46.757109  446736 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:35 /usr/share/ca-certificates/389144.pem
	I1030 19:46:46.757150  446736 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/389144.pem
	I1030 19:46:46.762844  446736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/389144.pem /etc/ssl/certs/51391683.0"
	I1030 19:46:46.773796  446736 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 19:46:46.778156  446736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1030 19:46:46.784099  446736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1030 19:46:46.789960  446736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1030 19:46:46.796056  446736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1030 19:46:46.801880  446736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1030 19:46:46.807680  446736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
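[editor's note] The certificate block above copies the CA bundles into /usr/share/ca-certificates, computes each file's OpenSSL subject hash, links it as /etc/ssl/certs/<hash>.0 (e.g. b5213941.0 for minikubeCA.pem), and then verifies every control-plane certificate with "openssl x509 -checkend 86400", i.e. valid for at least one more day. A compact Go sketch of those two openssl checks, assuming openssl is on PATH; this is not minikube's own code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHash returns the OpenSSL subject hash used to name the symlinks
// under /etc/ssl/certs.
func subjectHash(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

// validForADay mirrors "openssl x509 -checkend 86400": a zero exit status
// means the certificate does not expire within the next 86400 seconds.
func validForADay(certPath string) bool {
	return exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run() == nil
}

func main() {
	h, _ := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
	fmt.Println(h, validForADay("/var/lib/minikube/certs/apiserver-kubelet-client.crt"))
}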
	I1030 19:46:46.813574  446736 kubeadm.go:392] StartCluster: {Name:no-preload-960512 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-960512 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.132 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 19:46:46.813694  446736 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1030 19:46:46.813735  446736 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:46:46.856225  446736 cri.go:89] found id: ""
	I1030 19:46:46.856309  446736 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1030 19:46:46.866696  446736 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1030 19:46:46.866721  446736 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1030 19:46:46.866774  446736 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1030 19:46:46.876622  446736 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1030 19:46:46.877777  446736 kubeconfig.go:125] found "no-preload-960512" server: "https://192.168.72.132:8443"
	I1030 19:46:46.880116  446736 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1030 19:46:46.889710  446736 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.132
	I1030 19:46:46.889743  446736 kubeadm.go:1160] stopping kube-system containers ...
	I1030 19:46:46.889761  446736 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1030 19:46:46.889837  446736 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 19:46:46.927109  446736 cri.go:89] found id: ""
	I1030 19:46:46.927177  446736 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1030 19:46:46.944519  446736 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:46:46.954607  446736 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:46:46.954626  446736 kubeadm.go:157] found existing configuration files:
	
	I1030 19:46:46.954669  446736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 19:46:46.963987  446736 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:46:46.964086  446736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:46:46.973787  446736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 19:46:46.983447  446736 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:46:46.983496  446736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:46:46.993101  446736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 19:46:47.003713  446736 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:46:47.003773  446736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:46:47.013162  446736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 19:46:47.022411  446736 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:46:47.022479  446736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
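[editor's note] In the cleanup pass above, restartPrimaryControlPlane greps each kubeconfig under /etc/kubernetes for the expected https://control-plane.minikube.internal:8443 endpoint and removes the file when the grep fails (here all four are simply missing), so kubeadm can regenerate them. A sketch of that pass, reusing the hypothetical runSSH helper from the earlier note.

package main

import "log"

// runSSH stands in for running a command on the node; it returns a non-nil
// error when the command exits non-zero (as grep does on no match). Stubbed.
func runSSH(cmd string) error { log.Printf("Run: %s", cmd); return nil }

// cleanStaleKubeconfigs removes any kubeconfig that does not reference the
// expected control-plane endpoint, matching the grep/rm pairs in the log.
func cleanStaleKubeconfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		if err := runSSH("sudo grep " + endpoint + " " + f); err != nil {
			_ = runSSH("sudo rm -f " + f)
		}
	}
}

func main() { cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443") }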
	I1030 19:46:47.031878  446736 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 19:46:47.041616  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:47.156846  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:48.637250  446736 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.480364831s)
	I1030 19:46:48.637284  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:48.836676  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:48.908664  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
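[editor's note] The restart path above does not run a full "kubeadm init"; it re-runs individual phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd) against the freshly copied /var/tmp/minikube/kubeadm.yaml, with PATH pointed at the versioned binaries. A sketch of that sequence, again with the hypothetical runSSH helper.

package main

import "log"

func runSSH(cmd string) error { log.Printf("Run: %s", cmd); return nil }

// runInitPhases mirrors the per-phase kubeadm invocations in the log.
func runInitPhases() error {
	const env = `sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" `
	for _, phase := range []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	} {
		cmd := env + "kubeadm init phase " + phase + " --config /var/tmp/minikube/kubeadm.yaml"
		if err := runSSH(cmd); err != nil {
			return err
		}
	}
	return nil
}

func main() { _ = runInitPhases() }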
	I1030 19:46:48.987298  446736 api_server.go:52] waiting for apiserver process to appear ...
	I1030 19:46:48.987411  446736 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:49.488330  446736 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:47.143932  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:47.644228  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:48.144124  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:48.643923  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:49.144466  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:49.643968  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:50.144811  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:50.643785  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:51.144372  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:51.644019  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:49.983127  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:52.482250  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:49.939257  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:52.439840  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:49.988463  446736 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:50.024092  446736 api_server.go:72] duration metric: took 1.036791371s to wait for apiserver process to appear ...
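[editor's note] Both restarting profiles (446736 and 447486) are in the same loop here: poll roughly every 500ms for an apiserver process with "sudo pgrep -xnf kube-apiserver.*minikube.*" and record how long the wait took. A small polling sketch of that shape; timings and the runSSH stub are illustrative.

package main

import (
	"errors"
	"log"
	"time"
)

func runSSH(cmd string) error { log.Printf("Run: %s", cmd); return nil }

// waitForAPIServerProcess polls pgrep until kube-apiserver shows up or the
// deadline passes, mirroring the repeated Run lines above.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if runSSH("sudo pgrep -xnf kube-apiserver.*minikube.*") == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("timed out waiting for apiserver process")
}

func main() {
	start := time.Now()
	if err := waitForAPIServerProcess(4 * time.Minute); err == nil {
		log.Printf("took %s to wait for apiserver process to appear", time.Since(start))
	}
}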
	I1030 19:46:50.024127  446736 api_server.go:88] waiting for apiserver healthz status ...
	I1030 19:46:50.024155  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:50.024711  446736 api_server.go:269] stopped: https://192.168.72.132:8443/healthz: Get "https://192.168.72.132:8443/healthz": dial tcp 192.168.72.132:8443: connect: connection refused
	I1030 19:46:50.524543  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:52.757497  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1030 19:46:52.757537  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1030 19:46:52.757558  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:52.847598  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:46:52.847638  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:46:53.024885  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:53.030717  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:46:53.030749  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:46:53.524384  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:53.531420  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:46:53.531459  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:46:54.025006  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:54.030512  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:46:54.030545  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:46:54.525157  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:54.529426  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:46:54.529453  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:46:55.025276  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:55.029608  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:46:55.029634  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:46:55.525041  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:55.529303  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1030 19:46:55.529339  446736 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1030 19:46:56.024906  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:46:56.029520  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 200:
	ok
	I1030 19:46:56.035579  446736 api_server.go:141] control plane version: v1.31.2
	I1030 19:46:56.035609  446736 api_server.go:131] duration metric: took 6.011468992s to wait for apiserver health ...
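[editor's note] Once the process exists, the wait switches to HTTP: /healthz on the advertised address first refuses the connection, then returns 403 (anonymous user before the RBAC bootstrap roles exist), then 500 while individual post-start hooks are still failing, and finally 200. A sketch of such a poll loop that tolerates the apiserver's self-signed certificate; this is not minikube's exact client setup.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz polls url until it returns 200 ok or the deadline passes,
// printing the body on non-200 responses the way the log above does.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a cert signed by minikubeCA; verification is
		// skipped here purely for illustration.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() { _ = pollHealthz("https://192.168.72.132:8443/healthz", 4*time.Minute) }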
	I1030 19:46:56.035619  446736 cni.go:84] Creating CNI manager for ""
	I1030 19:46:56.035625  446736 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:46:56.037524  446736 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1030 19:46:52.144732  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:52.644528  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:53.144074  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:53.643889  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:54.143976  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:54.644535  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:55.144783  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:55.644114  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:56.144728  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:56.643846  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:56.038963  446736 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1030 19:46:56.050330  446736 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
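[editor's note] Because no custom CNI was requested, the bridge CNI is configured by writing a conflist into /etc/cni/net.d. The exact 496-byte file minikube embeds is not shown in the log; the snippet below writes a generic bridge + host-local conflist of the same general shape for the 10.244.0.0/16 pod CIDR, as an assumption-labelled example only.

package main

import "os"

// genericBridgeConflist is a representative bridge CNI configuration for the
// pod CIDR used above; it is NOT the exact file minikube generates.
const genericBridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }
  ]
}
`

func main() {
	_ = os.MkdirAll("/etc/cni/net.d", 0755)
	_ = os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(genericBridgeConflist), 0644)
}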
	I1030 19:46:56.069509  446736 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 19:46:56.079237  446736 system_pods.go:59] 8 kube-system pods found
	I1030 19:46:56.079268  446736 system_pods.go:61] "coredns-7c65d6cfc9-6cdl4" [95a0a01a-b0ea-4e0c-ac44-b7f7a486e4e0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1030 19:46:56.079275  446736 system_pods.go:61] "etcd-no-preload-960512" [481cc4bc-fe2e-48fd-a486-574daf879096] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1030 19:46:56.079283  446736 system_pods.go:61] "kube-apiserver-no-preload-960512" [3662715f-dd2b-4522-aed3-3857e1104b8c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1030 19:46:56.079288  446736 system_pods.go:61] "kube-controller-manager-no-preload-960512" [cb6045a8-1703-4693-90bf-81c7c990cb38] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1030 19:46:56.079294  446736 system_pods.go:61] "kube-proxy-fxqqc" [58db3fab-21e3-41b7-99f9-46ba3081db97] Running
	I1030 19:46:56.079299  446736 system_pods.go:61] "kube-scheduler-no-preload-960512" [6e293c25-ba96-4213-b107-236a1b828918] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1030 19:46:56.079304  446736 system_pods.go:61] "metrics-server-6867b74b74-72bb5" [7734d879-b974-42fd-9610-7e81ee6cbc13] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:46:56.079307  446736 system_pods.go:61] "storage-provisioner" [d4637a77-26ab-4013-a705-08317c00dd3b] Running
	I1030 19:46:56.079313  446736 system_pods.go:74] duration metric: took 9.785027ms to wait for pod list to return data ...
	I1030 19:46:56.079327  446736 node_conditions.go:102] verifying NodePressure condition ...
	I1030 19:46:56.082617  446736 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 19:46:56.082644  446736 node_conditions.go:123] node cpu capacity is 2
	I1030 19:46:56.082658  446736 node_conditions.go:105] duration metric: took 3.325744ms to run NodePressure ...
	I1030 19:46:56.082680  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 19:46:56.353123  446736 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1030 19:46:56.357714  446736 kubeadm.go:739] kubelet initialised
	I1030 19:46:56.357740  446736 kubeadm.go:740] duration metric: took 4.581883ms waiting for restarted kubelet to initialise ...
	I1030 19:46:56.357755  446736 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:46:56.362687  446736 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:56.367124  446736 pod_ready.go:98] node "no-preload-960512" hosting pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.367153  446736 pod_ready.go:82] duration metric: took 4.443081ms for pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace to be "Ready" ...
	E1030 19:46:56.367165  446736 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-960512" hosting pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.367180  446736 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:56.371747  446736 pod_ready.go:98] node "no-preload-960512" hosting pod "etcd-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.371774  446736 pod_ready.go:82] duration metric: took 4.580967ms for pod "etcd-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	E1030 19:46:56.371785  446736 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-960512" hosting pod "etcd-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.371794  446736 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:56.375687  446736 pod_ready.go:98] node "no-preload-960512" hosting pod "kube-apiserver-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.375704  446736 pod_ready.go:82] duration metric: took 3.901023ms for pod "kube-apiserver-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	E1030 19:46:56.375712  446736 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-960512" hosting pod "kube-apiserver-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.375718  446736 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:56.472995  446736 pod_ready.go:98] node "no-preload-960512" hosting pod "kube-controller-manager-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.473036  446736 pod_ready.go:82] duration metric: took 97.300344ms for pod "kube-controller-manager-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	E1030 19:46:56.473047  446736 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-960512" hosting pod "kube-controller-manager-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.473056  446736 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-fxqqc" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:56.873717  446736 pod_ready.go:98] node "no-preload-960512" hosting pod "kube-proxy-fxqqc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.873749  446736 pod_ready.go:82] duration metric: took 400.680615ms for pod "kube-proxy-fxqqc" in "kube-system" namespace to be "Ready" ...
	E1030 19:46:56.873759  446736 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-960512" hosting pod "kube-proxy-fxqqc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:56.873765  446736 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:57.273361  446736 pod_ready.go:98] node "no-preload-960512" hosting pod "kube-scheduler-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:57.273392  446736 pod_ready.go:82] duration metric: took 399.61983ms for pod "kube-scheduler-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	E1030 19:46:57.273405  446736 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-960512" hosting pod "kube-scheduler-no-preload-960512" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:57.273415  446736 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace to be "Ready" ...
	I1030 19:46:57.674201  446736 pod_ready.go:98] node "no-preload-960512" hosting pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:57.674236  446736 pod_ready.go:82] duration metric: took 400.809663ms for pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace to be "Ready" ...
	E1030 19:46:57.674251  446736 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-960512" hosting pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:57.674260  446736 pod_ready.go:39] duration metric: took 1.31649331s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:46:57.674285  446736 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1030 19:46:57.687464  446736 ops.go:34] apiserver oom_adj: -16
	I1030 19:46:57.687489  446736 kubeadm.go:597] duration metric: took 10.820761471s to restartPrimaryControlPlane
	I1030 19:46:57.687498  446736 kubeadm.go:394] duration metric: took 10.873934509s to StartCluster
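	(The pod_ready.go lines above treat every system pod as not Ready while the node still reports "Ready":"False", and keep re-polling until each pod's Ready condition flips to True. A rough, hypothetical equivalent of that per-pod check done with kubectl from outside the cluster is sketched below; pod name, namespace, and the 4m0s budget come from the log, the poll interval is an assumption.)

    // podready_check.go: hypothetical sketch of polling one pod's Ready
    // condition with kubectl, roughly what pod_ready.go does via the API above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func podReady(name, namespace string) (bool, error) {
        out, err := exec.Command("kubectl", "get", "pod", name, "-n", namespace,
            "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
        if err != nil {
            return false, err
        }
        return strings.TrimSpace(string(out)) == "True", nil
    }

    func main() {
        deadline := time.Now().Add(4 * time.Minute) // matches the 4m0s wait in the log
        for time.Now().Before(deadline) {
            ready, err := podReady("coredns-7c65d6cfc9-6cdl4", "kube-system")
            if err == nil && ready {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second) // assumed poll interval
        }
        fmt.Println("timed out waiting for pod to become Ready")
    }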
	I1030 19:46:57.687514  446736 settings.go:142] acquiring lock: {Name:mk3778f95b8c1775512fd8244c173f81fde2f8a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:46:57.687586  446736 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 19:46:57.689255  446736 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/kubeconfig: {Name:mkad9ac9beb9742fc1a420e6d4412db8b65962e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:46:57.689496  446736 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.132 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 19:46:57.689574  446736 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1030 19:46:57.689683  446736 addons.go:69] Setting storage-provisioner=true in profile "no-preload-960512"
	I1030 19:46:57.689706  446736 addons.go:234] Setting addon storage-provisioner=true in "no-preload-960512"
	I1030 19:46:57.689708  446736 addons.go:69] Setting metrics-server=true in profile "no-preload-960512"
	W1030 19:46:57.689719  446736 addons.go:243] addon storage-provisioner should already be in state true
	I1030 19:46:57.689727  446736 addons.go:234] Setting addon metrics-server=true in "no-preload-960512"
	W1030 19:46:57.689737  446736 addons.go:243] addon metrics-server should already be in state true
	I1030 19:46:57.689755  446736 host.go:66] Checking if "no-preload-960512" exists ...
	I1030 19:46:57.689791  446736 config.go:182] Loaded profile config "no-preload-960512": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:46:57.689761  446736 host.go:66] Checking if "no-preload-960512" exists ...
	I1030 19:46:57.689707  446736 addons.go:69] Setting default-storageclass=true in profile "no-preload-960512"
	I1030 19:46:57.689912  446736 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-960512"
	I1030 19:46:57.690245  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:57.690258  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:57.690264  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:57.690297  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:57.690303  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:57.690322  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:57.691365  446736 out.go:177] * Verifying Kubernetes components...
	I1030 19:46:57.692941  446736 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:46:57.727794  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43995
	I1030 19:46:57.727877  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44207
	I1030 19:46:57.728127  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45127
	I1030 19:46:57.728276  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:57.728414  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:57.728517  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:57.728861  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:57.728879  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:57.729032  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:57.729053  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:57.729056  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:57.729064  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:57.729350  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:57.729429  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:57.729452  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:57.730008  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:57.730051  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:57.730124  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:57.730362  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetState
	I1030 19:46:57.731104  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:57.734295  446736 addons.go:234] Setting addon default-storageclass=true in "no-preload-960512"
	W1030 19:46:57.734316  446736 addons.go:243] addon default-storageclass should already be in state true
	I1030 19:46:57.734349  446736 host.go:66] Checking if "no-preload-960512" exists ...
	I1030 19:46:57.734742  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:57.734810  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:57.747185  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35811
	I1030 19:46:57.747680  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:57.748340  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:57.748360  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:57.748795  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:57.749029  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetState
	I1030 19:46:57.749722  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34849
	I1030 19:46:57.750318  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:57.754616  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45369
	I1030 19:46:57.754666  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:57.755024  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:57.755052  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:57.755555  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:57.755672  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:57.757159  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetState
	I1030 19:46:57.757166  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:57.757184  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:57.757504  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:57.757804  446736 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:46:57.758045  446736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:46:57.758089  446736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:46:57.759001  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:57.759300  446736 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 19:46:57.759313  446736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1030 19:46:57.759327  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:57.762134  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:57.762557  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:57.762582  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:57.762740  446736 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1030 19:46:54.485910  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:56.981415  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:54.939168  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:56.940263  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:57.762828  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:57.763037  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:57.763192  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:57.763344  446736 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa Username:docker}
	I1030 19:46:57.763936  446736 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1030 19:46:57.763953  446736 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1030 19:46:57.763970  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:57.766410  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:57.766771  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:57.766795  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:57.767034  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:57.767212  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:57.767385  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:57.767522  446736 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa Username:docker}
	I1030 19:46:57.776037  446736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45193
	I1030 19:46:57.776386  446736 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:46:57.776846  446736 main.go:141] libmachine: Using API Version  1
	I1030 19:46:57.776864  446736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:46:57.777184  446736 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:46:57.777339  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetState
	I1030 19:46:57.778829  446736 main.go:141] libmachine: (no-preload-960512) Calling .DriverName
	I1030 19:46:57.779118  446736 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1030 19:46:57.779138  446736 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1030 19:46:57.779156  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHHostname
	I1030 19:46:57.781325  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:57.781590  446736 main.go:141] libmachine: (no-preload-960512) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:5b:b2", ip: ""} in network mk-no-preload-960512: {Iface:virbr4 ExpiryTime:2024-10-30 20:46:17 +0000 UTC Type:0 Mac:52:54:00:71:5b:b2 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:no-preload-960512 Clientid:01:52:54:00:71:5b:b2}
	I1030 19:46:57.781615  446736 main.go:141] libmachine: (no-preload-960512) DBG | domain no-preload-960512 has defined IP address 192.168.72.132 and MAC address 52:54:00:71:5b:b2 in network mk-no-preload-960512
	I1030 19:46:57.781755  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHPort
	I1030 19:46:57.781895  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHKeyPath
	I1030 19:46:57.781995  446736 main.go:141] libmachine: (no-preload-960512) Calling .GetSSHUsername
	I1030 19:46:57.782088  446736 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa Username:docker}
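	(The sshutil.go lines above open SSH sessions to 192.168.72.132:22 as the docker user with the machine's id_rsa key; the scp and kubectl steps that follow run over those sessions. A bare-bones sketch using golang.org/x/crypto/ssh is shown below; the connection details are copied from the log, while the host-key handling and the test command are assumptions made to keep the example short.)

    // ssh_client.go: minimal sketch of the SSH client set up by sshutil.go above.
    // Host-key checking is skipped here only for brevity.
    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/19883-381834/.minikube/machines/no-preload-960512/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // assumption: no known_hosts check
        }
        client, err := ssh.Dial("tcp", "192.168.72.132:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()
        out, _ := session.CombinedOutput("sudo systemctl is-active kubelet") // illustrative command
        fmt.Printf("kubelet: %s", out)
    }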
	I1030 19:46:57.895549  446736 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:46:57.913030  446736 node_ready.go:35] waiting up to 6m0s for node "no-preload-960512" to be "Ready" ...
	I1030 19:46:58.008228  446736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 19:46:58.009206  446736 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1030 19:46:58.009222  446736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1030 19:46:58.034347  446736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1030 19:46:58.036620  446736 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1030 19:46:58.036646  446736 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1030 19:46:58.140489  446736 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1030 19:46:58.140522  446736 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1030 19:46:58.181145  446736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1030 19:46:59.403246  446736 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.368855241s)
	I1030 19:46:59.403317  446736 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.395049308s)
	I1030 19:46:59.403331  446736 main.go:141] libmachine: Making call to close driver server
	I1030 19:46:59.403340  446736 main.go:141] libmachine: (no-preload-960512) Calling .Close
	I1030 19:46:59.403356  446736 main.go:141] libmachine: Making call to close driver server
	I1030 19:46:59.403369  446736 main.go:141] libmachine: (no-preload-960512) Calling .Close
	I1030 19:46:59.403657  446736 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:46:59.403673  446736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:46:59.403681  446736 main.go:141] libmachine: Making call to close driver server
	I1030 19:46:59.403688  446736 main.go:141] libmachine: (no-preload-960512) Calling .Close
	I1030 19:46:59.403766  446736 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:46:59.403770  446736 main.go:141] libmachine: (no-preload-960512) DBG | Closing plugin on server side
	I1030 19:46:59.403778  446736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:46:59.403790  446736 main.go:141] libmachine: Making call to close driver server
	I1030 19:46:59.403796  446736 main.go:141] libmachine: (no-preload-960512) Calling .Close
	I1030 19:46:59.403939  446736 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:46:59.403954  446736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:46:59.404023  446736 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:46:59.404059  446736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:46:59.404071  446736 main.go:141] libmachine: (no-preload-960512) DBG | Closing plugin on server side
	I1030 19:46:59.411114  446736 main.go:141] libmachine: Making call to close driver server
	I1030 19:46:59.411136  446736 main.go:141] libmachine: (no-preload-960512) Calling .Close
	I1030 19:46:59.411365  446736 main.go:141] libmachine: (no-preload-960512) DBG | Closing plugin on server side
	I1030 19:46:59.411421  446736 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:46:59.411437  446736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:46:59.513065  446736 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.33186887s)
	I1030 19:46:59.513150  446736 main.go:141] libmachine: Making call to close driver server
	I1030 19:46:59.513168  446736 main.go:141] libmachine: (no-preload-960512) Calling .Close
	I1030 19:46:59.513455  446736 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:46:59.513481  446736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:46:59.513486  446736 main.go:141] libmachine: (no-preload-960512) DBG | Closing plugin on server side
	I1030 19:46:59.513491  446736 main.go:141] libmachine: Making call to close driver server
	I1030 19:46:59.513537  446736 main.go:141] libmachine: (no-preload-960512) Calling .Close
	I1030 19:46:59.513769  446736 main.go:141] libmachine: (no-preload-960512) DBG | Closing plugin on server side
	I1030 19:46:59.513797  446736 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:46:59.513809  446736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:46:59.513826  446736 addons.go:475] Verifying addon metrics-server=true in "no-preload-960512"
	I1030 19:46:59.516354  446736 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1030 19:46:59.517886  446736 addons.go:510] duration metric: took 1.828322965s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
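	(The addon enable sequence above copies the storage-provisioner, storageclass, and metrics-server manifests onto the node and applies them with the bundled kubectl under /var/lib/minikube. The sketch below simply wraps the same "sudo KUBECONFIG=... kubectl apply -f ..." invocation visible in the log in Go; it assumes it runs on the node itself, whereas minikube runs it over the SSH session shown earlier.)

    // apply_addons.go: sketch of the kubectl apply invocation from the log above,
    // run with the node-local kubeconfig. Paths are copied from the log lines.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        manifests := []string{
            "/etc/kubernetes/addons/metrics-apiservice.yaml",
            "/etc/kubernetes/addons/metrics-server-deployment.yaml",
            "/etc/kubernetes/addons/metrics-server-rbac.yaml",
            "/etc/kubernetes/addons/metrics-server-service.yaml",
        }
        // Mirrors: sudo KUBECONFIG=/var/lib/minikube/kubeconfig <kubectl> apply -f ... -f ...
        args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.31.2/kubectl", "apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        out, err := exec.Command("sudo", args...).CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("apply failed:", err)
        }
    }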
	I1030 19:46:59.916839  446736 node_ready.go:53] node "no-preload-960512" has status "Ready":"False"
	I1030 19:46:57.143829  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:57.644245  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:58.144327  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:58.644684  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:59.144712  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:59.644799  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:00.144222  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:00.644111  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:01.144268  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:01.644631  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:46:58.982694  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:00.984014  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:46:59.439638  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:01.939460  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:02.416750  446736 node_ready.go:53] node "no-preload-960512" has status "Ready":"False"
	I1030 19:47:03.416443  446736 node_ready.go:49] node "no-preload-960512" has status "Ready":"True"
	I1030 19:47:03.416469  446736 node_ready.go:38] duration metric: took 5.503404181s for node "no-preload-960512" to be "Ready" ...
	I1030 19:47:03.416479  446736 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:47:03.422219  446736 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:02.143881  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:02.644208  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:03.144411  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:03.643948  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:04.144028  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:04.644179  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:05.144791  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:05.643983  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:06.143859  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:06.644436  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:03.481239  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:05.481271  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:07.482108  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:04.439288  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:06.439454  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:05.428589  446736 pod_ready.go:103] pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:07.430975  446736 pod_ready.go:103] pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:09.928214  446736 pod_ready.go:103] pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:07.144765  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:07.644280  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:08.144381  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:08.644099  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:09.144129  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:09.643864  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:10.144105  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:10.643752  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:11.144135  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:11.644172  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
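	(Process 447486 above, the old-k8s-version profile, has been running "sudo pgrep -xnf kube-apiserver.*minikube.*" roughly every half second, waiting for a kube-apiserver process to appear. The hypothetical sketch below shows that polling loop directly; the pgrep flags and the ~500ms cadence come from the log, the overall timeout is assumed, and it runs locally rather than over SSH.)

    // apiserver_pgrep.go: sketch of the pgrep polling loop visible in the
    // ssh_runner.go lines above.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // assumed overall timeout
        for time.Now().Before(deadline) {
            // -x: pattern must match the whole command line (with -f),
            // -n: newest matching process, -f: match the full command line.
            err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
            if err == nil {
                fmt.Println("kube-apiserver process found")
                return
            }
            time.Sleep(500 * time.Millisecond) // matches the cadence in the log
        }
        fmt.Println("kube-apiserver never appeared")
    }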
	I1030 19:47:09.982150  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:12.481265  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:08.939357  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:10.940087  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:10.430572  446736 pod_ready.go:93] pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace has status "Ready":"True"
	I1030 19:47:10.430598  446736 pod_ready.go:82] duration metric: took 7.008352985s for pod "coredns-7c65d6cfc9-6cdl4" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.430610  446736 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.436673  446736 pod_ready.go:93] pod "etcd-no-preload-960512" in "kube-system" namespace has status "Ready":"True"
	I1030 19:47:10.436699  446736 pod_ready.go:82] duration metric: took 6.082545ms for pod "etcd-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.436711  446736 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.442262  446736 pod_ready.go:93] pod "kube-apiserver-no-preload-960512" in "kube-system" namespace has status "Ready":"True"
	I1030 19:47:10.442282  446736 pod_ready.go:82] duration metric: took 5.563816ms for pod "kube-apiserver-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.442292  446736 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.446170  446736 pod_ready.go:93] pod "kube-controller-manager-no-preload-960512" in "kube-system" namespace has status "Ready":"True"
	I1030 19:47:10.446189  446736 pod_ready.go:82] duration metric: took 3.890123ms for pod "kube-controller-manager-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.446198  446736 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fxqqc" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.450190  446736 pod_ready.go:93] pod "kube-proxy-fxqqc" in "kube-system" namespace has status "Ready":"True"
	I1030 19:47:10.450216  446736 pod_ready.go:82] duration metric: took 4.011125ms for pod "kube-proxy-fxqqc" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.450226  446736 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.826537  446736 pod_ready.go:93] pod "kube-scheduler-no-preload-960512" in "kube-system" namespace has status "Ready":"True"
	I1030 19:47:10.826572  446736 pod_ready.go:82] duration metric: took 376.338504ms for pod "kube-scheduler-no-preload-960512" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:10.826587  446736 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace to be "Ready" ...
	I1030 19:47:12.834756  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:12.144391  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:12.644441  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:13.143916  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:13.644779  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:14.144680  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:14.644634  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:15.144050  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:15.644738  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:16.143957  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:16.144037  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:16.184282  447486 cri.go:89] found id: ""
	I1030 19:47:16.184310  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.184320  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:16.184327  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:16.184403  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:16.225359  447486 cri.go:89] found id: ""
	I1030 19:47:16.225388  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.225397  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:16.225403  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:16.225471  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:16.260591  447486 cri.go:89] found id: ""
	I1030 19:47:16.260625  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.260635  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:16.260641  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:16.260695  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:16.299562  447486 cri.go:89] found id: ""
	I1030 19:47:16.299591  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.299602  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:16.299609  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:16.299682  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:16.334753  447486 cri.go:89] found id: ""
	I1030 19:47:16.334781  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.334789  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:16.334795  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:16.334877  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:16.371588  447486 cri.go:89] found id: ""
	I1030 19:47:16.371619  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.371628  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:16.371634  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:16.371689  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:16.406668  447486 cri.go:89] found id: ""
	I1030 19:47:16.406699  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.406710  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:16.406718  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:16.406786  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:16.443050  447486 cri.go:89] found id: ""
	I1030 19:47:16.443081  447486 logs.go:282] 0 containers: []
	W1030 19:47:16.443096  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:16.443109  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:16.443125  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:16.492898  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:16.492936  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:16.506310  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:16.506343  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:16.637629  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:16.637660  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:16.637677  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:16.709581  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:16.709621  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
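	(Because no kube-apiserver process ever appears, the run above falls back to collecting diagnostics: it asks crictl for containers matching each control-plane component — all of which return empty — then gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. The sketch below reproduces just the crictl probe from those lines; the command and flags are taken straight from the log, the loop around them is an assumption.)

    // cri_probe.go: sketch of the "sudo crictl ps -a --quiet --name=<component>"
    // probes from cri.go above, reporting which components have containers.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
        for _, c := range components {
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+c).Output()
            ids := strings.Fields(string(out))
            if err != nil || len(ids) == 0 {
                fmt.Printf("%s: no containers found\n", c) // matches logs.go:284 above
                continue
            }
            fmt.Printf("%s: %d container(s)\n", c, len(ids))
        }
    }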
	I1030 19:47:14.481660  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:16.981807  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:13.438777  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:15.439457  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:17.939606  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:15.335280  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:17.833216  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:19.833320  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:19.253501  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:19.267200  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:19.267276  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:19.303608  447486 cri.go:89] found id: ""
	I1030 19:47:19.303641  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.303651  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:19.303658  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:19.303711  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:19.341311  447486 cri.go:89] found id: ""
	I1030 19:47:19.341343  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.341354  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:19.341363  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:19.341427  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:19.376949  447486 cri.go:89] found id: ""
	I1030 19:47:19.376977  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.376987  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:19.376996  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:19.377075  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:19.414164  447486 cri.go:89] found id: ""
	I1030 19:47:19.414197  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.414209  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:19.414218  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:19.414308  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:19.450637  447486 cri.go:89] found id: ""
	I1030 19:47:19.450671  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.450683  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:19.450692  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:19.450761  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:19.485315  447486 cri.go:89] found id: ""
	I1030 19:47:19.485345  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.485355  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:19.485364  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:19.485427  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:19.519873  447486 cri.go:89] found id: ""
	I1030 19:47:19.519901  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.519911  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:19.519919  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:19.519982  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:19.555168  447486 cri.go:89] found id: ""
	I1030 19:47:19.555198  447486 logs.go:282] 0 containers: []
	W1030 19:47:19.555211  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:19.555223  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:19.555239  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:19.607227  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:19.607265  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:19.621465  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:19.621498  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:19.700837  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:19.700869  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:19.700882  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:19.774428  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:19.774468  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:18.982345  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:21.482165  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:19.940122  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:22.439405  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:22.333449  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:24.833942  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:22.314410  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:22.327998  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:22.328083  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:22.365583  447486 cri.go:89] found id: ""
	I1030 19:47:22.365611  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.365622  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:22.365631  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:22.365694  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:22.398964  447486 cri.go:89] found id: ""
	I1030 19:47:22.398996  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.399008  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:22.399016  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:22.399092  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:22.435132  447486 cri.go:89] found id: ""
	I1030 19:47:22.435169  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.435181  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:22.435188  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:22.435252  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:22.471510  447486 cri.go:89] found id: ""
	I1030 19:47:22.471544  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.471557  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:22.471574  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:22.471630  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:22.509611  447486 cri.go:89] found id: ""
	I1030 19:47:22.509639  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.509647  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:22.509653  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:22.509707  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:22.546502  447486 cri.go:89] found id: ""
	I1030 19:47:22.546539  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.546552  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:22.546560  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:22.546630  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:22.584560  447486 cri.go:89] found id: ""
	I1030 19:47:22.584593  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.584605  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:22.584613  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:22.584676  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:22.621421  447486 cri.go:89] found id: ""
	I1030 19:47:22.621461  447486 logs.go:282] 0 containers: []
	W1030 19:47:22.621474  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:22.621486  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:22.621505  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:22.634998  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:22.635038  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:22.711002  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:22.711028  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:22.711047  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:22.790673  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:22.790712  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:22.831804  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:22.831851  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:25.386915  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:25.399854  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:25.399954  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:25.438346  447486 cri.go:89] found id: ""
	I1030 19:47:25.438381  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.438406  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:25.438416  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:25.438500  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:25.474888  447486 cri.go:89] found id: ""
	I1030 19:47:25.474915  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.474924  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:25.474931  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:25.474994  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:25.511925  447486 cri.go:89] found id: ""
	I1030 19:47:25.511955  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.511966  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:25.511973  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:25.512038  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:25.551027  447486 cri.go:89] found id: ""
	I1030 19:47:25.551058  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.551067  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:25.551073  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:25.551144  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:25.584736  447486 cri.go:89] found id: ""
	I1030 19:47:25.584764  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.584773  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:25.584779  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:25.584847  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:25.632765  447486 cri.go:89] found id: ""
	I1030 19:47:25.632798  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.632810  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:25.632818  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:25.632893  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:25.682501  447486 cri.go:89] found id: ""
	I1030 19:47:25.682528  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.682536  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:25.682543  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:25.682591  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:25.728306  447486 cri.go:89] found id: ""
	I1030 19:47:25.728340  447486 logs.go:282] 0 containers: []
	W1030 19:47:25.728352  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:25.728365  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:25.728397  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:25.781908  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:25.781944  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:25.795864  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:25.795899  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:25.868350  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:25.868378  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:25.868392  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:25.944244  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:25.944277  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:23.981016  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:25.982186  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:24.942113  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:27.438568  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:27.333623  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:29.334460  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:28.488216  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:28.501481  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:28.501558  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:28.536808  447486 cri.go:89] found id: ""
	I1030 19:47:28.536838  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.536849  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:28.536857  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:28.536923  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:28.571819  447486 cri.go:89] found id: ""
	I1030 19:47:28.571855  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.571867  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:28.571885  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:28.571966  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:28.605532  447486 cri.go:89] found id: ""
	I1030 19:47:28.605571  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.605582  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:28.605610  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:28.605682  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:28.642108  447486 cri.go:89] found id: ""
	I1030 19:47:28.642140  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.642152  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:28.642159  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:28.642234  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:28.680036  447486 cri.go:89] found id: ""
	I1030 19:47:28.680065  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.680078  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:28.680086  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:28.680151  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:28.716135  447486 cri.go:89] found id: ""
	I1030 19:47:28.716162  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.716171  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:28.716177  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:28.716238  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:28.752364  447486 cri.go:89] found id: ""
	I1030 19:47:28.752398  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.752406  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:28.752413  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:28.752478  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:28.788396  447486 cri.go:89] found id: ""
	I1030 19:47:28.788434  447486 logs.go:282] 0 containers: []
	W1030 19:47:28.788447  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:28.788461  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:28.788476  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:28.841560  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:28.841595  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:28.856134  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:28.856178  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:28.930463  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:28.930507  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:28.930527  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:29.013743  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:29.013795  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:31.557942  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:31.573562  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:31.573654  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:31.625349  447486 cri.go:89] found id: ""
	I1030 19:47:31.625378  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.625386  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:31.625392  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:31.625442  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:31.689536  447486 cri.go:89] found id: ""
	I1030 19:47:31.689566  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.689574  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:31.689581  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:31.689632  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:31.723758  447486 cri.go:89] found id: ""
	I1030 19:47:31.723794  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.723806  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:31.723814  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:31.723890  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:31.762671  447486 cri.go:89] found id: ""
	I1030 19:47:31.762698  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.762707  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:31.762713  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:31.762761  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:31.797658  447486 cri.go:89] found id: ""
	I1030 19:47:31.797686  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.797694  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:31.797702  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:31.797792  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:28.481158  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:30.981477  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:32.981593  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:29.439072  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:31.940019  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:31.833540  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:34.334678  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:31.832186  447486 cri.go:89] found id: ""
	I1030 19:47:31.832217  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.832228  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:31.832236  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:31.832298  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:31.866820  447486 cri.go:89] found id: ""
	I1030 19:47:31.866853  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.866866  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:31.866875  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:31.866937  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:31.901888  447486 cri.go:89] found id: ""
	I1030 19:47:31.901913  447486 logs.go:282] 0 containers: []
	W1030 19:47:31.901922  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:31.901932  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:31.901944  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:31.992343  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:31.992380  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:32.030519  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:32.030559  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:32.084442  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:32.084478  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:32.098919  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:32.098954  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:32.171034  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:34.671243  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:34.685879  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:34.685972  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:34.720657  447486 cri.go:89] found id: ""
	I1030 19:47:34.720686  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.720694  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:34.720700  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:34.720757  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:34.759571  447486 cri.go:89] found id: ""
	I1030 19:47:34.759602  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.759615  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:34.759624  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:34.759685  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:34.795273  447486 cri.go:89] found id: ""
	I1030 19:47:34.795313  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.795322  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:34.795329  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:34.795450  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:34.828999  447486 cri.go:89] found id: ""
	I1030 19:47:34.829035  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.829047  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:34.829054  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:34.829158  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:34.865620  447486 cri.go:89] found id: ""
	I1030 19:47:34.865661  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.865674  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:34.865682  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:34.865753  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:34.900768  447486 cri.go:89] found id: ""
	I1030 19:47:34.900801  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.900812  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:34.900820  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:34.900889  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:34.945023  447486 cri.go:89] found id: ""
	I1030 19:47:34.945048  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.945057  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:34.945063  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:34.945118  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:34.980458  447486 cri.go:89] found id: ""
	I1030 19:47:34.980483  447486 logs.go:282] 0 containers: []
	W1030 19:47:34.980492  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:34.980501  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:34.980514  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:35.052570  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:35.052597  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:35.052613  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:35.133825  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:35.133869  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:35.176016  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:35.176063  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:35.228866  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:35.228903  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:34.982702  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:37.481103  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:34.438712  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:36.938856  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:36.837275  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:39.332612  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:37.743408  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:37.757472  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:37.757547  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:37.794818  447486 cri.go:89] found id: ""
	I1030 19:47:37.794847  447486 logs.go:282] 0 containers: []
	W1030 19:47:37.794856  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:37.794862  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:37.794928  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:37.830025  447486 cri.go:89] found id: ""
	I1030 19:47:37.830064  447486 logs.go:282] 0 containers: []
	W1030 19:47:37.830077  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:37.830086  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:37.830150  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:37.864862  447486 cri.go:89] found id: ""
	I1030 19:47:37.864893  447486 logs.go:282] 0 containers: []
	W1030 19:47:37.864902  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:37.864908  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:37.864958  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:37.901650  447486 cri.go:89] found id: ""
	I1030 19:47:37.901699  447486 logs.go:282] 0 containers: []
	W1030 19:47:37.901713  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:37.901722  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:37.901780  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:37.935824  447486 cri.go:89] found id: ""
	I1030 19:47:37.935854  447486 logs.go:282] 0 containers: []
	W1030 19:47:37.935862  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:37.935868  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:37.935930  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:37.972774  447486 cri.go:89] found id: ""
	I1030 19:47:37.972805  447486 logs.go:282] 0 containers: []
	W1030 19:47:37.972813  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:37.972819  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:37.972868  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:38.007815  447486 cri.go:89] found id: ""
	I1030 19:47:38.007845  447486 logs.go:282] 0 containers: []
	W1030 19:47:38.007856  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:38.007864  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:38.007931  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:38.042525  447486 cri.go:89] found id: ""
	I1030 19:47:38.042559  447486 logs.go:282] 0 containers: []
	W1030 19:47:38.042571  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:38.042584  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:38.042600  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:38.122022  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:38.122048  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:38.122065  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:38.200534  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:38.200575  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:38.240118  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:38.240154  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:38.291936  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:38.291976  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:40.806105  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:40.821268  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:40.821343  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:40.857151  447486 cri.go:89] found id: ""
	I1030 19:47:40.857186  447486 logs.go:282] 0 containers: []
	W1030 19:47:40.857198  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:40.857207  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:40.857266  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:40.893603  447486 cri.go:89] found id: ""
	I1030 19:47:40.893639  447486 logs.go:282] 0 containers: []
	W1030 19:47:40.893648  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:40.893654  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:40.893720  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:40.935294  447486 cri.go:89] found id: ""
	I1030 19:47:40.935330  447486 logs.go:282] 0 containers: []
	W1030 19:47:40.935342  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:40.935349  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:40.935418  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:40.971509  447486 cri.go:89] found id: ""
	I1030 19:47:40.971536  447486 logs.go:282] 0 containers: []
	W1030 19:47:40.971544  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:40.971550  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:40.971610  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:41.009895  447486 cri.go:89] found id: ""
	I1030 19:47:41.009932  447486 logs.go:282] 0 containers: []
	W1030 19:47:41.009941  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:41.009948  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:41.010008  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:41.045170  447486 cri.go:89] found id: ""
	I1030 19:47:41.045208  447486 logs.go:282] 0 containers: []
	W1030 19:47:41.045221  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:41.045229  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:41.045288  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:41.077654  447486 cri.go:89] found id: ""
	I1030 19:47:41.077684  447486 logs.go:282] 0 containers: []
	W1030 19:47:41.077695  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:41.077704  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:41.077771  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:41.111509  447486 cri.go:89] found id: ""
	I1030 19:47:41.111543  447486 logs.go:282] 0 containers: []
	W1030 19:47:41.111552  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:41.111562  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:41.111574  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:41.164939  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:41.164976  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:41.178512  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:41.178589  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:41.258783  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:41.258813  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:41.258832  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:41.338192  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:41.338230  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:39.481210  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:41.481439  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:38.938987  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:40.941386  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:41.333705  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:43.833502  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:43.878155  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:43.892376  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:43.892452  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:43.930556  447486 cri.go:89] found id: ""
	I1030 19:47:43.930594  447486 logs.go:282] 0 containers: []
	W1030 19:47:43.930606  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:43.930614  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:43.930679  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:43.970588  447486 cri.go:89] found id: ""
	I1030 19:47:43.970619  447486 logs.go:282] 0 containers: []
	W1030 19:47:43.970630  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:43.970638  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:43.970706  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:44.005467  447486 cri.go:89] found id: ""
	I1030 19:47:44.005497  447486 logs.go:282] 0 containers: []
	W1030 19:47:44.005506  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:44.005512  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:44.005573  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:44.039126  447486 cri.go:89] found id: ""
	I1030 19:47:44.039164  447486 logs.go:282] 0 containers: []
	W1030 19:47:44.039173  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:44.039179  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:44.039239  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:44.072961  447486 cri.go:89] found id: ""
	I1030 19:47:44.072994  447486 logs.go:282] 0 containers: []
	W1030 19:47:44.073006  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:44.073014  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:44.073109  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:44.105864  447486 cri.go:89] found id: ""
	I1030 19:47:44.105891  447486 logs.go:282] 0 containers: []
	W1030 19:47:44.105900  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:44.105907  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:44.105956  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:44.138198  447486 cri.go:89] found id: ""
	I1030 19:47:44.138240  447486 logs.go:282] 0 containers: []
	W1030 19:47:44.138250  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:44.138264  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:44.138331  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:44.172529  447486 cri.go:89] found id: ""
	I1030 19:47:44.172558  447486 logs.go:282] 0 containers: []
	W1030 19:47:44.172567  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:44.172577  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:44.172594  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:44.248215  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:44.248254  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:44.286169  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:44.286202  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:44.341129  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:44.341167  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:44.354570  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:44.354597  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:44.427790  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:43.481483  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:45.482271  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:47.981312  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:43.440759  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:45.938783  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:47.940512  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:46.332448  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:48.333216  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:46.928728  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:46.943068  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:46.943154  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:46.978385  447486 cri.go:89] found id: ""
	I1030 19:47:46.978416  447486 logs.go:282] 0 containers: []
	W1030 19:47:46.978428  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:46.978436  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:46.978522  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:47.020413  447486 cri.go:89] found id: ""
	I1030 19:47:47.020457  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.020469  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:47.020476  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:47.020547  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:47.061492  447486 cri.go:89] found id: ""
	I1030 19:47:47.061526  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.061538  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:47.061547  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:47.061611  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:47.097621  447486 cri.go:89] found id: ""
	I1030 19:47:47.097659  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.097670  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:47.097679  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:47.097739  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:47.131740  447486 cri.go:89] found id: ""
	I1030 19:47:47.131769  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.131779  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:47.131785  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:47.131856  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:47.167623  447486 cri.go:89] found id: ""
	I1030 19:47:47.167661  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.167674  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:47.167682  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:47.167746  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:47.202299  447486 cri.go:89] found id: ""
	I1030 19:47:47.202328  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.202337  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:47.202344  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:47.202401  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:47.236652  447486 cri.go:89] found id: ""
	I1030 19:47:47.236686  447486 logs.go:282] 0 containers: []
	W1030 19:47:47.236695  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:47.236704  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:47.236716  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:47.289700  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:47.289740  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:47.304929  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:47.304964  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:47.374811  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:47.374842  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:47.374858  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:47.449161  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:47.449196  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:49.989730  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:50.002741  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:50.002821  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:50.037602  447486 cri.go:89] found id: ""
	I1030 19:47:50.037636  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.037647  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:50.037655  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:50.037724  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:50.071346  447486 cri.go:89] found id: ""
	I1030 19:47:50.071383  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.071395  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:50.071405  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:50.071473  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:50.106657  447486 cri.go:89] found id: ""
	I1030 19:47:50.106698  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.106711  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:50.106719  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:50.106783  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:50.140974  447486 cri.go:89] found id: ""
	I1030 19:47:50.141012  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.141025  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:50.141032  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:50.141105  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:50.177715  447486 cri.go:89] found id: ""
	I1030 19:47:50.177748  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.177756  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:50.177763  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:50.177824  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:50.212234  447486 cri.go:89] found id: ""
	I1030 19:47:50.212263  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.212272  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:50.212278  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:50.212337  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:50.250791  447486 cri.go:89] found id: ""
	I1030 19:47:50.250826  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.250835  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:50.250842  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:50.250908  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:50.288575  447486 cri.go:89] found id: ""
	I1030 19:47:50.288607  447486 logs.go:282] 0 containers: []
	W1030 19:47:50.288615  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:50.288628  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:50.288643  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:50.358015  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:50.358039  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:50.358054  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:50.433194  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:50.433235  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:50.473485  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:50.473519  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:50.523581  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:50.523618  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:49.981614  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:51.982079  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:50.439717  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:52.940170  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:50.333498  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:52.832848  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:54.833689  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:53.038393  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:53.052835  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:53.052910  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:53.088797  447486 cri.go:89] found id: ""
	I1030 19:47:53.088828  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.088837  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:53.088843  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:53.088897  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:53.124627  447486 cri.go:89] found id: ""
	I1030 19:47:53.124659  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.124668  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:53.124674  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:53.124724  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:53.159127  447486 cri.go:89] found id: ""
	I1030 19:47:53.159163  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.159175  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:53.159183  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:53.159244  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:53.191770  447486 cri.go:89] found id: ""
	I1030 19:47:53.191801  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.191810  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:53.191817  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:53.191885  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:53.227727  447486 cri.go:89] found id: ""
	I1030 19:47:53.227761  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.227774  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:53.227781  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:53.227842  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:53.262937  447486 cri.go:89] found id: ""
	I1030 19:47:53.262969  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.262981  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:53.262989  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:53.263060  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:53.296070  447486 cri.go:89] found id: ""
	I1030 19:47:53.296113  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.296124  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:53.296133  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:53.296197  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:53.332628  447486 cri.go:89] found id: ""
	I1030 19:47:53.332663  447486 logs.go:282] 0 containers: []
	W1030 19:47:53.332674  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:53.332687  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:53.332702  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:53.385004  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:53.385046  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:53.400139  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:53.400185  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:53.477792  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:53.477826  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:53.477858  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:53.553145  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:53.553186  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:56.094454  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:56.107827  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:56.107900  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:56.141701  447486 cri.go:89] found id: ""
	I1030 19:47:56.141739  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.141751  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:56.141763  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:56.141831  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:56.179973  447486 cri.go:89] found id: ""
	I1030 19:47:56.180003  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.180016  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:56.180023  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:56.180099  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:56.220456  447486 cri.go:89] found id: ""
	I1030 19:47:56.220486  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.220496  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:56.220503  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:56.220578  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:56.259699  447486 cri.go:89] found id: ""
	I1030 19:47:56.259727  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.259736  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:56.259741  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:56.259791  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:56.302726  447486 cri.go:89] found id: ""
	I1030 19:47:56.302762  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.302775  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:56.302783  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:56.302850  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:56.339791  447486 cri.go:89] found id: ""
	I1030 19:47:56.339819  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.339828  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:56.339834  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:56.339889  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:56.381291  447486 cri.go:89] found id: ""
	I1030 19:47:56.381325  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.381337  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:56.381345  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:56.381401  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:56.417150  447486 cri.go:89] found id: ""
	I1030 19:47:56.417182  447486 logs.go:282] 0 containers: []
	W1030 19:47:56.417194  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:56.417207  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:56.417227  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:56.466963  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:56.467005  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:56.481528  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:56.481557  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:56.554843  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:56.554872  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:56.554887  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:56.635798  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:56.635846  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:54.480601  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:56.481475  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:55.439618  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:57.940438  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:57.333560  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:59.337314  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:47:59.179829  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:47:59.193083  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:47:59.193160  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:47:59.231253  447486 cri.go:89] found id: ""
	I1030 19:47:59.231288  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.231302  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:47:59.231311  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:47:59.231382  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:47:59.265982  447486 cri.go:89] found id: ""
	I1030 19:47:59.266013  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.266022  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:47:59.266028  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:47:59.266090  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:47:59.303724  447486 cri.go:89] found id: ""
	I1030 19:47:59.303761  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.303773  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:47:59.303781  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:47:59.303848  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:47:59.342137  447486 cri.go:89] found id: ""
	I1030 19:47:59.342163  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.342172  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:47:59.342180  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:47:59.342246  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:47:59.382652  447486 cri.go:89] found id: ""
	I1030 19:47:59.382684  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.382693  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:47:59.382700  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:47:59.382761  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:47:59.422428  447486 cri.go:89] found id: ""
	I1030 19:47:59.422454  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.422463  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:47:59.422469  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:47:59.422539  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:47:59.464047  447486 cri.go:89] found id: ""
	I1030 19:47:59.464079  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.464089  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:47:59.464095  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:47:59.464146  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:47:59.500658  447486 cri.go:89] found id: ""
	I1030 19:47:59.500693  447486 logs.go:282] 0 containers: []
	W1030 19:47:59.500705  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:47:59.500716  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:47:59.500732  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:47:59.554634  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:47:59.554679  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:47:59.567956  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:47:59.567986  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:47:59.646305  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:47:59.646332  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:47:59.646349  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:47:59.730008  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:47:59.730052  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:47:58.486516  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:00.982184  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:00.439220  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:02.439945  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:01.832883  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:03.834027  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:02.274141  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:02.287246  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:02.287320  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:02.322166  447486 cri.go:89] found id: ""
	I1030 19:48:02.322320  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.322336  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:02.322346  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:02.322421  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:02.358101  447486 cri.go:89] found id: ""
	I1030 19:48:02.358131  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.358140  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:02.358146  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:02.358209  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:02.394812  447486 cri.go:89] found id: ""
	I1030 19:48:02.394898  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.394915  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:02.394924  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:02.394990  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:02.429128  447486 cri.go:89] found id: ""
	I1030 19:48:02.429165  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.429177  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:02.429186  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:02.429358  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:02.465878  447486 cri.go:89] found id: ""
	I1030 19:48:02.465907  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.465915  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:02.465921  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:02.465973  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:02.502758  447486 cri.go:89] found id: ""
	I1030 19:48:02.502794  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.502805  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:02.502813  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:02.502879  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:02.540111  447486 cri.go:89] found id: ""
	I1030 19:48:02.540142  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.540152  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:02.540158  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:02.540222  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:02.574728  447486 cri.go:89] found id: ""
	I1030 19:48:02.574762  447486 logs.go:282] 0 containers: []
	W1030 19:48:02.574774  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:02.574787  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:02.574804  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:02.613333  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:02.613374  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:02.664970  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:02.665013  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:02.679594  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:02.679626  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:02.744184  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:02.744208  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:02.744222  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:05.326826  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:05.340166  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:05.340232  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:05.376742  447486 cri.go:89] found id: ""
	I1030 19:48:05.376774  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.376789  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:05.376795  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:05.376865  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:05.413981  447486 cri.go:89] found id: ""
	I1030 19:48:05.414026  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.414039  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:05.414047  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:05.414121  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:05.449811  447486 cri.go:89] found id: ""
	I1030 19:48:05.449842  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.449854  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:05.449862  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:05.449925  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:05.502576  447486 cri.go:89] found id: ""
	I1030 19:48:05.502610  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.502622  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:05.502630  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:05.502721  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:05.536747  447486 cri.go:89] found id: ""
	I1030 19:48:05.536778  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.536787  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:05.536793  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:05.536857  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:05.570308  447486 cri.go:89] found id: ""
	I1030 19:48:05.570335  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.570344  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:05.570353  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:05.570420  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:05.605006  447486 cri.go:89] found id: ""
	I1030 19:48:05.605037  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.605048  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:05.605054  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:05.605109  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:05.638651  447486 cri.go:89] found id: ""
	I1030 19:48:05.638681  447486 logs.go:282] 0 containers: []
	W1030 19:48:05.638693  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:05.638705  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:05.638720  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:05.690734  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:05.690769  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:05.704561  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:05.704588  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:05.779426  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:05.779448  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:05.779471  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:05.866320  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:05.866355  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:03.481614  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:05.482428  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:07.981875  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:04.939485  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:07.438925  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:06.334094  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:08.834525  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:08.409454  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:08.423687  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:08.423767  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:08.463554  447486 cri.go:89] found id: ""
	I1030 19:48:08.463581  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.463591  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:08.463597  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:08.463654  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:08.500159  447486 cri.go:89] found id: ""
	I1030 19:48:08.500186  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.500195  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:08.500200  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:08.500253  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:08.535670  447486 cri.go:89] found id: ""
	I1030 19:48:08.535701  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.535710  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:08.535717  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:08.535785  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:08.572921  447486 cri.go:89] found id: ""
	I1030 19:48:08.572958  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.572968  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:08.572975  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:08.573052  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:08.610873  447486 cri.go:89] found id: ""
	I1030 19:48:08.610908  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.610918  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:08.610924  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:08.610978  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:08.645430  447486 cri.go:89] found id: ""
	I1030 19:48:08.645458  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.645466  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:08.645475  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:08.645528  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:08.681212  447486 cri.go:89] found id: ""
	I1030 19:48:08.681246  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.681258  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:08.681266  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:08.681332  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:08.716619  447486 cri.go:89] found id: ""
	I1030 19:48:08.716651  447486 logs.go:282] 0 containers: []
	W1030 19:48:08.716661  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:08.716671  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:08.716682  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:08.794090  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:08.794134  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:08.833209  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:08.833251  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:08.884781  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:08.884817  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:08.898556  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:08.898586  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:08.967713  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:11.468230  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:11.482593  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:11.482660  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:11.518191  447486 cri.go:89] found id: ""
	I1030 19:48:11.518225  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.518235  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:11.518242  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:11.518295  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:11.557199  447486 cri.go:89] found id: ""
	I1030 19:48:11.557229  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.557237  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:11.557252  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:11.557323  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:11.595605  447486 cri.go:89] found id: ""
	I1030 19:48:11.595638  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.595650  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:11.595664  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:11.595732  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:11.634253  447486 cri.go:89] found id: ""
	I1030 19:48:11.634281  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.634295  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:11.634301  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:11.634358  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:11.671138  447486 cri.go:89] found id: ""
	I1030 19:48:11.671167  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.671176  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:11.671183  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:11.671238  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:11.707202  447486 cri.go:89] found id: ""
	I1030 19:48:11.707228  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.707237  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:11.707243  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:11.707302  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:11.745514  447486 cri.go:89] found id: ""
	I1030 19:48:11.745549  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.745561  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:11.745570  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:11.745640  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:11.781403  447486 cri.go:89] found id: ""
	I1030 19:48:11.781438  447486 logs.go:282] 0 containers: []
	W1030 19:48:11.781449  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:11.781458  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:11.781471  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:10.486349  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:12.980881  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:09.440261  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:11.938439  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:11.332911  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:13.334382  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:11.832934  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:11.832972  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:11.853498  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:11.853545  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:11.949365  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:11.949389  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:11.949405  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:12.033776  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:12.033823  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:14.579536  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:14.593497  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:14.593579  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:14.627853  447486 cri.go:89] found id: ""
	I1030 19:48:14.627886  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.627895  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:14.627902  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:14.627953  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:14.662356  447486 cri.go:89] found id: ""
	I1030 19:48:14.662386  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.662398  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:14.662406  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:14.662481  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:14.699334  447486 cri.go:89] found id: ""
	I1030 19:48:14.699370  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.699382  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:14.699390  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:14.699457  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:14.733884  447486 cri.go:89] found id: ""
	I1030 19:48:14.733924  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.733937  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:14.733946  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:14.734025  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:14.775208  447486 cri.go:89] found id: ""
	I1030 19:48:14.775240  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.775249  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:14.775256  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:14.775315  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:14.809663  447486 cri.go:89] found id: ""
	I1030 19:48:14.809695  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.809704  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:14.809711  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:14.809778  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:14.844963  447486 cri.go:89] found id: ""
	I1030 19:48:14.844996  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.845006  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:14.845014  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:14.845084  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:14.881236  447486 cri.go:89] found id: ""
	I1030 19:48:14.881273  447486 logs.go:282] 0 containers: []
	W1030 19:48:14.881283  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:14.881293  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:14.881305  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:14.933792  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:14.933830  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:14.948038  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:14.948065  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:15.023497  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:15.023519  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:15.023532  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:15.105682  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:15.105741  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:14.980949  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:16.981063  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:13.940399  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:16.438545  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:15.834158  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:18.332452  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:17.646238  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:17.665366  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:17.665455  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:17.707729  447486 cri.go:89] found id: ""
	I1030 19:48:17.707783  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.707796  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:17.707805  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:17.707883  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:17.759922  447486 cri.go:89] found id: ""
	I1030 19:48:17.759959  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.759972  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:17.759980  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:17.760049  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:17.807635  447486 cri.go:89] found id: ""
	I1030 19:48:17.807671  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.807683  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:17.807695  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:17.807770  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:17.844205  447486 cri.go:89] found id: ""
	I1030 19:48:17.844236  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.844247  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:17.844255  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:17.844326  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:17.879079  447486 cri.go:89] found id: ""
	I1030 19:48:17.879113  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.879125  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:17.879134  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:17.879202  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:17.916548  447486 cri.go:89] found id: ""
	I1030 19:48:17.916584  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.916594  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:17.916601  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:17.916654  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:17.950597  447486 cri.go:89] found id: ""
	I1030 19:48:17.950626  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.950635  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:17.950640  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:17.950695  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:17.985924  447486 cri.go:89] found id: ""
	I1030 19:48:17.985957  447486 logs.go:282] 0 containers: []
	W1030 19:48:17.985968  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:17.985980  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:17.985996  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:18.066211  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:18.066250  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:18.107228  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:18.107279  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:18.157508  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:18.157543  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:18.172208  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:18.172243  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:18.248100  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:20.748681  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:20.763369  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:20.763445  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:20.804288  447486 cri.go:89] found id: ""
	I1030 19:48:20.804323  447486 logs.go:282] 0 containers: []
	W1030 19:48:20.804336  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:20.804343  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:20.804410  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:20.838925  447486 cri.go:89] found id: ""
	I1030 19:48:20.838964  447486 logs.go:282] 0 containers: []
	W1030 19:48:20.838973  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:20.838979  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:20.839030  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:20.873560  447486 cri.go:89] found id: ""
	I1030 19:48:20.873596  447486 logs.go:282] 0 containers: []
	W1030 19:48:20.873608  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:20.873617  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:20.873681  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:20.908670  447486 cri.go:89] found id: ""
	I1030 19:48:20.908705  447486 logs.go:282] 0 containers: []
	W1030 19:48:20.908716  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:20.908723  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:20.908791  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:20.945901  447486 cri.go:89] found id: ""
	I1030 19:48:20.945929  447486 logs.go:282] 0 containers: []
	W1030 19:48:20.945937  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:20.945943  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:20.945991  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:20.980184  447486 cri.go:89] found id: ""
	I1030 19:48:20.980216  447486 logs.go:282] 0 containers: []
	W1030 19:48:20.980227  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:20.980236  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:20.980299  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:21.024243  447486 cri.go:89] found id: ""
	I1030 19:48:21.024272  447486 logs.go:282] 0 containers: []
	W1030 19:48:21.024284  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:21.024293  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:21.024366  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:21.063315  447486 cri.go:89] found id: ""
	I1030 19:48:21.063348  447486 logs.go:282] 0 containers: []
	W1030 19:48:21.063358  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:21.063370  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:21.063387  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:21.130434  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:21.130463  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:21.130480  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:21.209067  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:21.209107  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:21.251005  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:21.251035  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:21.303365  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:21.303402  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:18.981952  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:20.982372  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:18.439921  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:20.939869  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:22.940058  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:20.333700  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:22.833845  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:24.834560  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:23.817700  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:23.831060  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:23.831133  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:23.864299  447486 cri.go:89] found id: ""
	I1030 19:48:23.864334  447486 logs.go:282] 0 containers: []
	W1030 19:48:23.864346  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:23.864354  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:23.864420  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:23.900815  447486 cri.go:89] found id: ""
	I1030 19:48:23.900844  447486 logs.go:282] 0 containers: []
	W1030 19:48:23.900854  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:23.900869  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:23.900929  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:23.939888  447486 cri.go:89] found id: ""
	I1030 19:48:23.939917  447486 logs.go:282] 0 containers: []
	W1030 19:48:23.939928  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:23.939936  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:23.939999  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:23.975359  447486 cri.go:89] found id: ""
	I1030 19:48:23.975387  447486 logs.go:282] 0 containers: []
	W1030 19:48:23.975395  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:23.975401  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:23.975452  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:24.012779  447486 cri.go:89] found id: ""
	I1030 19:48:24.012819  447486 logs.go:282] 0 containers: []
	W1030 19:48:24.012832  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:24.012840  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:24.012908  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:24.048853  447486 cri.go:89] found id: ""
	I1030 19:48:24.048890  447486 logs.go:282] 0 containers: []
	W1030 19:48:24.048903  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:24.048912  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:24.048979  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:24.084744  447486 cri.go:89] found id: ""
	I1030 19:48:24.084784  447486 logs.go:282] 0 containers: []
	W1030 19:48:24.084797  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:24.084806  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:24.084860  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:24.121719  447486 cri.go:89] found id: ""
	I1030 19:48:24.121757  447486 logs.go:282] 0 containers: []
	W1030 19:48:24.121767  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:24.121777  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:24.121791  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:24.178691  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:24.178733  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:24.192885  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:24.192916  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:24.268771  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:24.268815  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:24.268832  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:24.349663  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:24.349699  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:23.481516  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:25.481700  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:27.481886  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:24.940106  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:26.940309  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:27.334165  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:29.834162  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:26.887325  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:26.900480  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:26.900558  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:26.936157  447486 cri.go:89] found id: ""
	I1030 19:48:26.936188  447486 logs.go:282] 0 containers: []
	W1030 19:48:26.936200  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:26.936207  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:26.936278  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:26.975580  447486 cri.go:89] found id: ""
	I1030 19:48:26.975615  447486 logs.go:282] 0 containers: []
	W1030 19:48:26.975626  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:26.975633  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:26.975705  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:27.010549  447486 cri.go:89] found id: ""
	I1030 19:48:27.010579  447486 logs.go:282] 0 containers: []
	W1030 19:48:27.010592  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:27.010600  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:27.010659  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:27.047505  447486 cri.go:89] found id: ""
	I1030 19:48:27.047541  447486 logs.go:282] 0 containers: []
	W1030 19:48:27.047553  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:27.047561  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:27.047628  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:27.083379  447486 cri.go:89] found id: ""
	I1030 19:48:27.083409  447486 logs.go:282] 0 containers: []
	W1030 19:48:27.083420  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:27.083429  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:27.083492  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:27.117912  447486 cri.go:89] found id: ""
	I1030 19:48:27.117954  447486 logs.go:282] 0 containers: []
	W1030 19:48:27.117967  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:27.117976  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:27.118049  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:27.151721  447486 cri.go:89] found id: ""
	I1030 19:48:27.151749  447486 logs.go:282] 0 containers: []
	W1030 19:48:27.151758  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:27.151765  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:27.151817  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:27.188940  447486 cri.go:89] found id: ""
	I1030 19:48:27.188981  447486 logs.go:282] 0 containers: []
	W1030 19:48:27.188989  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:27.188999  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:27.189011  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:27.243926  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:27.243960  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:27.258702  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:27.258731  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:27.326983  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:27.327023  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:27.327041  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:27.410761  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:27.410808  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:29.953219  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:29.967972  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:29.968078  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:30.003975  447486 cri.go:89] found id: ""
	I1030 19:48:30.004004  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.004014  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:30.004023  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:30.004097  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:30.041732  447486 cri.go:89] found id: ""
	I1030 19:48:30.041768  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.041780  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:30.041787  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:30.041863  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:30.078262  447486 cri.go:89] found id: ""
	I1030 19:48:30.078297  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.078308  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:30.078315  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:30.078379  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:30.116100  447486 cri.go:89] found id: ""
	I1030 19:48:30.116137  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.116149  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:30.116157  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:30.116229  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:30.150925  447486 cri.go:89] found id: ""
	I1030 19:48:30.150953  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.150964  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:30.150972  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:30.151041  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:30.192188  447486 cri.go:89] found id: ""
	I1030 19:48:30.192219  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.192230  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:30.192237  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:30.192314  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:30.231144  447486 cri.go:89] found id: ""
	I1030 19:48:30.231180  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.231192  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:30.231200  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:30.231277  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:30.271198  447486 cri.go:89] found id: ""
	I1030 19:48:30.271228  447486 logs.go:282] 0 containers: []
	W1030 19:48:30.271242  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:30.271265  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:30.271277  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:30.322750  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:30.322792  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:30.337745  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:30.337774  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:30.417198  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:30.417224  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:30.417240  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:30.503327  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:30.503364  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:29.982893  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:32.482051  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:29.440509  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:31.939517  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:32.333571  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:34.833482  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:33.047719  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:33.062330  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:33.062395  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:33.101049  447486 cri.go:89] found id: ""
	I1030 19:48:33.101088  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.101101  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:33.101108  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:33.101175  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:33.135236  447486 cri.go:89] found id: ""
	I1030 19:48:33.135268  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.135279  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:33.135286  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:33.135357  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:33.169279  447486 cri.go:89] found id: ""
	I1030 19:48:33.169314  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.169325  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:33.169333  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:33.169401  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:33.203336  447486 cri.go:89] found id: ""
	I1030 19:48:33.203380  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.203392  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:33.203401  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:33.203470  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:33.238223  447486 cri.go:89] found id: ""
	I1030 19:48:33.238258  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.238270  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:33.238279  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:33.238345  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:33.272891  447486 cri.go:89] found id: ""
	I1030 19:48:33.272925  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.272937  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:33.272946  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:33.273014  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:33.312452  447486 cri.go:89] found id: ""
	I1030 19:48:33.312480  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.312489  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:33.312496  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:33.312547  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:33.349041  447486 cri.go:89] found id: ""
	I1030 19:48:33.349076  447486 logs.go:282] 0 containers: []
	W1030 19:48:33.349091  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:33.349104  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:33.349130  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:33.430888  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:33.430940  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:33.469414  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:33.469444  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:33.518989  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:33.519022  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:33.532656  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:33.532690  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:33.605896  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:36.106207  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:36.120564  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:36.120646  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:36.156854  447486 cri.go:89] found id: ""
	I1030 19:48:36.156887  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.156900  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:36.156909  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:36.156988  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:36.195027  447486 cri.go:89] found id: ""
	I1030 19:48:36.195059  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.195072  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:36.195080  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:36.195150  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:36.235639  447486 cri.go:89] found id: ""
	I1030 19:48:36.235672  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.235683  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:36.235692  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:36.235758  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:36.281659  447486 cri.go:89] found id: ""
	I1030 19:48:36.281693  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.281702  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:36.281709  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:36.281762  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:36.315427  447486 cri.go:89] found id: ""
	I1030 19:48:36.315454  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.315463  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:36.315469  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:36.315531  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:36.353084  447486 cri.go:89] found id: ""
	I1030 19:48:36.353110  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.353120  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:36.353126  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:36.353197  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:36.388497  447486 cri.go:89] found id: ""
	I1030 19:48:36.388533  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.388545  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:36.388553  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:36.388616  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:36.423625  447486 cri.go:89] found id: ""
	I1030 19:48:36.423658  447486 logs.go:282] 0 containers: []
	W1030 19:48:36.423667  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:36.423676  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:36.423691  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:36.476722  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:36.476757  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:36.490669  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:36.490700  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:36.558587  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:36.558621  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:36.558639  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:36.635606  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:36.635654  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:34.482414  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:36.981552  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:34.439796  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:36.938335  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:37.333231  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:39.333707  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:39.174007  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:39.187709  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:39.187786  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:39.226131  447486 cri.go:89] found id: ""
	I1030 19:48:39.226165  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.226177  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:39.226185  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:39.226265  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:39.265963  447486 cri.go:89] found id: ""
	I1030 19:48:39.266003  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.266016  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:39.266024  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:39.266092  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:39.302586  447486 cri.go:89] found id: ""
	I1030 19:48:39.302624  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.302637  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:39.302645  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:39.302710  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:39.347869  447486 cri.go:89] found id: ""
	I1030 19:48:39.347903  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.347916  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:39.347924  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:39.347994  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:39.384252  447486 cri.go:89] found id: ""
	I1030 19:48:39.384280  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.384288  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:39.384294  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:39.384347  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:39.418847  447486 cri.go:89] found id: ""
	I1030 19:48:39.418876  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.418885  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:39.418891  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:39.418950  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:39.458408  447486 cri.go:89] found id: ""
	I1030 19:48:39.458454  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.458467  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:39.458480  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:39.458567  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:39.493889  447486 cri.go:89] found id: ""
	I1030 19:48:39.493923  447486 logs.go:282] 0 containers: []
	W1030 19:48:39.493934  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:39.493946  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:39.493959  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:39.548692  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:39.548746  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:39.562083  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:39.562110  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:39.633822  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:39.633845  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:39.633857  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:39.711765  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:39.711814  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:39.482010  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:41.981380  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:38.939254  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:40.940318  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:41.832456  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:43.832780  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:42.254337  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:42.268137  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:42.268202  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:42.303383  447486 cri.go:89] found id: ""
	I1030 19:48:42.303418  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.303428  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:42.303434  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:42.303501  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:42.349405  447486 cri.go:89] found id: ""
	I1030 19:48:42.349437  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.349447  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:42.349453  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:42.349504  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:42.384317  447486 cri.go:89] found id: ""
	I1030 19:48:42.384353  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.384363  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:42.384369  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:42.384424  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:42.418712  447486 cri.go:89] found id: ""
	I1030 19:48:42.418759  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.418768  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:42.418775  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:42.418833  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:42.454234  447486 cri.go:89] found id: ""
	I1030 19:48:42.454270  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.454280  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:42.454288  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:42.454362  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:42.488813  447486 cri.go:89] found id: ""
	I1030 19:48:42.488845  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.488855  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:42.488863  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:42.488929  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:42.525883  447486 cri.go:89] found id: ""
	I1030 19:48:42.525917  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.525929  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:42.525938  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:42.526006  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:42.561197  447486 cri.go:89] found id: ""
	I1030 19:48:42.561233  447486 logs.go:282] 0 containers: []
	W1030 19:48:42.561246  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:42.561259  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:42.561275  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:42.599818  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:42.599854  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:42.654341  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:42.654382  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:42.668163  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:42.668188  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:42.739630  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:42.739659  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:42.739671  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:45.316154  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:45.330372  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:45.330454  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:45.369093  447486 cri.go:89] found id: ""
	I1030 19:48:45.369125  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.369135  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:45.369141  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:45.369192  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:45.407681  447486 cri.go:89] found id: ""
	I1030 19:48:45.407715  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.407726  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:45.407732  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:45.407787  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:45.444445  447486 cri.go:89] found id: ""
	I1030 19:48:45.444474  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.444482  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:45.444488  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:45.444539  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:45.481538  447486 cri.go:89] found id: ""
	I1030 19:48:45.481570  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.481583  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:45.481591  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:45.481654  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:45.515088  447486 cri.go:89] found id: ""
	I1030 19:48:45.515123  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.515132  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:45.515139  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:45.515195  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:45.550085  447486 cri.go:89] found id: ""
	I1030 19:48:45.550133  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.550145  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:45.550152  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:45.550214  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:45.583950  447486 cri.go:89] found id: ""
	I1030 19:48:45.583985  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.583999  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:45.584008  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:45.584082  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:45.617320  447486 cri.go:89] found id: ""
	I1030 19:48:45.617349  447486 logs.go:282] 0 containers: []
	W1030 19:48:45.617358  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:45.617369  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:45.617389  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:45.668792  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:45.668833  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:45.683144  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:45.683178  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:45.758707  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:45.758732  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:45.758744  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:45.833807  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:45.833837  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:43.982806  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:46.480452  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:43.440702  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:45.938267  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:47.938396  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:45.833319  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:48.332420  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:48.374096  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:48.387812  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:48.387903  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:48.426958  447486 cri.go:89] found id: ""
	I1030 19:48:48.426987  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.426996  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:48.427002  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:48.427051  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:48.462216  447486 cri.go:89] found id: ""
	I1030 19:48:48.462249  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.462260  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:48.462268  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:48.462336  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:48.495666  447486 cri.go:89] found id: ""
	I1030 19:48:48.495699  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.495709  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:48.495716  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:48.495798  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:48.530653  447486 cri.go:89] found id: ""
	I1030 19:48:48.530686  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.530698  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:48.530709  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:48.530777  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:48.564788  447486 cri.go:89] found id: ""
	I1030 19:48:48.564826  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.564838  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:48.564846  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:48.564921  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:48.600735  447486 cri.go:89] found id: ""
	I1030 19:48:48.600772  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.600784  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:48.600793  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:48.600863  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:48.637063  447486 cri.go:89] found id: ""
	I1030 19:48:48.637095  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.637107  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:48.637115  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:48.637182  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:48.673279  447486 cri.go:89] found id: ""
	I1030 19:48:48.673314  447486 logs.go:282] 0 containers: []
	W1030 19:48:48.673334  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:48.673347  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:48.673362  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:48.724239  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:48.724280  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:48.738390  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:48.738425  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:48.812130  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:48.812155  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:48.812171  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:48.896253  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:48.896298  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:51.441155  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:51.454675  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:51.454751  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:51.490464  447486 cri.go:89] found id: ""
	I1030 19:48:51.490511  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.490523  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:51.490532  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:51.490600  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:51.525364  447486 cri.go:89] found id: ""
	I1030 19:48:51.525399  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.525411  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:51.525419  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:51.525485  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:51.559028  447486 cri.go:89] found id: ""
	I1030 19:48:51.559062  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.559071  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:51.559078  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:51.559139  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:51.595188  447486 cri.go:89] found id: ""
	I1030 19:48:51.595217  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.595225  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:51.595231  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:51.595300  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:51.628987  447486 cri.go:89] found id: ""
	I1030 19:48:51.629023  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.629039  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:51.629047  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:51.629119  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:51.663257  447486 cri.go:89] found id: ""
	I1030 19:48:51.663286  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.663295  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:51.663303  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:51.663368  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:51.712562  447486 cri.go:89] found id: ""
	I1030 19:48:51.712600  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.712613  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:51.712622  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:51.712684  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:51.761730  447486 cri.go:89] found id: ""
	I1030 19:48:51.761760  447486 logs.go:282] 0 containers: []
	W1030 19:48:51.761769  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:51.761779  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:51.761794  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:51.775595  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:51.775624  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:48:48.481851  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:50.980723  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:52.982177  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:49.939273  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:51.939972  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:50.333451  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:52.333773  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:54.835087  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	W1030 19:48:51.849120  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:51.849144  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:51.849157  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:51.931364  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:51.931403  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:51.971195  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:51.971229  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:54.525136  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:54.539137  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:54.539227  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:54.574281  447486 cri.go:89] found id: ""
	I1030 19:48:54.574316  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.574339  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:54.574348  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:54.574420  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:54.611109  447486 cri.go:89] found id: ""
	I1030 19:48:54.611149  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.611161  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:54.611170  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:54.611230  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:54.648396  447486 cri.go:89] found id: ""
	I1030 19:48:54.648428  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.648439  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:54.648447  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:54.648510  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:54.683834  447486 cri.go:89] found id: ""
	I1030 19:48:54.683871  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.683884  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:54.683892  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:54.683954  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:54.717391  447486 cri.go:89] found id: ""
	I1030 19:48:54.717421  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.717430  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:54.717436  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:54.717495  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:54.753783  447486 cri.go:89] found id: ""
	I1030 19:48:54.753812  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.753821  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:54.753827  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:54.753878  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:54.788231  447486 cri.go:89] found id: ""
	I1030 19:48:54.788270  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.788282  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:54.788291  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:54.788359  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:54.823949  447486 cri.go:89] found id: ""
	I1030 19:48:54.823989  447486 logs.go:282] 0 containers: []
	W1030 19:48:54.824001  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:54.824014  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:54.824052  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:54.838936  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:54.838967  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:54.911785  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:54.911812  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:54.911825  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:54.993268  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:54.993302  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:55.032557  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:55.032588  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:55.481330  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:57.482183  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:53.940343  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:56.439870  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:57.333262  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:59.333560  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:57.588726  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:48:57.603010  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:48:57.603085  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:48:57.636499  447486 cri.go:89] found id: ""
	I1030 19:48:57.636531  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.636542  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:48:57.636551  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:48:57.636624  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:48:57.671698  447486 cri.go:89] found id: ""
	I1030 19:48:57.671728  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.671739  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:48:57.671748  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:48:57.671815  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:48:57.707387  447486 cri.go:89] found id: ""
	I1030 19:48:57.707414  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.707422  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:48:57.707431  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:48:57.707482  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:48:57.745404  447486 cri.go:89] found id: ""
	I1030 19:48:57.745432  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.745440  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:48:57.745447  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:48:57.745507  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:48:57.784874  447486 cri.go:89] found id: ""
	I1030 19:48:57.784903  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.784912  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:48:57.784919  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:48:57.784984  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:48:57.824663  447486 cri.go:89] found id: ""
	I1030 19:48:57.824697  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.824707  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:48:57.824713  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:48:57.824773  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:48:57.862542  447486 cri.go:89] found id: ""
	I1030 19:48:57.862581  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.862593  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:48:57.862601  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:48:57.862669  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:48:57.897901  447486 cri.go:89] found id: ""
	I1030 19:48:57.897935  447486 logs.go:282] 0 containers: []
	W1030 19:48:57.897947  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:48:57.897959  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:48:57.897974  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:48:57.951898  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:48:57.951936  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:48:57.966282  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:48:57.966327  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:48:58.035515  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:48:58.035546  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:48:58.035562  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:48:58.114825  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:48:58.114876  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:00.705537  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:00.719589  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:00.719672  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:00.762299  447486 cri.go:89] found id: ""
	I1030 19:49:00.762330  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.762338  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:00.762356  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:00.762438  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:00.802228  447486 cri.go:89] found id: ""
	I1030 19:49:00.802259  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.802268  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:00.802275  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:00.802345  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:00.836531  447486 cri.go:89] found id: ""
	I1030 19:49:00.836557  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.836565  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:00.836572  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:00.836630  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:00.869332  447486 cri.go:89] found id: ""
	I1030 19:49:00.869360  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.869369  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:00.869375  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:00.869437  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:00.904643  447486 cri.go:89] found id: ""
	I1030 19:49:00.904675  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.904684  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:00.904691  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:00.904768  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:00.939020  447486 cri.go:89] found id: ""
	I1030 19:49:00.939050  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.939061  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:00.939068  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:00.939142  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:00.974586  447486 cri.go:89] found id: ""
	I1030 19:49:00.974625  447486 logs.go:282] 0 containers: []
	W1030 19:49:00.974638  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:00.974646  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:00.974707  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:01.009337  447486 cri.go:89] found id: ""
	I1030 19:49:01.009375  447486 logs.go:282] 0 containers: []
	W1030 19:49:01.009386  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:01.009399  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:01.009416  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:01.067087  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:01.067125  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:01.081681  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:01.081713  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:01.153057  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:01.153082  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:01.153096  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:01.236113  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:01.236153  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:48:59.981252  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:01.981799  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:48:58.938430  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:00.940905  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:01.333854  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:03.334325  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:03.774056  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:03.788395  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:03.788482  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:03.823847  447486 cri.go:89] found id: ""
	I1030 19:49:03.823880  447486 logs.go:282] 0 containers: []
	W1030 19:49:03.823892  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:03.823900  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:03.823973  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:03.864776  447486 cri.go:89] found id: ""
	I1030 19:49:03.864807  447486 logs.go:282] 0 containers: []
	W1030 19:49:03.864819  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:03.864827  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:03.864890  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:03.912516  447486 cri.go:89] found id: ""
	I1030 19:49:03.912572  447486 logs.go:282] 0 containers: []
	W1030 19:49:03.912585  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:03.912593  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:03.912660  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:03.962459  447486 cri.go:89] found id: ""
	I1030 19:49:03.962509  447486 logs.go:282] 0 containers: []
	W1030 19:49:03.962521  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:03.962530  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:03.962602  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:04.019107  447486 cri.go:89] found id: ""
	I1030 19:49:04.019143  447486 logs.go:282] 0 containers: []
	W1030 19:49:04.019152  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:04.019159  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:04.019217  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:04.054016  447486 cri.go:89] found id: ""
	I1030 19:49:04.054047  447486 logs.go:282] 0 containers: []
	W1030 19:49:04.054056  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:04.054063  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:04.054140  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:04.089907  447486 cri.go:89] found id: ""
	I1030 19:49:04.089938  447486 logs.go:282] 0 containers: []
	W1030 19:49:04.089948  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:04.089955  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:04.090007  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:04.128081  447486 cri.go:89] found id: ""
	I1030 19:49:04.128110  447486 logs.go:282] 0 containers: []
	W1030 19:49:04.128118  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:04.128128  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:04.128142  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:04.182419  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:04.182462  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:04.196909  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:04.196941  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:04.267267  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:04.267298  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:04.267317  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:04.346826  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:04.346876  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:03.984259  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:06.481362  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:03.438786  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:05.938707  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:07.939642  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:05.334541  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:07.834233  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:06.887266  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:06.902462  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:06.902554  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:06.938850  447486 cri.go:89] found id: ""
	I1030 19:49:06.938880  447486 logs.go:282] 0 containers: []
	W1030 19:49:06.938891  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:06.938899  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:06.938961  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:06.983284  447486 cri.go:89] found id: ""
	I1030 19:49:06.983315  447486 logs.go:282] 0 containers: []
	W1030 19:49:06.983330  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:06.983339  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:06.983406  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:07.016332  447486 cri.go:89] found id: ""
	I1030 19:49:07.016359  447486 logs.go:282] 0 containers: []
	W1030 19:49:07.016369  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:07.016375  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:07.016428  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:07.051425  447486 cri.go:89] found id: ""
	I1030 19:49:07.051459  447486 logs.go:282] 0 containers: []
	W1030 19:49:07.051471  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:07.051480  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:07.051550  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:07.083396  447486 cri.go:89] found id: ""
	I1030 19:49:07.083429  447486 logs.go:282] 0 containers: []
	W1030 19:49:07.083437  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:07.083444  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:07.083507  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:07.116616  447486 cri.go:89] found id: ""
	I1030 19:49:07.116646  447486 logs.go:282] 0 containers: []
	W1030 19:49:07.116654  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:07.116661  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:07.116728  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:07.149219  447486 cri.go:89] found id: ""
	I1030 19:49:07.149251  447486 logs.go:282] 0 containers: []
	W1030 19:49:07.149259  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:07.149265  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:07.149318  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:07.188404  447486 cri.go:89] found id: ""
	I1030 19:49:07.188435  447486 logs.go:282] 0 containers: []
	W1030 19:49:07.188444  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:07.188454  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:07.188468  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:07.247600  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:07.247640  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:07.262196  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:07.262231  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:07.332998  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:07.333031  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:07.333048  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:07.415322  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:07.415367  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:09.958278  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:09.972983  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:09.973068  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:10.016768  447486 cri.go:89] found id: ""
	I1030 19:49:10.016801  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.016810  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:10.016818  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:10.016885  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:10.052958  447486 cri.go:89] found id: ""
	I1030 19:49:10.052992  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.053002  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:10.053009  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:10.053063  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:10.089062  447486 cri.go:89] found id: ""
	I1030 19:49:10.089094  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.089105  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:10.089120  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:10.089196  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:10.126084  447486 cri.go:89] found id: ""
	I1030 19:49:10.126114  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.126123  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:10.126130  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:10.126182  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:10.171670  447486 cri.go:89] found id: ""
	I1030 19:49:10.171702  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.171712  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:10.171720  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:10.171785  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:10.210243  447486 cri.go:89] found id: ""
	I1030 19:49:10.210285  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.210293  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:10.210300  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:10.210366  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:10.253012  447486 cri.go:89] found id: ""
	I1030 19:49:10.253056  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.253069  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:10.253078  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:10.253155  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:10.287948  447486 cri.go:89] found id: ""
	I1030 19:49:10.287999  447486 logs.go:282] 0 containers: []
	W1030 19:49:10.288009  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:10.288021  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:10.288036  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:10.341362  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:10.341405  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:10.355769  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:10.355798  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:10.429469  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:10.429500  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:10.429518  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:10.509812  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:10.509851  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:08.488059  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:10.981606  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:12.982128  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:10.438903  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:12.939592  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:10.334087  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:12.336238  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:14.833365  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:13.053064  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:13.069063  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:13.069136  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:13.108457  447486 cri.go:89] found id: ""
	I1030 19:49:13.108492  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.108505  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:13.108513  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:13.108582  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:13.146481  447486 cri.go:89] found id: ""
	I1030 19:49:13.146523  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.146534  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:13.146542  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:13.146595  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:13.187088  447486 cri.go:89] found id: ""
	I1030 19:49:13.187118  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.187129  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:13.187137  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:13.187200  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:13.226913  447486 cri.go:89] found id: ""
	I1030 19:49:13.226948  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.226960  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:13.226968  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:13.227038  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:13.262632  447486 cri.go:89] found id: ""
	I1030 19:49:13.262661  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.262669  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:13.262676  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:13.262726  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:13.296877  447486 cri.go:89] found id: ""
	I1030 19:49:13.296906  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.296915  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:13.296922  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:13.296983  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:13.334907  447486 cri.go:89] found id: ""
	I1030 19:49:13.334939  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.334949  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:13.334956  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:13.335021  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:13.369386  447486 cri.go:89] found id: ""
	I1030 19:49:13.369430  447486 logs.go:282] 0 containers: []
	W1030 19:49:13.369443  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:13.369456  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:13.369472  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:13.423095  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:13.423130  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:13.437039  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:13.437067  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:13.512619  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:13.512648  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:13.512663  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:13.596982  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:13.597023  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:16.135623  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:16.150407  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:16.150502  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:16.188771  447486 cri.go:89] found id: ""
	I1030 19:49:16.188811  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.188823  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:16.188832  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:16.188907  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:16.221554  447486 cri.go:89] found id: ""
	I1030 19:49:16.221589  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.221598  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:16.221604  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:16.221655  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:16.255567  447486 cri.go:89] found id: ""
	I1030 19:49:16.255595  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.255609  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:16.255616  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:16.255667  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:16.289820  447486 cri.go:89] found id: ""
	I1030 19:49:16.289855  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.289866  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:16.289874  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:16.289935  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:16.324415  447486 cri.go:89] found id: ""
	I1030 19:49:16.324449  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.324464  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:16.324471  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:16.324533  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:16.360789  447486 cri.go:89] found id: ""
	I1030 19:49:16.360825  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.360848  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:16.360856  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:16.360922  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:16.395066  447486 cri.go:89] found id: ""
	I1030 19:49:16.395093  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.395101  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:16.395107  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:16.395158  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:16.429220  447486 cri.go:89] found id: ""
	I1030 19:49:16.429261  447486 logs.go:282] 0 containers: []
	W1030 19:49:16.429273  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:16.429286  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:16.429305  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:16.481209  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:16.481250  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:16.495353  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:16.495383  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:16.563979  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:16.564006  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:16.564022  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:16.645166  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:16.645205  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:15.481438  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:17.482846  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:15.440389  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:17.938724  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:16.833433  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:19.335773  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:19.185478  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:19.199270  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:19.199337  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:19.242426  447486 cri.go:89] found id: ""
	I1030 19:49:19.242455  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.242464  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:19.242474  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:19.242556  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:19.284061  447486 cri.go:89] found id: ""
	I1030 19:49:19.284092  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.284102  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:19.284108  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:19.284178  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:19.317373  447486 cri.go:89] found id: ""
	I1030 19:49:19.317407  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.317420  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:19.317428  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:19.317491  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:19.354222  447486 cri.go:89] found id: ""
	I1030 19:49:19.354250  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.354259  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:19.354267  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:19.354329  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:19.392948  447486 cri.go:89] found id: ""
	I1030 19:49:19.392980  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.392989  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:19.392996  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:19.393053  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:19.438023  447486 cri.go:89] found id: ""
	I1030 19:49:19.438055  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.438066  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:19.438074  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:19.438144  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:19.472179  447486 cri.go:89] found id: ""
	I1030 19:49:19.472208  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.472218  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:19.472226  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:19.472283  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:19.507164  447486 cri.go:89] found id: ""
	I1030 19:49:19.507195  447486 logs.go:282] 0 containers: []
	W1030 19:49:19.507203  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:19.507213  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:19.507226  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:19.520898  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:19.520935  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:19.592204  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:19.592234  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:19.592263  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:19.668994  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:19.669045  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:19.707208  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:19.707240  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:19.981085  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:21.981344  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:19.939994  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:22.439696  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:21.833592  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:24.333379  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:22.263035  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:22.276999  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:22.277089  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:22.310969  447486 cri.go:89] found id: ""
	I1030 19:49:22.311006  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.311017  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:22.311026  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:22.311097  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:22.346282  447486 cri.go:89] found id: ""
	I1030 19:49:22.346311  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.346324  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:22.346332  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:22.346401  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:22.384324  447486 cri.go:89] found id: ""
	I1030 19:49:22.384354  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.384372  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:22.384381  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:22.384441  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:22.419465  447486 cri.go:89] found id: ""
	I1030 19:49:22.419498  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.419509  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:22.419518  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:22.419582  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:22.456161  447486 cri.go:89] found id: ""
	I1030 19:49:22.456196  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.456204  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:22.456211  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:22.456280  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:22.489075  447486 cri.go:89] found id: ""
	I1030 19:49:22.489102  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.489110  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:22.489119  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:22.489181  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:22.521752  447486 cri.go:89] found id: ""
	I1030 19:49:22.521780  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.521789  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:22.521796  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:22.521847  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:22.554946  447486 cri.go:89] found id: ""
	I1030 19:49:22.554985  447486 logs.go:282] 0 containers: []
	W1030 19:49:22.554997  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:22.555010  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:22.555025  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:22.567877  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:22.567909  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:22.640062  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:22.640094  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:22.640110  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:22.714946  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:22.714985  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:22.755560  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:22.755595  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:25.306379  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:25.320883  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:25.320963  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:25.356737  447486 cri.go:89] found id: ""
	I1030 19:49:25.356771  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.356782  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:25.356791  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:25.356856  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:25.393371  447486 cri.go:89] found id: ""
	I1030 19:49:25.393409  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.393420  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:25.393429  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:25.393500  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:25.428379  447486 cri.go:89] found id: ""
	I1030 19:49:25.428411  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.428425  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:25.428433  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:25.428505  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:25.473516  447486 cri.go:89] found id: ""
	I1030 19:49:25.473551  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.473562  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:25.473572  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:25.473649  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:25.512508  447486 cri.go:89] found id: ""
	I1030 19:49:25.512535  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.512544  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:25.512550  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:25.512611  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:25.547646  447486 cri.go:89] found id: ""
	I1030 19:49:25.547691  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.547705  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:25.547713  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:25.547782  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:25.582314  447486 cri.go:89] found id: ""
	I1030 19:49:25.582347  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.582356  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:25.582364  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:25.582415  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:25.617305  447486 cri.go:89] found id: ""
	I1030 19:49:25.617343  447486 logs.go:282] 0 containers: []
	W1030 19:49:25.617354  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:25.617367  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:25.617383  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:25.658245  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:25.658283  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:25.710559  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:25.710598  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:25.724961  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:25.724995  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:25.796252  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:25.796283  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:25.796300  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:23.984899  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:25.985999  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:24.939599  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:27.440032  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:26.334407  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:28.334588  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:28.374633  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:28.389468  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:28.389549  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:28.425747  447486 cri.go:89] found id: ""
	I1030 19:49:28.425780  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.425792  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:28.425800  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:28.425956  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:28.465221  447486 cri.go:89] found id: ""
	I1030 19:49:28.465258  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.465291  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:28.465303  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:28.465371  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:28.504184  447486 cri.go:89] found id: ""
	I1030 19:49:28.504217  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.504230  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:28.504240  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:28.504295  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:28.536198  447486 cri.go:89] found id: ""
	I1030 19:49:28.536234  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.536247  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:28.536255  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:28.536340  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:28.572194  447486 cri.go:89] found id: ""
	I1030 19:49:28.572228  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.572240  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:28.572248  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:28.572312  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:28.608794  447486 cri.go:89] found id: ""
	I1030 19:49:28.608826  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.608838  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:28.608846  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:28.608914  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:28.641664  447486 cri.go:89] found id: ""
	I1030 19:49:28.641698  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.641706  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:28.641714  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:28.641768  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:28.675756  447486 cri.go:89] found id: ""
	I1030 19:49:28.675790  447486 logs.go:282] 0 containers: []
	W1030 19:49:28.675800  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:28.675812  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:28.675829  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:28.690203  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:28.690237  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:28.755647  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:28.755674  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:28.755690  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:28.837116  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:28.837149  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:28.877195  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:28.877232  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:31.428091  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:31.442537  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:31.442619  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:31.479911  447486 cri.go:89] found id: ""
	I1030 19:49:31.479942  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.479953  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:31.479961  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:31.480029  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:31.517015  447486 cri.go:89] found id: ""
	I1030 19:49:31.517042  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.517050  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:31.517056  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:31.517107  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:31.549858  447486 cri.go:89] found id: ""
	I1030 19:49:31.549891  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.549900  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:31.549907  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:31.549971  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:31.583490  447486 cri.go:89] found id: ""
	I1030 19:49:31.583524  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.583536  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:31.583551  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:31.583618  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:31.618270  447486 cri.go:89] found id: ""
	I1030 19:49:31.618308  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.618320  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:31.618328  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:31.618397  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:31.655416  447486 cri.go:89] found id: ""
	I1030 19:49:31.655448  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.655460  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:31.655468  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:31.655530  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:31.689708  447486 cri.go:89] found id: ""
	I1030 19:49:31.689740  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.689751  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:31.689759  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:31.689823  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:31.724179  447486 cri.go:89] found id: ""
	I1030 19:49:31.724208  447486 logs.go:282] 0 containers: []
	W1030 19:49:31.724219  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:31.724233  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:31.724249  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:31.774900  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:31.774939  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:31.788606  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:31.788635  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:49:28.481673  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:30.980999  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:32.982429  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:29.938506  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:31.940276  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:30.834322  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:33.333091  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	W1030 19:49:31.861360  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:31.861385  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:31.861398  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:31.935856  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:31.935896  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:34.477313  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:34.491530  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:34.491597  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:34.525105  447486 cri.go:89] found id: ""
	I1030 19:49:34.525136  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.525145  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:34.525153  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:34.525215  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:34.560449  447486 cri.go:89] found id: ""
	I1030 19:49:34.560483  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.560495  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:34.560503  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:34.560558  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:34.595278  447486 cri.go:89] found id: ""
	I1030 19:49:34.595325  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.595335  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:34.595342  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:34.595395  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:34.628486  447486 cri.go:89] found id: ""
	I1030 19:49:34.628521  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.628533  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:34.628542  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:34.628614  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:34.663410  447486 cri.go:89] found id: ""
	I1030 19:49:34.663438  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.663448  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:34.663456  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:34.663520  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:34.697053  447486 cri.go:89] found id: ""
	I1030 19:49:34.697086  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.697099  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:34.697107  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:34.697178  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:34.730910  447486 cri.go:89] found id: ""
	I1030 19:49:34.730943  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.730955  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:34.730963  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:34.731034  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:34.765725  447486 cri.go:89] found id: ""
	I1030 19:49:34.765762  447486 logs.go:282] 0 containers: []
	W1030 19:49:34.765774  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:34.765786  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:34.765807  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:34.802750  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:34.802786  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:34.853576  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:34.853614  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:34.868102  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:34.868139  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:34.939985  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:34.940015  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:34.940027  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:35.480658  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:37.481068  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:34.442576  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:36.940088  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:35.333400  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:37.334425  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:39.833330  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:37.516479  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:37.529386  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:37.529453  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:37.565889  447486 cri.go:89] found id: ""
	I1030 19:49:37.565923  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.565936  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:37.565945  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:37.566007  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:37.598771  447486 cri.go:89] found id: ""
	I1030 19:49:37.598801  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.598811  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:37.598817  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:37.598869  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:37.632678  447486 cri.go:89] found id: ""
	I1030 19:49:37.632705  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.632714  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:37.632735  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:37.632795  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:37.666642  447486 cri.go:89] found id: ""
	I1030 19:49:37.666673  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.666682  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:37.666688  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:37.666748  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:37.701203  447486 cri.go:89] found id: ""
	I1030 19:49:37.701233  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.701242  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:37.701249  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:37.701324  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:37.735614  447486 cri.go:89] found id: ""
	I1030 19:49:37.735649  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.735661  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:37.735669  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:37.735738  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:37.771381  447486 cri.go:89] found id: ""
	I1030 19:49:37.771418  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.771430  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:37.771439  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:37.771501  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:37.807870  447486 cri.go:89] found id: ""
	I1030 19:49:37.807908  447486 logs.go:282] 0 containers: []
	W1030 19:49:37.807922  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:37.807935  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:37.807952  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:37.860334  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:37.860367  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:37.874340  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:37.874371  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:37.952874  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:37.952903  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:37.952916  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:38.045318  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:38.045356  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:40.591278  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:40.604970  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:40.605050  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:40.639839  447486 cri.go:89] found id: ""
	I1030 19:49:40.639869  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.639880  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:40.639889  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:40.639952  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:40.674046  447486 cri.go:89] found id: ""
	I1030 19:49:40.674077  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.674087  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:40.674093  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:40.674164  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:40.710759  447486 cri.go:89] found id: ""
	I1030 19:49:40.710794  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.710806  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:40.710815  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:40.710880  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:40.752439  447486 cri.go:89] found id: ""
	I1030 19:49:40.752471  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.752484  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:40.752493  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:40.752548  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:40.787985  447486 cri.go:89] found id: ""
	I1030 19:49:40.788021  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.788034  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:40.788042  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:40.788102  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:40.829282  447486 cri.go:89] found id: ""
	I1030 19:49:40.829320  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.829333  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:40.829341  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:40.829409  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:40.863911  447486 cri.go:89] found id: ""
	I1030 19:49:40.863944  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.863953  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:40.863959  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:40.864026  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:40.901239  447486 cri.go:89] found id: ""
	I1030 19:49:40.901275  447486 logs.go:282] 0 containers: []
	W1030 19:49:40.901287  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:40.901300  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:40.901321  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:40.955283  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:40.955323  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:40.968733  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:40.968766  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:41.040213  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:41.040242  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:41.040256  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:41.125992  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:41.126035  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:39.481593  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:41.483403  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:39.441009  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:41.939182  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:41.834082  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:44.332428  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:43.667949  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:43.681633  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:43.681705  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:43.725038  447486 cri.go:89] found id: ""
	I1030 19:49:43.725076  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.725085  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:43.725091  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:43.725149  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:43.761438  447486 cri.go:89] found id: ""
	I1030 19:49:43.761473  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.761486  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:43.761494  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:43.761566  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:43.795299  447486 cri.go:89] found id: ""
	I1030 19:49:43.795335  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.795347  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:43.795355  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:43.795431  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:43.830545  447486 cri.go:89] found id: ""
	I1030 19:49:43.830582  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.830594  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:43.830601  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:43.830670  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:43.867632  447486 cri.go:89] found id: ""
	I1030 19:49:43.867664  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.867676  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:43.867684  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:43.867753  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:43.901315  447486 cri.go:89] found id: ""
	I1030 19:49:43.901346  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.901355  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:43.901361  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:43.901412  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:43.934928  447486 cri.go:89] found id: ""
	I1030 19:49:43.934963  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.934975  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:43.934983  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:43.935048  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:43.975407  447486 cri.go:89] found id: ""
	I1030 19:49:43.975441  447486 logs.go:282] 0 containers: []
	W1030 19:49:43.975451  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:43.975472  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:43.975497  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:44.019281  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:44.019310  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:44.072363  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:44.072402  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:44.085508  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:44.085538  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:44.159634  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:44.159666  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:44.159682  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:46.739662  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:46.753190  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:46.753252  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:46.790167  447486 cri.go:89] found id: ""
	I1030 19:49:46.790202  447486 logs.go:282] 0 containers: []
	W1030 19:49:46.790211  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:46.790217  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:46.790272  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:43.988689  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:46.481139  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:43.939246  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:46.438847  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:46.333066  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:48.335463  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:46.828187  447486 cri.go:89] found id: ""
	I1030 19:49:46.828221  447486 logs.go:282] 0 containers: []
	W1030 19:49:46.828230  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:46.828237  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:46.828305  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:46.865499  447486 cri.go:89] found id: ""
	I1030 19:49:46.865539  447486 logs.go:282] 0 containers: []
	W1030 19:49:46.865551  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:46.865559  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:46.865612  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:46.899591  447486 cri.go:89] found id: ""
	I1030 19:49:46.899616  447486 logs.go:282] 0 containers: []
	W1030 19:49:46.899625  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:46.899632  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:46.899681  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:46.934818  447486 cri.go:89] found id: ""
	I1030 19:49:46.934850  447486 logs.go:282] 0 containers: []
	W1030 19:49:46.934860  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:46.934868  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:46.934933  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:46.971298  447486 cri.go:89] found id: ""
	I1030 19:49:46.971328  447486 logs.go:282] 0 containers: []
	W1030 19:49:46.971340  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:46.971349  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:46.971418  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:47.010783  447486 cri.go:89] found id: ""
	I1030 19:49:47.010814  447486 logs.go:282] 0 containers: []
	W1030 19:49:47.010825  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:47.010832  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:47.010896  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:47.044343  447486 cri.go:89] found id: ""
	I1030 19:49:47.044380  447486 logs.go:282] 0 containers: []
	W1030 19:49:47.044392  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:47.044405  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:47.044421  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:47.094425  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:47.094459  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:47.110339  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:47.110368  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:47.183262  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:47.183290  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:47.183305  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:47.262611  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:47.262651  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:49.808195  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:49.821889  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:49.821963  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:49.857296  447486 cri.go:89] found id: ""
	I1030 19:49:49.857339  447486 logs.go:282] 0 containers: []
	W1030 19:49:49.857351  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:49.857359  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:49.857413  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:49.892614  447486 cri.go:89] found id: ""
	I1030 19:49:49.892648  447486 logs.go:282] 0 containers: []
	W1030 19:49:49.892660  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:49.892668  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:49.892732  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:49.929835  447486 cri.go:89] found id: ""
	I1030 19:49:49.929862  447486 logs.go:282] 0 containers: []
	W1030 19:49:49.929871  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:49.929878  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:49.929940  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:49.965341  447486 cri.go:89] found id: ""
	I1030 19:49:49.965371  447486 logs.go:282] 0 containers: []
	W1030 19:49:49.965379  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:49.965392  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:49.965449  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:50.000134  447486 cri.go:89] found id: ""
	I1030 19:49:50.000165  447486 logs.go:282] 0 containers: []
	W1030 19:49:50.000177  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:50.000188  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:50.000259  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:50.033848  447486 cri.go:89] found id: ""
	I1030 19:49:50.033876  447486 logs.go:282] 0 containers: []
	W1030 19:49:50.033885  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:50.033891  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:50.033943  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:50.073315  447486 cri.go:89] found id: ""
	I1030 19:49:50.073344  447486 logs.go:282] 0 containers: []
	W1030 19:49:50.073354  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:50.073360  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:50.073421  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:50.114232  447486 cri.go:89] found id: ""
	I1030 19:49:50.114266  447486 logs.go:282] 0 containers: []
	W1030 19:49:50.114277  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:50.114290  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:50.114311  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:50.185407  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:50.185434  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:50.185448  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:50.270447  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:50.270494  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:50.308825  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:50.308855  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:50.363376  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:50.363417  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:48.982027  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:51.482972  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:48.439801  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:50.939120  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:50.833062  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:52.833132  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:54.834352  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:52.878475  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:52.892013  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:52.892088  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:52.928085  447486 cri.go:89] found id: ""
	I1030 19:49:52.928117  447486 logs.go:282] 0 containers: []
	W1030 19:49:52.928126  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:52.928132  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:52.928185  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:52.963377  447486 cri.go:89] found id: ""
	I1030 19:49:52.963413  447486 logs.go:282] 0 containers: []
	W1030 19:49:52.963426  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:52.963434  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:52.963493  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:53.000799  447486 cri.go:89] found id: ""
	I1030 19:49:53.000825  447486 logs.go:282] 0 containers: []
	W1030 19:49:53.000834  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:53.000840  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:53.000912  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:53.037429  447486 cri.go:89] found id: ""
	I1030 19:49:53.037463  447486 logs.go:282] 0 containers: []
	W1030 19:49:53.037472  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:53.037478  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:53.037534  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:53.072392  447486 cri.go:89] found id: ""
	I1030 19:49:53.072425  447486 logs.go:282] 0 containers: []
	W1030 19:49:53.072433  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:53.072446  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:53.072520  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:53.108925  447486 cri.go:89] found id: ""
	I1030 19:49:53.108957  447486 logs.go:282] 0 containers: []
	W1030 19:49:53.108970  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:53.108978  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:53.109050  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:53.145409  447486 cri.go:89] found id: ""
	I1030 19:49:53.145445  447486 logs.go:282] 0 containers: []
	W1030 19:49:53.145457  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:53.145466  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:53.145536  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:53.180756  447486 cri.go:89] found id: ""
	I1030 19:49:53.180784  447486 logs.go:282] 0 containers: []
	W1030 19:49:53.180793  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:53.180803  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:53.180817  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:53.234960  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:53.235010  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:53.249224  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:53.249255  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:53.313223  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:53.313245  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:53.313264  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:53.399715  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:53.399758  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:55.944332  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:55.961546  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:55.961616  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:56.020603  447486 cri.go:89] found id: ""
	I1030 19:49:56.020634  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.020647  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:56.020654  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:56.020725  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:56.065134  447486 cri.go:89] found id: ""
	I1030 19:49:56.065162  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.065170  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:56.065176  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:56.065239  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:56.101358  447486 cri.go:89] found id: ""
	I1030 19:49:56.101386  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.101396  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:56.101405  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:56.101473  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:56.135762  447486 cri.go:89] found id: ""
	I1030 19:49:56.135795  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.135805  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:56.135811  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:56.135863  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:56.171336  447486 cri.go:89] found id: ""
	I1030 19:49:56.171371  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.171383  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:56.171391  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:56.171461  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:56.205643  447486 cri.go:89] found id: ""
	I1030 19:49:56.205674  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.205685  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:56.205693  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:56.205759  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:56.240853  447486 cri.go:89] found id: ""
	I1030 19:49:56.240885  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.240894  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:56.240901  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:56.240973  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:56.276577  447486 cri.go:89] found id: ""
	I1030 19:49:56.276612  447486 logs.go:282] 0 containers: []
	W1030 19:49:56.276623  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:56.276636  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:56.276651  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:56.328180  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:56.328220  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:56.341895  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:56.341923  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:56.414492  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
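
The "connection to the server localhost:8443 was refused" above means nothing is answering on this node's apiserver port, so the describe-nodes step cannot work. A minimal sketch for confirming that by hand on the node (assumes shell access to the VM; ss and curl are not part of the test run itself):

    # is anything listening on the apiserver port?
    sudo ss -ltnp | grep ':8443' || echo "no listener on 8443"
    # probe the same endpoint kubectl tried; -k skips certificate verification
    curl -sk https://localhost:8443/healthz || echo "connection refused"
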
	I1030 19:49:56.414523  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:56.414540  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:56.498439  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:56.498498  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:53.980916  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:55.983077  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:53.439070  446887 pod_ready.go:103] pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:54.940107  446887 pod_ready.go:82] duration metric: took 4m0.007533629s for pod "metrics-server-6867b74b74-t85rd" in "kube-system" namespace to be "Ready" ...
	E1030 19:49:54.940137  446887 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1030 19:49:54.940149  446887 pod_ready.go:39] duration metric: took 4m6.552777198s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
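
The 4m0s figure above is minikube's internal readiness poll for the metrics-server pod timing out. Roughly the same condition can be checked directly with kubectl; this is a sketch only, and the k8s-app=metrics-server label is assumed from the usual metrics-server manifest rather than taken from this log:

    kubectl -n kube-system wait --for=condition=Ready \
      pod -l k8s-app=metrics-server --timeout=4m
    # or inspect the condition that keeps reporting "Ready":"False"
    kubectl -n kube-system get pod metrics-server-6867b74b74-t85rd \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")]}'
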
	I1030 19:49:54.940170  446887 api_server.go:52] waiting for apiserver process to appear ...
	I1030 19:49:54.940206  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:54.940264  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:54.992682  446887 cri.go:89] found id: "549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f"
	I1030 19:49:54.992715  446887 cri.go:89] found id: ""
	I1030 19:49:54.992727  446887 logs.go:282] 1 containers: [549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f]
	I1030 19:49:54.992790  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:54.997251  446887 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:54.997313  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:55.034504  446887 cri.go:89] found id: "a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5"
	I1030 19:49:55.034542  446887 cri.go:89] found id: ""
	I1030 19:49:55.034552  446887 logs.go:282] 1 containers: [a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5]
	I1030 19:49:55.034616  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:55.039551  446887 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:55.039624  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:55.083294  446887 cri.go:89] found id: "87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2"
	I1030 19:49:55.083326  446887 cri.go:89] found id: ""
	I1030 19:49:55.083336  446887 logs.go:282] 1 containers: [87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2]
	I1030 19:49:55.083407  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:55.087866  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:55.087932  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:55.125250  446887 cri.go:89] found id: "0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf"
	I1030 19:49:55.125353  446887 cri.go:89] found id: ""
	I1030 19:49:55.125372  446887 logs.go:282] 1 containers: [0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf]
	I1030 19:49:55.125446  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:55.130688  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:55.130747  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:55.168792  446887 cri.go:89] found id: "2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6"
	I1030 19:49:55.168814  446887 cri.go:89] found id: ""
	I1030 19:49:55.168822  446887 logs.go:282] 1 containers: [2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6]
	I1030 19:49:55.168877  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:55.173360  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:55.173424  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:55.209566  446887 cri.go:89] found id: "ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34"
	I1030 19:49:55.209590  446887 cri.go:89] found id: ""
	I1030 19:49:55.209599  446887 logs.go:282] 1 containers: [ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34]
	I1030 19:49:55.209659  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:55.214190  446887 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:55.214263  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:55.257056  446887 cri.go:89] found id: ""
	I1030 19:49:55.257091  446887 logs.go:282] 0 containers: []
	W1030 19:49:55.257103  446887 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:55.257111  446887 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1030 19:49:55.257165  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1030 19:49:55.300194  446887 cri.go:89] found id: "60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034"
	I1030 19:49:55.300224  446887 cri.go:89] found id: "8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6"
	I1030 19:49:55.300229  446887 cri.go:89] found id: ""
	I1030 19:49:55.300238  446887 logs.go:282] 2 containers: [60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034 8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6]
	I1030 19:49:55.300290  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:55.304750  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:55.309249  446887 logs.go:123] Gathering logs for etcd [a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5] ...
	I1030 19:49:55.309276  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5"
	I1030 19:49:55.363959  446887 logs.go:123] Gathering logs for coredns [87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2] ...
	I1030 19:49:55.363994  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2"
	I1030 19:49:55.412667  446887 logs.go:123] Gathering logs for kube-scheduler [0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf] ...
	I1030 19:49:55.412703  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf"
	I1030 19:49:55.455381  446887 logs.go:123] Gathering logs for kube-proxy [2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6] ...
	I1030 19:49:55.455420  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6"
	I1030 19:49:55.494657  446887 logs.go:123] Gathering logs for kube-controller-manager [ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34] ...
	I1030 19:49:55.494689  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34"
	I1030 19:49:55.552740  446887 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:55.552773  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:55.627724  446887 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:55.627765  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:55.642263  446887 logs.go:123] Gathering logs for kube-apiserver [549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f] ...
	I1030 19:49:55.642300  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f"
	I1030 19:49:55.691079  446887 logs.go:123] Gathering logs for storage-provisioner [60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034] ...
	I1030 19:49:55.691111  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034"
	I1030 19:49:55.730111  446887 logs.go:123] Gathering logs for container status ...
	I1030 19:49:55.730151  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:49:55.785155  446887 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:55.785189  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:49:55.924592  446887 logs.go:123] Gathering logs for storage-provisioner [8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6] ...
	I1030 19:49:55.924633  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6"
	I1030 19:49:55.970229  446887 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:55.970267  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
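
Each "Gathering logs for <component> [<id>]" step above first resolves a container ID by name and then tails its log through crictl. Reproduced by hand for one component it looks roughly like this (sketch; assumes crictl at /usr/bin/crictl, as in the logged commands):

    # resolve the newest kube-apiserver container and tail its log
    id=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
    [ -n "$id" ] && sudo /usr/bin/crictl logs --tail 400 "$id"
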
	I1030 19:49:57.333378  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:59.334394  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:59.039071  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:59.053648  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:59.053722  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:59.097620  447486 cri.go:89] found id: ""
	I1030 19:49:59.097650  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.097661  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:49:59.097669  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:59.097738  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:59.139136  447486 cri.go:89] found id: ""
	I1030 19:49:59.139176  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.139188  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:49:59.139199  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:59.139270  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:59.180322  447486 cri.go:89] found id: ""
	I1030 19:49:59.180361  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.180371  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:49:59.180384  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:59.180453  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:59.217374  447486 cri.go:89] found id: ""
	I1030 19:49:59.217422  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.217434  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:49:59.217443  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:59.217498  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:59.257857  447486 cri.go:89] found id: ""
	I1030 19:49:59.257884  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.257894  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:49:59.257901  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:59.257968  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:59.297679  447486 cri.go:89] found id: ""
	I1030 19:49:59.297713  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.297724  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:49:59.297733  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:59.297795  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:59.341469  447486 cri.go:89] found id: ""
	I1030 19:49:59.341499  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.341509  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:59.341517  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:49:59.341587  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:49:59.381677  447486 cri.go:89] found id: ""
	I1030 19:49:59.381704  447486 logs.go:282] 0 containers: []
	W1030 19:49:59.381713  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:49:59.381723  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:59.381735  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:59.441396  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:59.441428  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:59.457105  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:49:59.457139  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:49:59.532023  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:49:59.532051  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:59.532064  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:49:59.621685  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:49:59.621720  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
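
For this profile every component probe keeps coming back empty (0 containers), so the gathered output is limited to kubelet, dmesg, CRI-O and "container status". A quick manual check of whether kubelet and CRI-O are doing anything at all on such a node might look like this (a sketch, not part of the test run):

    sudo systemctl is-active kubelet crio
    sudo journalctl -u kubelet -n 50 --no-pager
    sudo crictl ps -a        # empty output matches the 'found id: ""' lines above
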
	I1030 19:49:58.481425  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:00.481912  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:02.482130  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:49:59.010542  446887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:49:59.027463  446887 api_server.go:72] duration metric: took 4m17.923507495s to wait for apiserver process to appear ...
	I1030 19:49:59.027488  446887 api_server.go:88] waiting for apiserver healthz status ...
	I1030 19:49:59.027524  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:49:59.027571  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:49:59.066364  446887 cri.go:89] found id: "549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f"
	I1030 19:49:59.066391  446887 cri.go:89] found id: ""
	I1030 19:49:59.066401  446887 logs.go:282] 1 containers: [549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f]
	I1030 19:49:59.066463  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.072454  446887 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:49:59.072535  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:49:59.118043  446887 cri.go:89] found id: "a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5"
	I1030 19:49:59.118072  446887 cri.go:89] found id: ""
	I1030 19:49:59.118081  446887 logs.go:282] 1 containers: [a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5]
	I1030 19:49:59.118142  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.122806  446887 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:49:59.122883  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:49:59.167475  446887 cri.go:89] found id: "87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2"
	I1030 19:49:59.167500  446887 cri.go:89] found id: ""
	I1030 19:49:59.167511  446887 logs.go:282] 1 containers: [87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2]
	I1030 19:49:59.167577  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.172181  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:49:59.172255  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:49:59.210384  446887 cri.go:89] found id: "0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf"
	I1030 19:49:59.210411  446887 cri.go:89] found id: ""
	I1030 19:49:59.210419  446887 logs.go:282] 1 containers: [0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf]
	I1030 19:49:59.210473  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.216032  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:49:59.216114  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:49:59.269770  446887 cri.go:89] found id: "2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6"
	I1030 19:49:59.269791  446887 cri.go:89] found id: ""
	I1030 19:49:59.269799  446887 logs.go:282] 1 containers: [2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6]
	I1030 19:49:59.269851  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.274161  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:49:59.274239  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:49:59.313907  446887 cri.go:89] found id: "ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34"
	I1030 19:49:59.313936  446887 cri.go:89] found id: ""
	I1030 19:49:59.313946  446887 logs.go:282] 1 containers: [ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34]
	I1030 19:49:59.314019  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.320687  446887 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:49:59.320766  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:49:59.367710  446887 cri.go:89] found id: ""
	I1030 19:49:59.367740  446887 logs.go:282] 0 containers: []
	W1030 19:49:59.367752  446887 logs.go:284] No container was found matching "kindnet"
	I1030 19:49:59.367759  446887 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1030 19:49:59.367826  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1030 19:49:59.422716  446887 cri.go:89] found id: "60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034"
	I1030 19:49:59.422744  446887 cri.go:89] found id: "8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6"
	I1030 19:49:59.422750  446887 cri.go:89] found id: ""
	I1030 19:49:59.422763  446887 logs.go:282] 2 containers: [60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034 8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6]
	I1030 19:49:59.422827  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.428399  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:49:59.432404  446887 logs.go:123] Gathering logs for storage-provisioner [8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6] ...
	I1030 19:49:59.432429  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6"
	I1030 19:49:59.475798  446887 logs.go:123] Gathering logs for kubelet ...
	I1030 19:49:59.475839  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:49:59.548960  446887 logs.go:123] Gathering logs for dmesg ...
	I1030 19:49:59.548998  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:49:59.566839  446887 logs.go:123] Gathering logs for kube-proxy [2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6] ...
	I1030 19:49:59.566870  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6"
	I1030 19:49:59.606181  446887 logs.go:123] Gathering logs for kube-controller-manager [ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34] ...
	I1030 19:49:59.606210  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34"
	I1030 19:49:59.670134  446887 logs.go:123] Gathering logs for storage-provisioner [60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034] ...
	I1030 19:49:59.670170  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034"
	I1030 19:49:59.709224  446887 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:49:59.709253  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:00.132147  446887 logs.go:123] Gathering logs for container status ...
	I1030 19:50:00.132194  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:50:00.181124  446887 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:00.181171  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:50:00.306545  446887 logs.go:123] Gathering logs for kube-apiserver [549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f] ...
	I1030 19:50:00.306585  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f"
	I1030 19:50:00.352129  446887 logs.go:123] Gathering logs for etcd [a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5] ...
	I1030 19:50:00.352169  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5"
	I1030 19:50:00.398083  446887 logs.go:123] Gathering logs for coredns [87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2] ...
	I1030 19:50:00.398119  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2"
	I1030 19:50:00.439813  446887 logs.go:123] Gathering logs for kube-scheduler [0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf] ...
	I1030 19:50:00.439851  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf"
	I1030 19:50:02.978477  446887 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8444/healthz ...
	I1030 19:50:02.983776  446887 api_server.go:279] https://192.168.39.92:8444/healthz returned 200:
	ok
	I1030 19:50:02.984791  446887 api_server.go:141] control plane version: v1.31.2
	I1030 19:50:02.984814  446887 api_server.go:131] duration metric: took 3.957319689s to wait for apiserver health ...
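
The health probe above is an HTTPS GET against /healthz on the default-k8s-diff-port apiserver (port 8444); the response body is the literal string "ok" on success. An equivalent manual probe, using the address taken from the log:

    curl -sk -o /dev/null -w 'status=%{http_code}\n' https://192.168.39.92:8444/healthz
    curl -sk https://192.168.39.92:8444/healthz   # prints "ok" when healthy
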
	I1030 19:50:02.984822  446887 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 19:50:02.984844  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:50:02.984902  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:50:03.024715  446887 cri.go:89] found id: "549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f"
	I1030 19:50:03.024745  446887 cri.go:89] found id: ""
	I1030 19:50:03.024754  446887 logs.go:282] 1 containers: [549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f]
	I1030 19:50:03.024820  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.029121  446887 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:50:03.029188  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:50:03.064462  446887 cri.go:89] found id: "a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5"
	I1030 19:50:03.064489  446887 cri.go:89] found id: ""
	I1030 19:50:03.064500  446887 logs.go:282] 1 containers: [a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5]
	I1030 19:50:03.064564  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.068587  446887 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:50:03.068665  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:50:03.106880  446887 cri.go:89] found id: "87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2"
	I1030 19:50:03.106902  446887 cri.go:89] found id: ""
	I1030 19:50:03.106910  446887 logs.go:282] 1 containers: [87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2]
	I1030 19:50:03.106978  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.111313  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:50:03.111388  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:50:03.155761  446887 cri.go:89] found id: "0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf"
	I1030 19:50:03.155791  446887 cri.go:89] found id: ""
	I1030 19:50:03.155801  446887 logs.go:282] 1 containers: [0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf]
	I1030 19:50:03.155864  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.160616  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:50:03.160686  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:50:03.199028  446887 cri.go:89] found id: "2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6"
	I1030 19:50:03.199063  446887 cri.go:89] found id: ""
	I1030 19:50:03.199074  446887 logs.go:282] 1 containers: [2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6]
	I1030 19:50:03.199149  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.203348  446887 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:50:03.203414  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:50:03.257739  446887 cri.go:89] found id: "ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34"
	I1030 19:50:03.257769  446887 cri.go:89] found id: ""
	I1030 19:50:03.257780  446887 logs.go:282] 1 containers: [ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34]
	I1030 19:50:03.257845  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.263357  446887 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:50:03.263417  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:50:03.309752  446887 cri.go:89] found id: ""
	I1030 19:50:03.309779  446887 logs.go:282] 0 containers: []
	W1030 19:50:03.309787  446887 logs.go:284] No container was found matching "kindnet"
	I1030 19:50:03.309793  446887 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1030 19:50:03.309843  446887 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1030 19:50:03.351570  446887 cri.go:89] found id: "60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034"
	I1030 19:50:03.351593  446887 cri.go:89] found id: "8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6"
	I1030 19:50:03.351597  446887 cri.go:89] found id: ""
	I1030 19:50:03.351605  446887 logs.go:282] 2 containers: [60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034 8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6]
	I1030 19:50:03.351656  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.364414  446887 ssh_runner.go:195] Run: which crictl
	I1030 19:50:03.369070  446887 logs.go:123] Gathering logs for dmesg ...
	I1030 19:50:03.369097  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:50:03.385129  446887 logs.go:123] Gathering logs for kube-apiserver [549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f] ...
	I1030 19:50:03.385161  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 549c7d9c0a8b587ac1ec1a617d6952961121f540d8c897bec4c5a994c324c60f"
	I1030 19:50:01.833117  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:04.334645  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:02.170623  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:50:02.184885  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:50:02.184975  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:50:02.223811  447486 cri.go:89] found id: ""
	I1030 19:50:02.223841  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.223849  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:50:02.223856  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:50:02.223908  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:50:02.260454  447486 cri.go:89] found id: ""
	I1030 19:50:02.260481  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.260491  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:50:02.260497  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:50:02.260554  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:50:02.296542  447486 cri.go:89] found id: ""
	I1030 19:50:02.296569  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.296577  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:50:02.296583  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:50:02.296631  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:50:02.332168  447486 cri.go:89] found id: ""
	I1030 19:50:02.332199  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.332211  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:50:02.332219  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:50:02.332287  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:50:02.366539  447486 cri.go:89] found id: ""
	I1030 19:50:02.366575  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.366586  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:50:02.366595  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:50:02.366659  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:50:02.401859  447486 cri.go:89] found id: ""
	I1030 19:50:02.401894  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.401915  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:50:02.401923  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:50:02.401991  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:50:02.446061  447486 cri.go:89] found id: ""
	I1030 19:50:02.446097  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.446108  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:50:02.446116  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:50:02.446181  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:50:02.488233  447486 cri.go:89] found id: ""
	I1030 19:50:02.488257  447486 logs.go:282] 0 containers: []
	W1030 19:50:02.488265  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:50:02.488274  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:50:02.488294  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:50:02.544517  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:50:02.544554  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:50:02.558143  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:02.558179  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:50:02.628679  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:50:02.628706  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:50:02.628723  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:02.710246  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:50:02.710293  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:50:05.254846  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:50:05.269536  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:50:05.269599  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:50:05.303724  447486 cri.go:89] found id: ""
	I1030 19:50:05.303753  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.303761  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:50:05.303767  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:50:05.303819  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:50:05.339268  447486 cri.go:89] found id: ""
	I1030 19:50:05.339301  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.339322  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:50:05.339330  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:50:05.339405  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:50:05.375892  447486 cri.go:89] found id: ""
	I1030 19:50:05.375923  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.375930  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:50:05.375936  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:50:05.375988  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:50:05.413197  447486 cri.go:89] found id: ""
	I1030 19:50:05.413232  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.413243  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:50:05.413252  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:50:05.413329  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:50:05.452095  447486 cri.go:89] found id: ""
	I1030 19:50:05.452122  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.452130  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:50:05.452137  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:50:05.452193  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:50:05.490694  447486 cri.go:89] found id: ""
	I1030 19:50:05.490731  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.490744  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:50:05.490753  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:50:05.490808  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:50:05.523961  447486 cri.go:89] found id: ""
	I1030 19:50:05.523992  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.524001  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:50:05.524008  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:50:05.524060  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:50:05.558631  447486 cri.go:89] found id: ""
	I1030 19:50:05.558664  447486 logs.go:282] 0 containers: []
	W1030 19:50:05.558673  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:50:05.558684  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:50:05.558699  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:50:05.596929  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:50:05.596958  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:50:05.647294  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:50:05.647332  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:50:05.661349  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:05.661377  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:50:05.730268  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:50:05.730299  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:50:05.730323  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:03.434675  446887 logs.go:123] Gathering logs for coredns [87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2] ...
	I1030 19:50:03.434708  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87e42814a8c59cdc3912b2d5cc5ac6f60085b8a83588404ffa48c022f47381e2"
	I1030 19:50:03.474767  446887 logs.go:123] Gathering logs for storage-provisioner [8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6] ...
	I1030 19:50:03.474803  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bb328b44b95ed798f0590c7410d815414680d6126b118c75fbb779e21720fd6"
	I1030 19:50:03.510301  446887 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:50:03.510331  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:03.887871  446887 logs.go:123] Gathering logs for storage-provisioner [60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034] ...
	I1030 19:50:03.887912  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60f936bfa2bb366c2191e998f826203ef8aa68d3bb59bb111f16c94a88524034"
	I1030 19:50:03.930529  446887 logs.go:123] Gathering logs for container status ...
	I1030 19:50:03.930563  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:50:03.971064  446887 logs.go:123] Gathering logs for kubelet ...
	I1030 19:50:03.971102  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:50:04.040593  446887 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:04.040632  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:50:04.157377  446887 logs.go:123] Gathering logs for etcd [a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5] ...
	I1030 19:50:04.157418  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1c527b45070ac60216c46e851f2cb8f6a5398db4bedcb34297e21a6d593b2e5"
	I1030 19:50:04.205779  446887 logs.go:123] Gathering logs for kube-scheduler [0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf] ...
	I1030 19:50:04.205816  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b3881e5bd4429dc1198b838d59bb8a6fe7d5bba1dfcc2d9b64f3766d94658cf"
	I1030 19:50:04.251434  446887 logs.go:123] Gathering logs for kube-proxy [2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6] ...
	I1030 19:50:04.251470  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ce5d5edb001872daf62a1a7790765a46693f231f60739341dbfeb27084a57e6"
	I1030 19:50:04.288713  446887 logs.go:123] Gathering logs for kube-controller-manager [ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34] ...
	I1030 19:50:04.288747  446887 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef19f5c9edef46304b3c65600a599692dc210e0a2a2d3df5ec9873dfbd716f34"
	I1030 19:50:06.849298  446887 system_pods.go:59] 8 kube-system pods found
	I1030 19:50:06.849329  446887 system_pods.go:61] "coredns-7c65d6cfc9-9w8m8" [d285a845-e17e-4b87-837b-167dbd29f090] Running
	I1030 19:50:06.849334  446887 system_pods.go:61] "etcd-default-k8s-diff-port-768989" [400eedbb-ea1d-47a7-b342-ad29c8851552] Running
	I1030 19:50:06.849340  446887 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-768989" [40389878-99e8-4ae1-899b-a61fc3b7100b] Running
	I1030 19:50:06.849352  446887 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-768989" [4d4ca5b4-d36d-4f69-9712-b7544c4c9a43] Running
	I1030 19:50:06.849358  446887 system_pods.go:61] "kube-proxy-tsr5q" [60ad5830-1638-4209-a2b3-ef78b1df3d34] Running
	I1030 19:50:06.849367  446887 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-768989" [af4c921c-9ea2-4bf3-84c2-8748c3c210bb] Running
	I1030 19:50:06.849373  446887 system_pods.go:61] "metrics-server-6867b74b74-t85rd" [8e162c99-2a94-4340-abe9-f1b312980444] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:50:06.849377  446887 system_pods.go:61] "storage-provisioner" [76805df9-1fbf-468d-a909-3b7c4f889e11] Running
	I1030 19:50:06.849384  446887 system_pods.go:74] duration metric: took 3.864557334s to wait for pod list to return data ...
	I1030 19:50:06.849394  446887 default_sa.go:34] waiting for default service account to be created ...
	I1030 19:50:06.852015  446887 default_sa.go:45] found service account: "default"
	I1030 19:50:06.852037  446887 default_sa.go:55] duration metric: took 2.63686ms for default service account to be created ...
	I1030 19:50:06.852046  446887 system_pods.go:116] waiting for k8s-apps to be running ...
	I1030 19:50:06.856920  446887 system_pods.go:86] 8 kube-system pods found
	I1030 19:50:06.856945  446887 system_pods.go:89] "coredns-7c65d6cfc9-9w8m8" [d285a845-e17e-4b87-837b-167dbd29f090] Running
	I1030 19:50:06.856953  446887 system_pods.go:89] "etcd-default-k8s-diff-port-768989" [400eedbb-ea1d-47a7-b342-ad29c8851552] Running
	I1030 19:50:06.856959  446887 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-768989" [40389878-99e8-4ae1-899b-a61fc3b7100b] Running
	I1030 19:50:06.856966  446887 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-768989" [4d4ca5b4-d36d-4f69-9712-b7544c4c9a43] Running
	I1030 19:50:06.856972  446887 system_pods.go:89] "kube-proxy-tsr5q" [60ad5830-1638-4209-a2b3-ef78b1df3d34] Running
	I1030 19:50:06.856979  446887 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-768989" [af4c921c-9ea2-4bf3-84c2-8748c3c210bb] Running
	I1030 19:50:06.856996  446887 system_pods.go:89] "metrics-server-6867b74b74-t85rd" [8e162c99-2a94-4340-abe9-f1b312980444] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:50:06.857005  446887 system_pods.go:89] "storage-provisioner" [76805df9-1fbf-468d-a909-3b7c4f889e11] Running
	I1030 19:50:06.857015  446887 system_pods.go:126] duration metric: took 4.962745ms to wait for k8s-apps to be running ...
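
Everything in kube-system is Running here except metrics-server-6867b74b74-t85rd, which stays Pending with its container not ready; that is the pod the earlier 4-minute wait timed out on. A manual follow-up on that pod could look like this (sketch; whether any logs exist depends on whether the container ever started):

    kubectl --context default-k8s-diff-port-768989 -n kube-system \
      describe pod metrics-server-6867b74b74-t85rd
    kubectl --context default-k8s-diff-port-768989 -n kube-system \
      logs metrics-server-6867b74b74-t85rd --all-containers --tail=50
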
	I1030 19:50:06.857025  446887 system_svc.go:44] waiting for kubelet service to be running ....
	I1030 19:50:06.857086  446887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 19:50:06.874176  446887 system_svc.go:56] duration metric: took 17.144628ms WaitForService to wait for kubelet
	I1030 19:50:06.874206  446887 kubeadm.go:582] duration metric: took 4m25.770253397s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 19:50:06.874230  446887 node_conditions.go:102] verifying NodePressure condition ...
	I1030 19:50:06.876962  446887 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 19:50:06.876987  446887 node_conditions.go:123] node cpu capacity is 2
	I1030 19:50:06.877004  446887 node_conditions.go:105] duration metric: took 2.768174ms to run NodePressure ...
	I1030 19:50:06.877025  446887 start.go:241] waiting for startup goroutines ...
	I1030 19:50:06.877034  446887 start.go:246] waiting for cluster config update ...
	I1030 19:50:06.877070  446887 start.go:255] writing updated cluster config ...
	I1030 19:50:06.877355  446887 ssh_runner.go:195] Run: rm -f paused
	I1030 19:50:06.927147  446887 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1030 19:50:06.929103  446887 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-768989" cluster and "default" namespace by default
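
At this point the profile has finished starting and kubectl is pointed at it, even though metrics-server never went Ready. A quick sanity check against the freshly configured context (a sketch, not part of the test output):

    kubectl config current-context
    kubectl --context default-k8s-diff-port-768989 get nodes
    kubectl --context default-k8s-diff-port-768989 -n kube-system get pods
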
	I1030 19:50:04.981923  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:06.982630  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:06.834029  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:08.834616  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:08.312167  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:50:08.327121  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:50:08.327206  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:50:08.364871  447486 cri.go:89] found id: ""
	I1030 19:50:08.364905  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.364916  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:50:08.364924  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:50:08.364982  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:50:08.399179  447486 cri.go:89] found id: ""
	I1030 19:50:08.399215  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.399225  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:50:08.399231  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:50:08.399286  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:50:08.434308  447486 cri.go:89] found id: ""
	I1030 19:50:08.434340  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.434350  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:50:08.434356  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:50:08.434409  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:50:08.477152  447486 cri.go:89] found id: ""
	I1030 19:50:08.477184  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.477193  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:50:08.477204  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:50:08.477274  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:50:08.513678  447486 cri.go:89] found id: ""
	I1030 19:50:08.513706  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.513716  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:50:08.513725  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:50:08.513789  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:50:08.551427  447486 cri.go:89] found id: ""
	I1030 19:50:08.551459  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.551478  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:50:08.551485  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:50:08.551550  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:50:08.584224  447486 cri.go:89] found id: ""
	I1030 19:50:08.584260  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.584272  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:50:08.584282  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:50:08.584351  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:50:08.617603  447486 cri.go:89] found id: ""
	I1030 19:50:08.617638  447486 logs.go:282] 0 containers: []
	W1030 19:50:08.617649  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:50:08.617660  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:08.617674  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:50:08.694201  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:50:08.694229  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:50:08.694247  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:08.775457  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:50:08.775500  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:50:08.816452  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:50:08.816496  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:50:08.868077  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:50:08.868114  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:50:11.383130  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:50:11.397672  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:50:11.397758  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:50:11.431923  447486 cri.go:89] found id: ""
	I1030 19:50:11.431959  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.431971  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:50:11.431980  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:50:11.432050  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:50:11.466959  447486 cri.go:89] found id: ""
	I1030 19:50:11.466996  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.467009  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:50:11.467018  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:50:11.467093  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:50:11.506399  447486 cri.go:89] found id: ""
	I1030 19:50:11.506425  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.506437  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:50:11.506444  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:50:11.506529  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:50:11.538606  447486 cri.go:89] found id: ""
	I1030 19:50:11.538635  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.538643  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:50:11.538649  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:50:11.538700  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:50:11.573265  447486 cri.go:89] found id: ""
	I1030 19:50:11.573296  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.573304  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:50:11.573310  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:50:11.573364  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:50:11.608522  447486 cri.go:89] found id: ""
	I1030 19:50:11.608549  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.608558  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:50:11.608569  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:50:11.608629  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:50:11.639758  447486 cri.go:89] found id: ""
	I1030 19:50:11.639784  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.639792  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:50:11.639797  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:50:11.639846  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:50:11.673381  447486 cri.go:89] found id: ""
	I1030 19:50:11.673414  447486 logs.go:282] 0 containers: []
	W1030 19:50:11.673426  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
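	The eight probes above are the same per-component container check repeated for each control-plane piece. Condensed into a single loop using the exact command shown in the log, the sweep amounts to:

	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	              kube-controller-manager kindnet kubernetes-dashboard; do
	    sudo crictl ps -a --quiet --name="${name}"   # empty output means "not found"
	  done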
	I1030 19:50:11.673439  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:50:11.673454  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:50:11.727368  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:50:11.727414  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:50:11.741267  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:11.741301  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:50:09.481159  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:11.483339  446965 pod_ready.go:103] pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:11.334468  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:13.832615  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	W1030 19:50:11.808126  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:50:11.808158  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:50:11.808174  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:11.888676  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:50:11.888713  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:50:14.431637  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:50:14.445315  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:50:14.445392  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:50:14.482059  447486 cri.go:89] found id: ""
	I1030 19:50:14.482097  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.482110  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:50:14.482118  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:50:14.482186  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:50:14.520802  447486 cri.go:89] found id: ""
	I1030 19:50:14.520834  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.520843  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:50:14.520849  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:50:14.520900  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:50:14.559965  447486 cri.go:89] found id: ""
	I1030 19:50:14.559996  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.560006  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:50:14.560012  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:50:14.560062  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:50:14.601831  447486 cri.go:89] found id: ""
	I1030 19:50:14.601865  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.601875  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:50:14.601881  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:50:14.601932  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:50:14.635307  447486 cri.go:89] found id: ""
	I1030 19:50:14.635339  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.635348  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:50:14.635355  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:50:14.635418  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:50:14.668618  447486 cri.go:89] found id: ""
	I1030 19:50:14.668648  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.668657  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:50:14.668664  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:50:14.668726  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:50:14.702597  447486 cri.go:89] found id: ""
	I1030 19:50:14.702633  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.702644  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:50:14.702653  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:50:14.702715  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:50:14.736860  447486 cri.go:89] found id: ""
	I1030 19:50:14.736899  447486 logs.go:282] 0 containers: []
	W1030 19:50:14.736911  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:50:14.736925  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:50:14.736942  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:50:14.822015  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:50:14.822060  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:50:14.860153  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:50:14.860195  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:50:14.912230  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:50:14.912269  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:50:14.927032  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:50:14.927067  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:50:14.994401  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1030 19:50:13.975124  446965 pod_ready.go:82] duration metric: took 4m0.000158179s for pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace to be "Ready" ...
	E1030 19:50:13.975173  446965 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-4x9t6" in "kube-system" namespace to be "Ready" (will not retry!)
	I1030 19:50:13.975201  446965 pod_ready.go:39] duration metric: took 4m14.686087419s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:50:13.975238  446965 kubeadm.go:597] duration metric: took 4m22.157012059s to restartPrimaryControlPlane
	W1030 19:50:13.975313  446965 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1030 19:50:13.975366  446965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1030 19:50:15.833986  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:17.835468  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:17.494865  447486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:50:17.509934  447486 kubeadm.go:597] duration metric: took 4m3.074434895s to restartPrimaryControlPlane
	W1030 19:50:17.510016  447486 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1030 19:50:17.510051  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1030 19:50:18.496415  447486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 19:50:18.512328  447486 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 19:50:18.522293  447486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:50:18.532752  447486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:50:18.532772  447486 kubeadm.go:157] found existing configuration files:
	
	I1030 19:50:18.532823  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 19:50:18.542501  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:50:18.542560  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:50:18.552660  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 19:50:18.562585  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:50:18.562649  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:50:18.572321  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 19:50:18.581633  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:50:18.581689  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:50:18.592770  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 19:50:18.602414  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:50:18.602477  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
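	The grep/rm sequence above is a simple stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and removed otherwise. A condensed sketch of the same pattern (the paths and endpoint are the ones shown in the log):

	  for f in admin kubelet controller-manager scheduler; do
	    sudo grep -q 'https://control-plane.minikube.internal:8443' \
	      "/etc/kubernetes/${f}.conf" || sudo rm -f "/etc/kubernetes/${f}.conf"
	  done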
	I1030 19:50:18.612334  447486 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1030 19:50:18.844753  447486 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1030 19:50:20.333715  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:22.832817  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:24.833349  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:27.332723  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:29.335009  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:31.832584  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:33.834506  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:36.333902  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:38.833159  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:40.157555  446965 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.182163055s)
	I1030 19:50:40.157637  446965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 19:50:40.174413  446965 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 19:50:40.184817  446965 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:50:40.195446  446965 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:50:40.195475  446965 kubeadm.go:157] found existing configuration files:
	
	I1030 19:50:40.195527  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 19:50:40.205509  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:50:40.205575  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:50:40.217343  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 19:50:40.227666  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:50:40.227729  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:50:40.237594  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 19:50:40.247151  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:50:40.247209  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:50:40.256854  446965 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 19:50:40.266306  446965 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:50:40.266379  446965 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1030 19:50:40.276409  446965 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1030 19:50:40.322080  446965 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1030 19:50:40.322174  446965 kubeadm.go:310] [preflight] Running pre-flight checks
	I1030 19:50:40.433056  446965 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1030 19:50:40.433251  446965 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1030 19:50:40.433390  446965 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1030 19:50:40.445085  446965 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1030 19:50:40.447192  446965 out.go:235]   - Generating certificates and keys ...
	I1030 19:50:40.447301  446965 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1030 19:50:40.447395  446965 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1030 19:50:40.447512  446965 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1030 19:50:40.447600  446965 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1030 19:50:40.447735  446965 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1030 19:50:40.447825  446965 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1030 19:50:40.447912  446965 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1030 19:50:40.447999  446965 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1030 19:50:40.448108  446965 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1030 19:50:40.448208  446965 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1030 19:50:40.448266  446965 kubeadm.go:310] [certs] Using the existing "sa" key
	I1030 19:50:40.448345  446965 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1030 19:50:40.590735  446965 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1030 19:50:40.714139  446965 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1030 19:50:40.808334  446965 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1030 19:50:40.940687  446965 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1030 19:50:41.085266  446965 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1030 19:50:41.085840  446965 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1030 19:50:41.088415  446965 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1030 19:50:41.090229  446965 out.go:235]   - Booting up control plane ...
	I1030 19:50:41.090349  446965 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1030 19:50:41.090466  446965 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1030 19:50:41.090573  446965 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1030 19:50:41.112262  446965 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1030 19:50:41.118809  446965 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1030 19:50:41.118919  446965 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1030 19:50:41.243915  446965 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1030 19:50:41.244093  446965 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1030 19:50:41.745362  446965 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.630697ms
	I1030 19:50:41.745513  446965 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1030 19:50:40.834005  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:42.834286  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:46.748431  446965 kubeadm.go:310] [api-check] The API server is healthy after 5.001587935s
	I1030 19:50:46.762271  446965 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1030 19:50:46.781785  446965 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1030 19:50:46.806338  446965 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1030 19:50:46.806613  446965 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-042402 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1030 19:50:46.819762  446965 kubeadm.go:310] [bootstrap-token] Using token: k711fn.1we2gia9o31jm3ip
	I1030 19:50:46.821026  446965 out.go:235]   - Configuring RBAC rules ...
	I1030 19:50:46.821137  446965 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1030 19:50:46.827537  446965 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1030 19:50:46.836653  446965 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1030 19:50:46.844891  446965 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1030 19:50:46.848423  446965 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1030 19:50:46.851674  446965 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1030 19:50:47.157946  446965 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1030 19:50:47.615774  446965 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1030 19:50:48.154429  446965 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1030 19:50:48.159547  446965 kubeadm.go:310] 
	I1030 19:50:48.159636  446965 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1030 19:50:48.159648  446965 kubeadm.go:310] 
	I1030 19:50:48.159762  446965 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1030 19:50:48.159776  446965 kubeadm.go:310] 
	I1030 19:50:48.159806  446965 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1030 19:50:48.159880  446965 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1030 19:50:48.159934  446965 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1030 19:50:48.159944  446965 kubeadm.go:310] 
	I1030 19:50:48.160029  446965 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1030 19:50:48.160040  446965 kubeadm.go:310] 
	I1030 19:50:48.160123  446965 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1030 19:50:48.160154  446965 kubeadm.go:310] 
	I1030 19:50:48.160242  446965 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1030 19:50:48.160351  446965 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1030 19:50:48.160440  446965 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1030 19:50:48.160450  446965 kubeadm.go:310] 
	I1030 19:50:48.160570  446965 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1030 19:50:48.160652  446965 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1030 19:50:48.160660  446965 kubeadm.go:310] 
	I1030 19:50:48.160729  446965 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token k711fn.1we2gia9o31jm3ip \
	I1030 19:50:48.160818  446965 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b2143b9018fc79be6f35b8981f64b262448bd09f7227405c8dafa29481e81491 \
	I1030 19:50:48.160838  446965 kubeadm.go:310] 	--control-plane 
	I1030 19:50:48.160846  446965 kubeadm.go:310] 
	I1030 19:50:48.160943  446965 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1030 19:50:48.160955  446965 kubeadm.go:310] 
	I1030 19:50:48.161065  446965 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token k711fn.1we2gia9o31jm3ip \
	I1030 19:50:48.161205  446965 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b2143b9018fc79be6f35b8981f64b262448bd09f7227405c8dafa29481e81491 
	I1030 19:50:48.162302  446965 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1030 19:50:48.162390  446965 cni.go:84] Creating CNI manager for ""
	I1030 19:50:48.162408  446965 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 19:50:48.164041  446965 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1030 19:50:45.333255  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:47.334686  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:49.832993  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:48.165318  446965 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1030 19:50:48.176702  446965 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
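	The 496-byte file copied above is the bridge CNI configuration; its exact contents are not reproduced in this log. As a rough illustration only (not minikube's actual file), a generic bridge-plus-portmap conflist of that kind typically looks like:

	  sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	  {
	    "cniVersion": "0.3.1",
	    "name": "bridge",
	    "plugins": [
	      { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
	        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	      { "type": "portmap", "capabilities": { "portMappings": true } }
	    ]
	  }
	  EOF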
	I1030 19:50:48.199681  446965 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1030 19:50:48.199776  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:48.199840  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-042402 minikube.k8s.io/updated_at=2024_10_30T19_50_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0 minikube.k8s.io/name=embed-certs-042402 minikube.k8s.io/primary=true
	I1030 19:50:48.226617  446965 ops.go:34] apiserver oom_adj: -16
	I1030 19:50:48.404620  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:48.905366  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:49.405663  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:49.904925  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:50.405082  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:50.905099  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:51.404860  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:51.905534  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:52.405432  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:52.905289  446965 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 19:50:53.010770  446965 kubeadm.go:1113] duration metric: took 4.811061462s to wait for elevateKubeSystemPrivileges
	I1030 19:50:53.010818  446965 kubeadm.go:394] duration metric: took 5m1.251362756s to StartCluster
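	The repeated "kubectl get sa default" calls above are a readiness poll behind elevateKubeSystemPrivileges: the run retries until the "default" ServiceAccount exists, which signals that RBAC bootstrapping is usable. Condensed, with the same binary and kubeconfig paths as in the log:

	  until sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5   # retry interval is illustrative; the log shows roughly 0.5s spacing
	  done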
	I1030 19:50:53.010849  446965 settings.go:142] acquiring lock: {Name:mk3778f95b8c1775512fd8244c173f81fde2f8a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:50:53.010948  446965 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 19:50:53.012997  446965 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19883-381834/kubeconfig: {Name:mkad9ac9beb9742fc1a420e6d4412db8b65962e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 19:50:53.013284  446965 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.235 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 19:50:53.013411  446965 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1030 19:50:53.013518  446965 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-042402"
	I1030 19:50:53.013539  446965 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-042402"
	I1030 19:50:53.013539  446965 config.go:182] Loaded profile config "embed-certs-042402": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	W1030 19:50:53.013550  446965 addons.go:243] addon storage-provisioner should already be in state true
	I1030 19:50:53.013600  446965 host.go:66] Checking if "embed-certs-042402" exists ...
	I1030 19:50:53.013546  446965 addons.go:69] Setting default-storageclass=true in profile "embed-certs-042402"
	I1030 19:50:53.013605  446965 addons.go:69] Setting metrics-server=true in profile "embed-certs-042402"
	I1030 19:50:53.013635  446965 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-042402"
	I1030 19:50:53.013642  446965 addons.go:234] Setting addon metrics-server=true in "embed-certs-042402"
	W1030 19:50:53.013650  446965 addons.go:243] addon metrics-server should already be in state true
	I1030 19:50:53.013675  446965 host.go:66] Checking if "embed-certs-042402" exists ...
	I1030 19:50:53.013947  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:50:53.014005  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:50:53.014010  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:50:53.014022  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:50:53.014058  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:50:53.014112  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:50:53.015033  446965 out.go:177] * Verifying Kubernetes components...
	I1030 19:50:53.016527  446965 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 19:50:53.030033  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33163
	I1030 19:50:53.030290  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40505
	I1030 19:50:53.030618  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:50:53.030733  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:50:53.031192  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:50:53.031209  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:50:53.031342  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:50:53.031356  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:50:53.031577  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:50:53.031773  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetState
	I1030 19:50:53.031801  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:50:53.032289  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43131
	I1030 19:50:53.032910  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:50:53.032953  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:50:53.033170  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:50:53.033684  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:50:53.033699  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:50:53.035082  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:50:53.035104  446965 addons.go:234] Setting addon default-storageclass=true in "embed-certs-042402"
	W1030 19:50:53.035124  446965 addons.go:243] addon default-storageclass should already be in state true
	I1030 19:50:53.035158  446965 host.go:66] Checking if "embed-certs-042402" exists ...
	I1030 19:50:53.035461  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:50:53.035492  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:50:53.036666  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:50:53.036697  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:50:53.054685  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41111
	I1030 19:50:53.055271  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:50:53.055621  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33133
	I1030 19:50:53.055762  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:50:53.055779  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:50:53.056073  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:50:53.056192  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:50:53.056410  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetState
	I1030 19:50:53.056665  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:50:53.056688  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:50:53.057099  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:50:53.057693  446965 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19883-381834/.minikube/bin/docker-machine-driver-kvm2
	I1030 19:50:53.057741  446965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:50:53.058427  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:50:53.058756  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38161
	I1030 19:50:53.059684  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:50:53.060230  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:50:53.060253  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:50:53.060597  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:50:53.060806  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetState
	I1030 19:50:53.060880  446965 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 19:50:53.062367  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:50:53.062469  446965 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 19:50:53.062506  446965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1030 19:50:53.062526  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:50:53.063955  446965 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1030 19:50:53.065131  446965 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1030 19:50:53.065153  446965 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1030 19:50:53.065173  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:50:53.065987  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:50:53.066607  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:50:53.066640  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:50:53.066723  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:50:53.066956  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:50:53.067102  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:50:53.067254  446965 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa Username:docker}
	I1030 19:50:53.068475  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:50:53.068916  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:50:53.068939  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:50:53.069098  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:50:53.069288  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:50:53.069457  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:50:53.069625  446965 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa Username:docker}
	I1030 19:50:53.075920  446965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33291
	I1030 19:50:53.076341  446965 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:50:53.076758  446965 main.go:141] libmachine: Using API Version  1
	I1030 19:50:53.076779  446965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:50:53.077042  446965 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:50:53.077238  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetState
	I1030 19:50:53.078809  446965 main.go:141] libmachine: (embed-certs-042402) Calling .DriverName
	I1030 19:50:53.079065  446965 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1030 19:50:53.079088  446965 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1030 19:50:53.079105  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHHostname
	I1030 19:50:53.081873  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:50:53.082309  446965 main.go:141] libmachine: (embed-certs-042402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:aa:58", ip: ""} in network mk-embed-certs-042402: {Iface:virbr3 ExpiryTime:2024-10-30 20:45:37 +0000 UTC Type:0 Mac:52:54:00:61:aa:58 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:embed-certs-042402 Clientid:01:52:54:00:61:aa:58}
	I1030 19:50:53.082339  446965 main.go:141] libmachine: (embed-certs-042402) DBG | domain embed-certs-042402 has defined IP address 192.168.61.235 and MAC address 52:54:00:61:aa:58 in network mk-embed-certs-042402
	I1030 19:50:53.082515  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHPort
	I1030 19:50:53.082705  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHKeyPath
	I1030 19:50:53.082863  446965 main.go:141] libmachine: (embed-certs-042402) Calling .GetSSHUsername
	I1030 19:50:53.083061  446965 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/embed-certs-042402/id_rsa Username:docker}
	I1030 19:50:53.274313  446965 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 19:50:53.305281  446965 node_ready.go:35] waiting up to 6m0s for node "embed-certs-042402" to be "Ready" ...
	I1030 19:50:53.313184  446965 node_ready.go:49] node "embed-certs-042402" has status "Ready":"True"
	I1030 19:50:53.313217  446965 node_ready.go:38] duration metric: took 7.892097ms for node "embed-certs-042402" to be "Ready" ...
	I1030 19:50:53.313230  446965 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 19:50:53.321668  446965 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hvg4g" in "kube-system" namespace to be "Ready" ...
	I1030 19:50:53.406960  446965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 19:50:53.427287  446965 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1030 19:50:53.427324  446965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1030 19:50:53.475089  446965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1030 19:50:53.485983  446965 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1030 19:50:53.486013  446965 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1030 19:50:53.570871  446965 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1030 19:50:53.570904  446965 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1030 19:50:53.670898  446965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1030 19:50:54.545328  446965 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.138329529s)
	I1030 19:50:54.545384  446965 main.go:141] libmachine: Making call to close driver server
	I1030 19:50:54.545383  446965 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.070259573s)
	I1030 19:50:54.545399  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Close
	I1030 19:50:54.545426  446965 main.go:141] libmachine: Making call to close driver server
	I1030 19:50:54.545445  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Close
	I1030 19:50:54.545720  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Closing plugin on server side
	I1030 19:50:54.545732  446965 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:50:54.545748  446965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:50:54.545757  446965 main.go:141] libmachine: Making call to close driver server
	I1030 19:50:54.545761  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Closing plugin on server side
	I1030 19:50:54.545765  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Close
	I1030 19:50:54.545787  446965 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:50:54.545794  446965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:50:54.545802  446965 main.go:141] libmachine: Making call to close driver server
	I1030 19:50:54.545808  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Close
	I1030 19:50:54.546139  446965 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:50:54.546162  446965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:50:54.546465  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Closing plugin on server side
	I1030 19:50:54.546468  446965 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:50:54.546507  446965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:50:54.576380  446965 main.go:141] libmachine: Making call to close driver server
	I1030 19:50:54.576408  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Close
	I1030 19:50:54.576738  446965 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:50:54.576787  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Closing plugin on server side
	I1030 19:50:54.576804  446965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:50:54.703670  446965 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.032714873s)
	I1030 19:50:54.703724  446965 main.go:141] libmachine: Making call to close driver server
	I1030 19:50:54.703736  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Close
	I1030 19:50:54.704025  446965 main.go:141] libmachine: (embed-certs-042402) DBG | Closing plugin on server side
	I1030 19:50:54.704059  446965 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:50:54.704076  446965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:50:54.704085  446965 main.go:141] libmachine: Making call to close driver server
	I1030 19:50:54.704104  446965 main.go:141] libmachine: (embed-certs-042402) Calling .Close
	I1030 19:50:54.704350  446965 main.go:141] libmachine: Successfully made call to close driver server
	I1030 19:50:54.704362  446965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 19:50:54.704374  446965 addons.go:475] Verifying addon metrics-server=true in "embed-certs-042402"
	I1030 19:50:54.706330  446965 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1030 19:50:51.833654  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:54.333879  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:54.707723  446965 addons.go:510] duration metric: took 1.694322523s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1030 19:50:55.328470  446965 pod_ready.go:103] pod "coredns-7c65d6cfc9-hvg4g" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:57.828224  446965 pod_ready.go:103] pod "coredns-7c65d6cfc9-hvg4g" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:56.832967  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:58.833284  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:50:59.828636  446965 pod_ready.go:103] pod "coredns-7c65d6cfc9-hvg4g" in "kube-system" namespace has status "Ready":"False"
	I1030 19:51:01.828151  446965 pod_ready.go:93] pod "coredns-7c65d6cfc9-hvg4g" in "kube-system" namespace has status "Ready":"True"
	I1030 19:51:01.828178  446965 pod_ready.go:82] duration metric: took 8.506481998s for pod "coredns-7c65d6cfc9-hvg4g" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:01.828187  446965 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-pzbpd" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:01.833094  446965 pod_ready.go:93] pod "coredns-7c65d6cfc9-pzbpd" in "kube-system" namespace has status "Ready":"True"
	I1030 19:51:01.833121  446965 pod_ready.go:82] duration metric: took 4.926401ms for pod "coredns-7c65d6cfc9-pzbpd" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:01.833133  446965 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:01.837391  446965 pod_ready.go:93] pod "etcd-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:51:01.837410  446965 pod_ready.go:82] duration metric: took 4.27047ms for pod "etcd-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:01.837419  446965 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:02.344200  446965 pod_ready.go:93] pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:51:02.344224  446965 pod_ready.go:82] duration metric: took 506.798667ms for pod "kube-apiserver-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:02.344233  446965 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:02.349020  446965 pod_ready.go:93] pod "kube-controller-manager-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:51:02.349042  446965 pod_ready.go:82] duration metric: took 4.801739ms for pod "kube-controller-manager-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:02.349055  446965 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-m9zwz" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:02.626109  446965 pod_ready.go:93] pod "kube-proxy-m9zwz" in "kube-system" namespace has status "Ready":"True"
	I1030 19:51:02.626137  446965 pod_ready.go:82] duration metric: took 277.074567ms for pod "kube-proxy-m9zwz" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:02.626146  446965 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:03.027456  446965 pod_ready.go:93] pod "kube-scheduler-embed-certs-042402" in "kube-system" namespace has status "Ready":"True"
	I1030 19:51:03.027482  446965 pod_ready.go:82] duration metric: took 401.329277ms for pod "kube-scheduler-embed-certs-042402" in "kube-system" namespace to be "Ready" ...
	I1030 19:51:03.027493  446965 pod_ready.go:39] duration metric: took 9.714247169s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
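The readiness gate summarized above can be reproduced by hand against the same profile. A minimal sketch with kubectl (the context name "embed-certs-042402" is the one this run configures below; the label selectors are the ones listed in the wait summary above, and the 6m timeout is an assumption matching the per-pod wait budget in the log):

    kubectl --context embed-certs-042402 -n kube-system wait --for=condition=ready pod -l k8s-app=kube-dns --timeout=6m
    kubectl --context embed-certs-042402 -n kube-system wait --for=condition=ready pod -l component=kube-apiserver --timeout=6m

Any of the other selectors from the list above (component=etcd, component=kube-controller-manager, k8s-app=kube-proxy, component=kube-scheduler) can be substituted the same way.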
	I1030 19:51:03.027513  446965 api_server.go:52] waiting for apiserver process to appear ...
	I1030 19:51:03.027579  446965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:51:03.043403  446965 api_server.go:72] duration metric: took 10.030078869s to wait for apiserver process to appear ...
	I1030 19:51:03.043431  446965 api_server.go:88] waiting for apiserver healthz status ...
	I1030 19:51:03.043456  446965 api_server.go:253] Checking apiserver healthz at https://192.168.61.235:8443/healthz ...
	I1030 19:51:03.048722  446965 api_server.go:279] https://192.168.61.235:8443/healthz returned 200:
	ok
	I1030 19:51:03.049572  446965 api_server.go:141] control plane version: v1.31.2
	I1030 19:51:03.049595  446965 api_server.go:131] duration metric: took 6.156928ms to wait for apiserver health ...
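The healthz probe above is a plain HTTPS GET against the apiserver endpoint from the log; a rough manual equivalent from the host (the -k flag skips certificate verification, since the cluster uses minikube's own CA):

    curl -sk https://192.168.61.235:8443/healthz
    # expected response body: ok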
	I1030 19:51:03.049603  446965 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 19:51:03.233170  446965 system_pods.go:59] 9 kube-system pods found
	I1030 19:51:03.233205  446965 system_pods.go:61] "coredns-7c65d6cfc9-hvg4g" [f9e7e143-3e12-4c1a-9fb0-6f58a37f8a55] Running
	I1030 19:51:03.233212  446965 system_pods.go:61] "coredns-7c65d6cfc9-pzbpd" [8f486ff4-c665-42ec-9b98-15ea94e0ded8] Running
	I1030 19:51:03.233217  446965 system_pods.go:61] "etcd-embed-certs-042402" [5149c262-698b-46be-9b15-c7b5e93fd3cd] Running
	I1030 19:51:03.233222  446965 system_pods.go:61] "kube-apiserver-embed-certs-042402" [9a6c3f9a-4b89-4e8d-abb2-445399eea3eb] Running
	I1030 19:51:03.233227  446965 system_pods.go:61] "kube-controller-manager-embed-certs-042402" [cce89309-3c5f-4d17-8426-fdb8a02acdb0] Running
	I1030 19:51:03.233231  446965 system_pods.go:61] "kube-proxy-m9zwz" [e7b6fb8b-2287-47c0-b9c8-a3b1c3020894] Running
	I1030 19:51:03.233236  446965 system_pods.go:61] "kube-scheduler-embed-certs-042402" [e408e85c-ac6c-4afb-8391-935b7c579b4f] Running
	I1030 19:51:03.233247  446965 system_pods.go:61] "metrics-server-6867b74b74-6hrq4" [a5bb1778-0a28-4649-a2ac-a5f0e1b810de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:51:03.233255  446965 system_pods.go:61] "storage-provisioner" [729733b2-e703-4e9b-9d05-a2f0fb632149] Running
	I1030 19:51:03.233272  446965 system_pods.go:74] duration metric: took 183.660307ms to wait for pod list to return data ...
	I1030 19:51:03.233287  446965 default_sa.go:34] waiting for default service account to be created ...
	I1030 19:51:03.427520  446965 default_sa.go:45] found service account: "default"
	I1030 19:51:03.427550  446965 default_sa.go:55] duration metric: took 194.254547ms for default service account to be created ...
	I1030 19:51:03.427562  446965 system_pods.go:116] waiting for k8s-apps to be running ...
	I1030 19:51:03.629316  446965 system_pods.go:86] 9 kube-system pods found
	I1030 19:51:03.629351  446965 system_pods.go:89] "coredns-7c65d6cfc9-hvg4g" [f9e7e143-3e12-4c1a-9fb0-6f58a37f8a55] Running
	I1030 19:51:03.629364  446965 system_pods.go:89] "coredns-7c65d6cfc9-pzbpd" [8f486ff4-c665-42ec-9b98-15ea94e0ded8] Running
	I1030 19:51:03.629370  446965 system_pods.go:89] "etcd-embed-certs-042402" [5149c262-698b-46be-9b15-c7b5e93fd3cd] Running
	I1030 19:51:03.629377  446965 system_pods.go:89] "kube-apiserver-embed-certs-042402" [9a6c3f9a-4b89-4e8d-abb2-445399eea3eb] Running
	I1030 19:51:03.629381  446965 system_pods.go:89] "kube-controller-manager-embed-certs-042402" [cce89309-3c5f-4d17-8426-fdb8a02acdb0] Running
	I1030 19:51:03.629386  446965 system_pods.go:89] "kube-proxy-m9zwz" [e7b6fb8b-2287-47c0-b9c8-a3b1c3020894] Running
	I1030 19:51:03.629391  446965 system_pods.go:89] "kube-scheduler-embed-certs-042402" [e408e85c-ac6c-4afb-8391-935b7c579b4f] Running
	I1030 19:51:03.629399  446965 system_pods.go:89] "metrics-server-6867b74b74-6hrq4" [a5bb1778-0a28-4649-a2ac-a5f0e1b810de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:51:03.629405  446965 system_pods.go:89] "storage-provisioner" [729733b2-e703-4e9b-9d05-a2f0fb632149] Running
	I1030 19:51:03.629418  446965 system_pods.go:126] duration metric: took 201.847233ms to wait for k8s-apps to be running ...
	I1030 19:51:03.629432  446965 system_svc.go:44] waiting for kubelet service to be running ....
	I1030 19:51:03.629486  446965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 19:51:03.649120  446965 system_svc.go:56] duration metric: took 19.675022ms WaitForService to wait for kubelet
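The kubelet check above only asks systemd whether the unit is active. A hedged manual equivalent, assuming minikube ssh is used to reach the node of this profile:

    minikube -p embed-certs-042402 ssh "sudo systemctl is-active kubelet"
    # prints "active" and exits 0 when the service is running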
	I1030 19:51:03.649166  446965 kubeadm.go:582] duration metric: took 10.635844977s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 19:51:03.649192  446965 node_conditions.go:102] verifying NodePressure condition ...
	I1030 19:51:03.826763  446965 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 19:51:03.826790  446965 node_conditions.go:123] node cpu capacity is 2
	I1030 19:51:03.826803  446965 node_conditions.go:105] duration metric: took 177.604616ms to run NodePressure ...
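The NodePressure step reads the node's conditions and reported capacity. A manual spot-check with kubectl might look like this (the node name is assumed to equal the single-node profile name):

    kubectl --context embed-certs-042402 get node embed-certs-042402 -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
    kubectl --context embed-certs-042402 get node embed-certs-042402 -o jsonpath='{.status.capacity}'
    # capacity should report cpu: 2 and ephemeral-storage: 17734596Ki, matching the log above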
	I1030 19:51:03.826819  446965 start.go:241] waiting for startup goroutines ...
	I1030 19:51:03.826827  446965 start.go:246] waiting for cluster config update ...
	I1030 19:51:03.826841  446965 start.go:255] writing updated cluster config ...
	I1030 19:51:03.827126  446965 ssh_runner.go:195] Run: rm -f paused
	I1030 19:51:03.877974  446965 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1030 19:51:03.880121  446965 out.go:177] * Done! kubectl is now configured to use "embed-certs-042402" cluster and "default" namespace by default
	I1030 19:51:00.833673  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:51:03.333042  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:51:05.333431  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:51:07.833229  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:51:09.833772  446736 pod_ready.go:103] pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace has status "Ready":"False"
	I1030 19:51:10.833131  446736 pod_ready.go:82] duration metric: took 4m0.006526983s for pod "metrics-server-6867b74b74-72bb5" in "kube-system" namespace to be "Ready" ...
	E1030 19:51:10.833166  446736 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1030 19:51:10.833178  446736 pod_ready.go:39] duration metric: took 4m7.416690025s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
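The failure recorded here is the metrics-server pod never reaching Ready within its 4m0s budget. A usual next step is to inspect that pod directly; a sketch using the pod name from this log (the context name "no-preload-960512" is the one this run configures further below):

    kubectl --context no-preload-960512 -n kube-system describe pod metrics-server-6867b74b74-72bb5
    kubectl --context no-preload-960512 -n kube-system logs metrics-server-6867b74b74-72bb5 --all-containers --tail=50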
	I1030 19:51:10.833200  446736 api_server.go:52] waiting for apiserver process to appear ...
	I1030 19:51:10.833239  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:51:10.833300  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:51:10.884016  446736 cri.go:89] found id: "990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4"
	I1030 19:51:10.884046  446736 cri.go:89] found id: ""
	I1030 19:51:10.884055  446736 logs.go:282] 1 containers: [990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4]
	I1030 19:51:10.884108  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:10.888789  446736 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:51:10.888857  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:51:10.931994  446736 cri.go:89] found id: "ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67"
	I1030 19:51:10.932037  446736 cri.go:89] found id: ""
	I1030 19:51:10.932047  446736 logs.go:282] 1 containers: [ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67]
	I1030 19:51:10.932097  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:10.937113  446736 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:51:10.937181  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:51:10.977951  446736 cri.go:89] found id: "1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd"
	I1030 19:51:10.977982  446736 cri.go:89] found id: ""
	I1030 19:51:10.977993  446736 logs.go:282] 1 containers: [1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd]
	I1030 19:51:10.978050  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:10.982791  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:51:10.982863  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:51:11.021741  446736 cri.go:89] found id: "2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889"
	I1030 19:51:11.021770  446736 cri.go:89] found id: ""
	I1030 19:51:11.021780  446736 logs.go:282] 1 containers: [2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889]
	I1030 19:51:11.021837  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:11.026590  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:51:11.026653  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:51:11.068839  446736 cri.go:89] found id: "0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9"
	I1030 19:51:11.068873  446736 cri.go:89] found id: ""
	I1030 19:51:11.068885  446736 logs.go:282] 1 containers: [0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9]
	I1030 19:51:11.068946  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:11.073103  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:51:11.073171  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:51:11.108404  446736 cri.go:89] found id: "cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c"
	I1030 19:51:11.108432  446736 cri.go:89] found id: ""
	I1030 19:51:11.108443  446736 logs.go:282] 1 containers: [cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c]
	I1030 19:51:11.108506  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:11.112903  446736 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:51:11.112974  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:51:11.153767  446736 cri.go:89] found id: ""
	I1030 19:51:11.153800  446736 logs.go:282] 0 containers: []
	W1030 19:51:11.153812  446736 logs.go:284] No container was found matching "kindnet"
	I1030 19:51:11.153821  446736 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1030 19:51:11.153892  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1030 19:51:11.194649  446736 cri.go:89] found id: "822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4"
	I1030 19:51:11.194681  446736 cri.go:89] found id: "de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef"
	I1030 19:51:11.194687  446736 cri.go:89] found id: ""
	I1030 19:51:11.194697  446736 logs.go:282] 2 containers: [822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4 de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef]
	I1030 19:51:11.194770  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:11.199037  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:11.202957  446736 logs.go:123] Gathering logs for etcd [ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67] ...
	I1030 19:51:11.202984  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67"
	I1030 19:51:11.246187  446736 logs.go:123] Gathering logs for kube-proxy [0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9] ...
	I1030 19:51:11.246220  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9"
	I1030 19:51:11.286608  446736 logs.go:123] Gathering logs for kube-controller-manager [cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c] ...
	I1030 19:51:11.286643  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c"
	I1030 19:51:11.339119  446736 logs.go:123] Gathering logs for storage-provisioner [822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4] ...
	I1030 19:51:11.339157  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4"
	I1030 19:51:11.376624  446736 logs.go:123] Gathering logs for storage-provisioner [de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef] ...
	I1030 19:51:11.376653  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef"
	I1030 19:51:11.411401  446736 logs.go:123] Gathering logs for kubelet ...
	I1030 19:51:11.411431  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:51:11.481668  446736 logs.go:123] Gathering logs for dmesg ...
	I1030 19:51:11.481710  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:51:11.497767  446736 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:51:11.497799  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:51:11.612001  446736 logs.go:123] Gathering logs for kube-apiserver [990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4] ...
	I1030 19:51:11.612034  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4"
	I1030 19:51:11.656553  446736 logs.go:123] Gathering logs for coredns [1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd] ...
	I1030 19:51:11.656589  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd"
	I1030 19:51:11.695387  446736 logs.go:123] Gathering logs for kube-scheduler [2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889] ...
	I1030 19:51:11.695428  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889"
	I1030 19:51:11.732386  446736 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:51:11.732419  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:51:12.217007  446736 logs.go:123] Gathering logs for container status ...
	I1030 19:51:12.217056  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
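The log-gathering pass above resolves each container ID with crictl and then tails its logs; the same two-step sequence can be run manually on the node, for example for the apiserver container found above:

    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo crictl logs --tail 400 990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4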
	I1030 19:51:14.769155  446736 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:51:14.787096  446736 api_server.go:72] duration metric: took 4m17.097569041s to wait for apiserver process to appear ...
	I1030 19:51:14.787128  446736 api_server.go:88] waiting for apiserver healthz status ...
	I1030 19:51:14.787176  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:51:14.787235  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:51:14.823506  446736 cri.go:89] found id: "990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4"
	I1030 19:51:14.823533  446736 cri.go:89] found id: ""
	I1030 19:51:14.823541  446736 logs.go:282] 1 containers: [990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4]
	I1030 19:51:14.823595  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:14.828125  446736 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:51:14.828214  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:51:14.867890  446736 cri.go:89] found id: "ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67"
	I1030 19:51:14.867914  446736 cri.go:89] found id: ""
	I1030 19:51:14.867922  446736 logs.go:282] 1 containers: [ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67]
	I1030 19:51:14.867970  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:14.873213  446736 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:51:14.873283  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:51:14.913068  446736 cri.go:89] found id: "1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd"
	I1030 19:51:14.913103  446736 cri.go:89] found id: ""
	I1030 19:51:14.913114  446736 logs.go:282] 1 containers: [1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd]
	I1030 19:51:14.913179  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:14.918380  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:51:14.918459  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:51:14.956150  446736 cri.go:89] found id: "2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889"
	I1030 19:51:14.956177  446736 cri.go:89] found id: ""
	I1030 19:51:14.956187  446736 logs.go:282] 1 containers: [2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889]
	I1030 19:51:14.956294  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:14.960781  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:51:14.960836  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:51:15.001804  446736 cri.go:89] found id: "0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9"
	I1030 19:51:15.001833  446736 cri.go:89] found id: ""
	I1030 19:51:15.001844  446736 logs.go:282] 1 containers: [0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9]
	I1030 19:51:15.001893  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:15.006341  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:51:15.006401  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:51:15.045202  446736 cri.go:89] found id: "cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c"
	I1030 19:51:15.045236  446736 cri.go:89] found id: ""
	I1030 19:51:15.045247  446736 logs.go:282] 1 containers: [cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c]
	I1030 19:51:15.045326  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:15.051967  446736 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:51:15.052031  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:51:15.091569  446736 cri.go:89] found id: ""
	I1030 19:51:15.091596  446736 logs.go:282] 0 containers: []
	W1030 19:51:15.091604  446736 logs.go:284] No container was found matching "kindnet"
	I1030 19:51:15.091611  446736 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1030 19:51:15.091668  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1030 19:51:15.135521  446736 cri.go:89] found id: "822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4"
	I1030 19:51:15.135551  446736 cri.go:89] found id: "de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef"
	I1030 19:51:15.135557  446736 cri.go:89] found id: ""
	I1030 19:51:15.135567  446736 logs.go:282] 2 containers: [822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4 de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef]
	I1030 19:51:15.135633  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:15.140215  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:15.145490  446736 logs.go:123] Gathering logs for etcd [ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67] ...
	I1030 19:51:15.145514  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67"
	I1030 19:51:15.205939  446736 logs.go:123] Gathering logs for coredns [1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd] ...
	I1030 19:51:15.205972  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd"
	I1030 19:51:15.240157  446736 logs.go:123] Gathering logs for kube-proxy [0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9] ...
	I1030 19:51:15.240194  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9"
	I1030 19:51:15.277168  446736 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:51:15.277200  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:51:15.708451  446736 logs.go:123] Gathering logs for container status ...
	I1030 19:51:15.708499  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:51:15.750544  446736 logs.go:123] Gathering logs for kubelet ...
	I1030 19:51:15.750577  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:51:15.820071  446736 logs.go:123] Gathering logs for kube-apiserver [990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4] ...
	I1030 19:51:15.820113  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4"
	I1030 19:51:15.870259  446736 logs.go:123] Gathering logs for kube-scheduler [2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889] ...
	I1030 19:51:15.870293  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889"
	I1030 19:51:15.919968  446736 logs.go:123] Gathering logs for kube-controller-manager [cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c] ...
	I1030 19:51:15.919998  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c"
	I1030 19:51:15.976948  446736 logs.go:123] Gathering logs for storage-provisioner [822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4] ...
	I1030 19:51:15.976992  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4"
	I1030 19:51:16.014451  446736 logs.go:123] Gathering logs for storage-provisioner [de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef] ...
	I1030 19:51:16.014498  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef"
	I1030 19:51:16.047766  446736 logs.go:123] Gathering logs for dmesg ...
	I1030 19:51:16.047806  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:51:16.070539  446736 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:51:16.070567  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:51:18.677834  446736 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I1030 19:51:18.682862  446736 api_server.go:279] https://192.168.72.132:8443/healthz returned 200:
	ok
	I1030 19:51:18.684023  446736 api_server.go:141] control plane version: v1.31.2
	I1030 19:51:18.684046  446736 api_server.go:131] duration metric: took 3.896911154s to wait for apiserver health ...
	I1030 19:51:18.684055  446736 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 19:51:18.684083  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:51:18.684130  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:51:18.724815  446736 cri.go:89] found id: "990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4"
	I1030 19:51:18.724848  446736 cri.go:89] found id: ""
	I1030 19:51:18.724860  446736 logs.go:282] 1 containers: [990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4]
	I1030 19:51:18.724928  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:18.729332  446736 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:51:18.729391  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:51:18.767614  446736 cri.go:89] found id: "ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67"
	I1030 19:51:18.767642  446736 cri.go:89] found id: ""
	I1030 19:51:18.767651  446736 logs.go:282] 1 containers: [ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67]
	I1030 19:51:18.767705  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:18.772420  446736 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:51:18.772525  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:51:18.811459  446736 cri.go:89] found id: "1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd"
	I1030 19:51:18.811489  446736 cri.go:89] found id: ""
	I1030 19:51:18.811501  446736 logs.go:282] 1 containers: [1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd]
	I1030 19:51:18.811563  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:18.816844  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:51:18.816906  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:51:18.853273  446736 cri.go:89] found id: "2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889"
	I1030 19:51:18.853299  446736 cri.go:89] found id: ""
	I1030 19:51:18.853308  446736 logs.go:282] 1 containers: [2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889]
	I1030 19:51:18.853362  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:18.857867  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:51:18.857946  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:51:18.907021  446736 cri.go:89] found id: "0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9"
	I1030 19:51:18.907052  446736 cri.go:89] found id: ""
	I1030 19:51:18.907063  446736 logs.go:282] 1 containers: [0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9]
	I1030 19:51:18.907126  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:18.913432  446736 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:51:18.913506  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:51:18.978047  446736 cri.go:89] found id: "cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c"
	I1030 19:51:18.978072  446736 cri.go:89] found id: ""
	I1030 19:51:18.978083  446736 logs.go:282] 1 containers: [cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c]
	I1030 19:51:18.978150  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:18.983158  446736 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:51:18.983241  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:51:19.018992  446736 cri.go:89] found id: ""
	I1030 19:51:19.019018  446736 logs.go:282] 0 containers: []
	W1030 19:51:19.019026  446736 logs.go:284] No container was found matching "kindnet"
	I1030 19:51:19.019035  446736 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1030 19:51:19.019094  446736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1030 19:51:19.053821  446736 cri.go:89] found id: "822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4"
	I1030 19:51:19.053850  446736 cri.go:89] found id: "de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef"
	I1030 19:51:19.053855  446736 cri.go:89] found id: ""
	I1030 19:51:19.053862  446736 logs.go:282] 2 containers: [822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4 de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef]
	I1030 19:51:19.053922  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:19.063575  446736 ssh_runner.go:195] Run: which crictl
	I1030 19:51:19.069254  446736 logs.go:123] Gathering logs for kubelet ...
	I1030 19:51:19.069283  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:51:19.139641  446736 logs.go:123] Gathering logs for kube-apiserver [990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4] ...
	I1030 19:51:19.139700  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 990c5503542ebf376a5dab046f984b9b3d2f5639b1ca7e3e504bd5e135c0f1c4"
	I1030 19:51:19.198020  446736 logs.go:123] Gathering logs for kube-scheduler [2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889] ...
	I1030 19:51:19.198059  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2873bfc8ed2a78dba2d31acaa500a922b4c47ff6bac05ed23fd2eb758cb02889"
	I1030 19:51:19.239685  446736 logs.go:123] Gathering logs for kube-proxy [0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9] ...
	I1030 19:51:19.239727  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0621c8e7bb77bbbfbfdc8654fcef484781ddfed5f785fe5da4f950a7c0ecfcf9"
	I1030 19:51:19.281510  446736 logs.go:123] Gathering logs for storage-provisioner [de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef] ...
	I1030 19:51:19.281545  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de9271f5ab996bf4c247a39dea4e0de946a6b1f7f8773ec2c2b4ab3b50789bef"
	I1030 19:51:19.317842  446736 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:51:19.317872  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:51:19.659645  446736 logs.go:123] Gathering logs for dmesg ...
	I1030 19:51:19.659697  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:51:19.678087  446736 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:51:19.678121  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 19:51:19.778504  446736 logs.go:123] Gathering logs for etcd [ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67] ...
	I1030 19:51:19.778540  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ace7f40d51794d7c9ddf41b023b5aecaa6a1d558203ef1d87a0f3fdc770b7c67"
	I1030 19:51:19.826520  446736 logs.go:123] Gathering logs for coredns [1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd] ...
	I1030 19:51:19.826552  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1b9bfc15731704a275ad6c21dd5d647902332e96f1b1c2c4b17fdd07d4a561cd"
	I1030 19:51:19.863959  446736 logs.go:123] Gathering logs for kube-controller-manager [cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c] ...
	I1030 19:51:19.864011  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf0541a4e58440020ff1b8a1797553a543ad198758578aa6c59821dd03bf753c"
	I1030 19:51:19.915777  446736 logs.go:123] Gathering logs for storage-provisioner [822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4] ...
	I1030 19:51:19.915814  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 822348d485756307d11619c3fd8a0037995fe69b362879e513d71132d0018eb4"
	I1030 19:51:19.953036  446736 logs.go:123] Gathering logs for container status ...
	I1030 19:51:19.953069  446736 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:51:22.502129  446736 system_pods.go:59] 8 kube-system pods found
	I1030 19:51:22.502162  446736 system_pods.go:61] "coredns-7c65d6cfc9-6cdl4" [95a0a01a-b0ea-4e0c-ac44-b7f7a486e4e0] Running
	I1030 19:51:22.502167  446736 system_pods.go:61] "etcd-no-preload-960512" [481cc4bc-fe2e-48fd-a486-574daf879096] Running
	I1030 19:51:22.502172  446736 system_pods.go:61] "kube-apiserver-no-preload-960512" [3662715f-dd2b-4522-aed3-3857e1104b8c] Running
	I1030 19:51:22.502175  446736 system_pods.go:61] "kube-controller-manager-no-preload-960512" [cb6045a8-1703-4693-90bf-81c7c990cb38] Running
	I1030 19:51:22.502179  446736 system_pods.go:61] "kube-proxy-fxqqc" [58db3fab-21e3-41b7-99f9-46ba3081db97] Running
	I1030 19:51:22.502182  446736 system_pods.go:61] "kube-scheduler-no-preload-960512" [6e293c25-ba96-4213-b107-236a1b828918] Running
	I1030 19:51:22.502188  446736 system_pods.go:61] "metrics-server-6867b74b74-72bb5" [7734d879-b974-42fd-9610-7e81ee6cbc13] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:51:22.502193  446736 system_pods.go:61] "storage-provisioner" [d4637a77-26ab-4013-a705-08317c00dd3b] Running
	I1030 19:51:22.502201  446736 system_pods.go:74] duration metric: took 3.818141259s to wait for pod list to return data ...
	I1030 19:51:22.502209  446736 default_sa.go:34] waiting for default service account to be created ...
	I1030 19:51:22.504541  446736 default_sa.go:45] found service account: "default"
	I1030 19:51:22.504562  446736 default_sa.go:55] duration metric: took 2.346763ms for default service account to be created ...
	I1030 19:51:22.504570  446736 system_pods.go:116] waiting for k8s-apps to be running ...
	I1030 19:51:22.509016  446736 system_pods.go:86] 8 kube-system pods found
	I1030 19:51:22.509039  446736 system_pods.go:89] "coredns-7c65d6cfc9-6cdl4" [95a0a01a-b0ea-4e0c-ac44-b7f7a486e4e0] Running
	I1030 19:51:22.509044  446736 system_pods.go:89] "etcd-no-preload-960512" [481cc4bc-fe2e-48fd-a486-574daf879096] Running
	I1030 19:51:22.509048  446736 system_pods.go:89] "kube-apiserver-no-preload-960512" [3662715f-dd2b-4522-aed3-3857e1104b8c] Running
	I1030 19:51:22.509052  446736 system_pods.go:89] "kube-controller-manager-no-preload-960512" [cb6045a8-1703-4693-90bf-81c7c990cb38] Running
	I1030 19:51:22.509055  446736 system_pods.go:89] "kube-proxy-fxqqc" [58db3fab-21e3-41b7-99f9-46ba3081db97] Running
	I1030 19:51:22.509058  446736 system_pods.go:89] "kube-scheduler-no-preload-960512" [6e293c25-ba96-4213-b107-236a1b828918] Running
	I1030 19:51:22.509101  446736 system_pods.go:89] "metrics-server-6867b74b74-72bb5" [7734d879-b974-42fd-9610-7e81ee6cbc13] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 19:51:22.509112  446736 system_pods.go:89] "storage-provisioner" [d4637a77-26ab-4013-a705-08317c00dd3b] Running
	I1030 19:51:22.509119  446736 system_pods.go:126] duration metric: took 4.544102ms to wait for k8s-apps to be running ...
	I1030 19:51:22.509125  446736 system_svc.go:44] waiting for kubelet service to be running ....
	I1030 19:51:22.509172  446736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 19:51:22.524883  446736 system_svc.go:56] duration metric: took 15.747977ms WaitForService to wait for kubelet
	I1030 19:51:22.524906  446736 kubeadm.go:582] duration metric: took 4m24.835384605s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 19:51:22.524929  446736 node_conditions.go:102] verifying NodePressure condition ...
	I1030 19:51:22.528315  446736 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1030 19:51:22.528334  446736 node_conditions.go:123] node cpu capacity is 2
	I1030 19:51:22.528345  446736 node_conditions.go:105] duration metric: took 3.411421ms to run NodePressure ...
	I1030 19:51:22.528357  446736 start.go:241] waiting for startup goroutines ...
	I1030 19:51:22.528364  446736 start.go:246] waiting for cluster config update ...
	I1030 19:51:22.528374  446736 start.go:255] writing updated cluster config ...
	I1030 19:51:22.528621  446736 ssh_runner.go:195] Run: rm -f paused
	I1030 19:51:22.577143  446736 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1030 19:51:22.580061  446736 out.go:177] * Done! kubectl is now configured to use "no-preload-960512" cluster and "default" namespace by default
	I1030 19:52:15.582907  447486 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1030 19:52:15.583009  447486 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1030 19:52:15.584345  447486 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1030 19:52:15.584419  447486 kubeadm.go:310] [preflight] Running pre-flight checks
	I1030 19:52:15.584522  447486 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1030 19:52:15.584659  447486 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1030 19:52:15.584763  447486 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1030 19:52:15.584827  447486 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1030 19:52:15.586931  447486 out.go:235]   - Generating certificates and keys ...
	I1030 19:52:15.587016  447486 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1030 19:52:15.587074  447486 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1030 19:52:15.587145  447486 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1030 19:52:15.587198  447486 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1030 19:52:15.587271  447486 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1030 19:52:15.587339  447486 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1030 19:52:15.587402  447486 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1030 19:52:15.587455  447486 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1030 19:52:15.587517  447486 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1030 19:52:15.587577  447486 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1030 19:52:15.587608  447486 kubeadm.go:310] [certs] Using the existing "sa" key
	I1030 19:52:15.587682  447486 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1030 19:52:15.587759  447486 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1030 19:52:15.587846  447486 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1030 19:52:15.587924  447486 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1030 19:52:15.587988  447486 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1030 19:52:15.588076  447486 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1030 19:52:15.588148  447486 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1030 19:52:15.588180  447486 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1030 19:52:15.588267  447486 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1030 19:52:15.589722  447486 out.go:235]   - Booting up control plane ...
	I1030 19:52:15.589834  447486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1030 19:52:15.589932  447486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1030 19:52:15.590014  447486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1030 19:52:15.590128  447486 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1030 19:52:15.590285  447486 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1030 19:52:15.590336  447486 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1030 19:52:15.590388  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:52:15.590560  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:52:15.590642  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:52:15.590842  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:52:15.590946  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:52:15.591155  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:52:15.591253  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:52:15.591513  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:52:15.591609  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:52:15.591841  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:52:15.591855  447486 kubeadm.go:310] 
	I1030 19:52:15.591900  447486 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1030 19:52:15.591956  447486 kubeadm.go:310] 		timed out waiting for the condition
	I1030 19:52:15.591966  447486 kubeadm.go:310] 
	I1030 19:52:15.592008  447486 kubeadm.go:310] 	This error is likely caused by:
	I1030 19:52:15.592051  447486 kubeadm.go:310] 		- The kubelet is not running
	I1030 19:52:15.592192  447486 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1030 19:52:15.592204  447486 kubeadm.go:310] 
	I1030 19:52:15.592318  447486 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1030 19:52:15.592360  447486 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1030 19:52:15.592391  447486 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1030 19:52:15.592397  447486 kubeadm.go:310] 
	I1030 19:52:15.592511  447486 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1030 19:52:15.592592  447486 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1030 19:52:15.592600  447486 kubeadm.go:310] 
	I1030 19:52:15.592733  447486 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1030 19:52:15.592850  447486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1030 19:52:15.592959  447486 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1030 19:52:15.593059  447486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1030 19:52:15.593138  447486 kubeadm.go:310] 
	W1030 19:52:15.593236  447486 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
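The kubelet-check failures above come down to the kubelet's health endpoint on port 10248 refusing connections. The checks kubeadm suggests can be run directly on the node; condensed from the quoted output (sudo added, since journalctl and crictl need root here):

    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet
    curl -sSL http://localhost:10248/healthz
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause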
	
	I1030 19:52:15.593289  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1030 19:52:16.049810  447486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 19:52:16.065820  447486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 19:52:16.076166  447486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 19:52:16.076192  447486 kubeadm.go:157] found existing configuration files:
	
	I1030 19:52:16.076241  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1030 19:52:16.085309  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 19:52:16.085380  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 19:52:16.094868  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1030 19:52:16.104343  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 19:52:16.104395  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 19:52:16.113939  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1030 19:52:16.122836  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 19:52:16.122885  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 19:52:16.132083  447486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1030 19:52:16.141441  447486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 19:52:16.141487  447486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
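The per-file grep/rm sequence above removes any leftover kubeconfig under /etc/kubernetes that does not reference the control-plane endpoint (here every grep exits with status 2 because the files are already gone). The same cleanup can be written as one loop:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done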
	I1030 19:52:16.150710  447486 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1030 19:52:16.222070  447486 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1030 19:52:16.222183  447486 kubeadm.go:310] [preflight] Running pre-flight checks
	I1030 19:52:16.366061  447486 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1030 19:52:16.366194  447486 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1030 19:52:16.366352  447486 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1030 19:52:16.541086  447486 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1030 19:52:16.543200  447486 out.go:235]   - Generating certificates and keys ...
	I1030 19:52:16.543303  447486 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1030 19:52:16.543398  447486 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1030 19:52:16.543523  447486 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1030 19:52:16.543625  447486 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1030 19:52:16.543749  447486 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1030 19:52:16.543848  447486 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1030 19:52:16.543942  447486 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1030 19:52:16.544020  447486 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1030 19:52:16.544096  447486 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1030 19:52:16.544193  447486 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1030 19:52:16.544252  447486 kubeadm.go:310] [certs] Using the existing "sa" key
	I1030 19:52:16.544343  447486 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1030 19:52:16.637454  447486 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1030 19:52:16.829430  447486 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1030 19:52:16.985259  447486 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1030 19:52:17.072312  447486 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1030 19:52:17.092511  447486 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1030 19:52:17.093595  447486 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1030 19:52:17.093654  447486 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1030 19:52:17.228039  447486 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1030 19:52:17.229647  447486 out.go:235]   - Booting up control plane ...
	I1030 19:52:17.229766  447486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1030 19:52:17.237333  447486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1030 19:52:17.239644  447486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1030 19:52:17.239774  447486 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1030 19:52:17.241037  447486 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1030 19:52:57.243167  447486 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1030 19:52:57.243769  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:52:57.244072  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:53:02.244240  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:53:02.244563  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:53:12.244991  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:53:12.245293  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:53:32.246428  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:53:32.246697  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:54:12.247834  447486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1030 19:54:12.248150  447486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1030 19:54:12.248173  447486 kubeadm.go:310] 
	I1030 19:54:12.248226  447486 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1030 19:54:12.248308  447486 kubeadm.go:310] 		timed out waiting for the condition
	I1030 19:54:12.248336  447486 kubeadm.go:310] 
	I1030 19:54:12.248386  447486 kubeadm.go:310] 	This error is likely caused by:
	I1030 19:54:12.248449  447486 kubeadm.go:310] 		- The kubelet is not running
	I1030 19:54:12.248598  447486 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1030 19:54:12.248609  447486 kubeadm.go:310] 
	I1030 19:54:12.248747  447486 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1030 19:54:12.248811  447486 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1030 19:54:12.248867  447486 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1030 19:54:12.248876  447486 kubeadm.go:310] 
	I1030 19:54:12.249013  447486 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1030 19:54:12.249111  447486 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1030 19:54:12.249129  447486 kubeadm.go:310] 
	I1030 19:54:12.249280  447486 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1030 19:54:12.249447  447486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1030 19:54:12.249564  447486 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1030 19:54:12.249662  447486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1030 19:54:12.249708  447486 kubeadm.go:310] 
	I1030 19:54:12.249878  447486 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1030 19:54:12.250015  447486 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1030 19:54:12.250208  447486 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1030 19:54:12.250221  447486 kubeadm.go:394] duration metric: took 7m57.874179721s to StartCluster
	I1030 19:54:12.250311  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1030 19:54:12.250399  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1030 19:54:12.292692  447486 cri.go:89] found id: ""
	I1030 19:54:12.292749  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.292760  447486 logs.go:284] No container was found matching "kube-apiserver"
	I1030 19:54:12.292770  447486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1030 19:54:12.292840  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1030 19:54:12.329792  447486 cri.go:89] found id: ""
	I1030 19:54:12.329825  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.329835  447486 logs.go:284] No container was found matching "etcd"
	I1030 19:54:12.329843  447486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1030 19:54:12.329905  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1030 19:54:12.364661  447486 cri.go:89] found id: ""
	I1030 19:54:12.364693  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.364702  447486 logs.go:284] No container was found matching "coredns"
	I1030 19:54:12.364709  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1030 19:54:12.364764  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1030 19:54:12.400842  447486 cri.go:89] found id: ""
	I1030 19:54:12.400870  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.400878  447486 logs.go:284] No container was found matching "kube-scheduler"
	I1030 19:54:12.400885  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1030 19:54:12.400943  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1030 19:54:12.440135  447486 cri.go:89] found id: ""
	I1030 19:54:12.440164  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.440172  447486 logs.go:284] No container was found matching "kube-proxy"
	I1030 19:54:12.440178  447486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1030 19:54:12.440228  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1030 19:54:12.476365  447486 cri.go:89] found id: ""
	I1030 19:54:12.476403  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.476416  447486 logs.go:284] No container was found matching "kube-controller-manager"
	I1030 19:54:12.476425  447486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1030 19:54:12.476503  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1030 19:54:12.519669  447486 cri.go:89] found id: ""
	I1030 19:54:12.519702  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.519715  447486 logs.go:284] No container was found matching "kindnet"
	I1030 19:54:12.519724  447486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1030 19:54:12.519791  447486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1030 19:54:12.554180  447486 cri.go:89] found id: ""
	I1030 19:54:12.554218  447486 logs.go:282] 0 containers: []
	W1030 19:54:12.554230  447486 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1030 19:54:12.554244  447486 logs.go:123] Gathering logs for CRI-O ...
	I1030 19:54:12.554261  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1030 19:54:12.669617  447486 logs.go:123] Gathering logs for container status ...
	I1030 19:54:12.669660  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 19:54:12.708361  447486 logs.go:123] Gathering logs for kubelet ...
	I1030 19:54:12.708392  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 19:54:12.763103  447486 logs.go:123] Gathering logs for dmesg ...
	I1030 19:54:12.763145  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 19:54:12.778676  447486 logs.go:123] Gathering logs for describe nodes ...
	I1030 19:54:12.778712  447486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1030 19:54:12.865694  447486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1030 19:54:12.865732  447486 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1030 19:54:12.865797  447486 out.go:270] * 
	W1030 19:54:12.865908  447486 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1030 19:54:12.865929  447486 out.go:270] * 
	W1030 19:54:12.867124  447486 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 19:54:12.871111  447486 out.go:201] 
	W1030 19:54:12.872534  447486 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1030 19:54:12.872591  447486 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1030 19:54:12.872616  447486 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1030 19:54:12.874145  447486 out.go:201] 
	
	
	==> CRI-O <==
	Oct 30 20:05:17 old-k8s-version-516975 crio[630]: time="2024-10-30 20:05:17.305538356Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318717305512371,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1023ae44-33f1-48b4-a3eb-d62fca7674ff name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:05:17 old-k8s-version-516975 crio[630]: time="2024-10-30 20:05:17.306163437Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=74167a1d-3817-4971-b31c-d55838247422 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:05:17 old-k8s-version-516975 crio[630]: time="2024-10-30 20:05:17.306212306Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=74167a1d-3817-4971-b31c-d55838247422 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:05:17 old-k8s-version-516975 crio[630]: time="2024-10-30 20:05:17.306251413Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=74167a1d-3817-4971-b31c-d55838247422 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:05:17 old-k8s-version-516975 crio[630]: time="2024-10-30 20:05:17.337498988Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7b6ea2a5-0c6c-4bb8-a6a2-36a64dacf780 name=/runtime.v1.RuntimeService/Version
	Oct 30 20:05:17 old-k8s-version-516975 crio[630]: time="2024-10-30 20:05:17.337609391Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7b6ea2a5-0c6c-4bb8-a6a2-36a64dacf780 name=/runtime.v1.RuntimeService/Version
	Oct 30 20:05:17 old-k8s-version-516975 crio[630]: time="2024-10-30 20:05:17.338646119Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e90d7a79-87e1-4a87-976b-af23c9639ea6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:05:17 old-k8s-version-516975 crio[630]: time="2024-10-30 20:05:17.339088203Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318717339068064,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e90d7a79-87e1-4a87-976b-af23c9639ea6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:05:17 old-k8s-version-516975 crio[630]: time="2024-10-30 20:05:17.339659237Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0895c00a-e379-4de3-ad15-6cd133f4b4da name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:05:17 old-k8s-version-516975 crio[630]: time="2024-10-30 20:05:17.339709603Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0895c00a-e379-4de3-ad15-6cd133f4b4da name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:05:17 old-k8s-version-516975 crio[630]: time="2024-10-30 20:05:17.339743393Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0895c00a-e379-4de3-ad15-6cd133f4b4da name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:05:17 old-k8s-version-516975 crio[630]: time="2024-10-30 20:05:17.370868168Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=557bec74-f1ac-40eb-a5b3-b6cd5995417f name=/runtime.v1.RuntimeService/Version
	Oct 30 20:05:17 old-k8s-version-516975 crio[630]: time="2024-10-30 20:05:17.370971926Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=557bec74-f1ac-40eb-a5b3-b6cd5995417f name=/runtime.v1.RuntimeService/Version
	Oct 30 20:05:17 old-k8s-version-516975 crio[630]: time="2024-10-30 20:05:17.372149788Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a89a6da6-2930-4a35-a431-b5ffc61ecd01 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:05:17 old-k8s-version-516975 crio[630]: time="2024-10-30 20:05:17.372511091Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318717372486201,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a89a6da6-2930-4a35-a431-b5ffc61ecd01 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:05:17 old-k8s-version-516975 crio[630]: time="2024-10-30 20:05:17.373422748Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7060788c-2766-4570-8aa5-d94738117f79 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:05:17 old-k8s-version-516975 crio[630]: time="2024-10-30 20:05:17.373498197Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7060788c-2766-4570-8aa5-d94738117f79 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:05:17 old-k8s-version-516975 crio[630]: time="2024-10-30 20:05:17.373539840Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7060788c-2766-4570-8aa5-d94738117f79 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:05:17 old-k8s-version-516975 crio[630]: time="2024-10-30 20:05:17.404668363Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5084490b-05e2-4202-9496-1d08e1cd5ed5 name=/runtime.v1.RuntimeService/Version
	Oct 30 20:05:17 old-k8s-version-516975 crio[630]: time="2024-10-30 20:05:17.404800482Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5084490b-05e2-4202-9496-1d08e1cd5ed5 name=/runtime.v1.RuntimeService/Version
	Oct 30 20:05:17 old-k8s-version-516975 crio[630]: time="2024-10-30 20:05:17.405569082Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=78fde147-603f-422b-a0fe-5476cf8c1561 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:05:17 old-k8s-version-516975 crio[630]: time="2024-10-30 20:05:17.405978365Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730318717405952613,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=78fde147-603f-422b-a0fe-5476cf8c1561 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 20:05:17 old-k8s-version-516975 crio[630]: time="2024-10-30 20:05:17.406464039Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e4141513-fd3f-4586-a7e8-b1c282553baf name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:05:17 old-k8s-version-516975 crio[630]: time="2024-10-30 20:05:17.406524393Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e4141513-fd3f-4586-a7e8-b1c282553baf name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 20:05:17 old-k8s-version-516975 crio[630]: time="2024-10-30 20:05:17.406566963Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e4141513-fd3f-4586-a7e8-b1c282553baf name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct30 19:45] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055573] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039872] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.137495] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.588302] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.607660] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct30 19:46] systemd-fstab-generator[558]: Ignoring "noauto" option for root device
	[  +0.060505] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061237] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.181319] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.145340] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.258638] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +7.609500] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.068837] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.029529] systemd-fstab-generator[1006]: Ignoring "noauto" option for root device
	[ +12.374948] kauditd_printk_skb: 46 callbacks suppressed
	[Oct30 19:50] systemd-fstab-generator[5076]: Ignoring "noauto" option for root device
	[Oct30 19:52] systemd-fstab-generator[5354]: Ignoring "noauto" option for root device
	[  +0.064946] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 20:05:17 up 19 min,  0 users,  load average: 0.04, 0.07, 0.03
	Linux old-k8s-version-516975 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Oct 30 20:05:13 old-k8s-version-516975 kubelet[6826]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Oct 30 20:05:13 old-k8s-version-516975 kubelet[6826]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc000bfc660, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc000a1b740, 0x24, 0x0, ...)
	Oct 30 20:05:13 old-k8s-version-516975 kubelet[6826]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Oct 30 20:05:13 old-k8s-version-516975 kubelet[6826]: net.(*Dialer).DialContext(0xc000108d20, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000a1b740, 0x24, 0x0, 0x0, 0x0, ...)
	Oct 30 20:05:13 old-k8s-version-516975 kubelet[6826]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Oct 30 20:05:13 old-k8s-version-516975 kubelet[6826]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000912280, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000a1b740, 0x24, 0x60, 0x7f57181eade8, 0x118, ...)
	Oct 30 20:05:13 old-k8s-version-516975 kubelet[6826]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Oct 30 20:05:13 old-k8s-version-516975 kubelet[6826]: net/http.(*Transport).dial(0xc0007bc140, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000a1b740, 0x24, 0x0, 0x0, 0x0, ...)
	Oct 30 20:05:13 old-k8s-version-516975 kubelet[6826]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Oct 30 20:05:13 old-k8s-version-516975 kubelet[6826]: net/http.(*Transport).dialConn(0xc0007bc140, 0x4f7fe00, 0xc000052030, 0x0, 0xc00072c3c0, 0x5, 0xc000a1b740, 0x24, 0x0, 0xc00094efc0, ...)
	Oct 30 20:05:13 old-k8s-version-516975 kubelet[6826]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Oct 30 20:05:13 old-k8s-version-516975 kubelet[6826]: net/http.(*Transport).dialConnFor(0xc0007bc140, 0xc000a7d130)
	Oct 30 20:05:13 old-k8s-version-516975 kubelet[6826]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Oct 30 20:05:13 old-k8s-version-516975 kubelet[6826]: created by net/http.(*Transport).queueForDial
	Oct 30 20:05:13 old-k8s-version-516975 kubelet[6826]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Oct 30 20:05:13 old-k8s-version-516975 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Oct 30 20:05:13 old-k8s-version-516975 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Oct 30 20:05:14 old-k8s-version-516975 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 135.
	Oct 30 20:05:14 old-k8s-version-516975 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 30 20:05:14 old-k8s-version-516975 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 30 20:05:14 old-k8s-version-516975 kubelet[6836]: I1030 20:05:14.433163    6836 server.go:416] Version: v1.20.0
	Oct 30 20:05:14 old-k8s-version-516975 kubelet[6836]: I1030 20:05:14.433494    6836 server.go:837] Client rotation is on, will bootstrap in background
	Oct 30 20:05:14 old-k8s-version-516975 kubelet[6836]: I1030 20:05:14.435466    6836 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Oct 30 20:05:14 old-k8s-version-516975 kubelet[6836]: W1030 20:05:14.436545    6836 manager.go:159] Cannot detect current cgroup on cgroup v2
	Oct 30 20:05:14 old-k8s-version-516975 kubelet[6836]: I1030 20:05:14.436910    6836 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-516975 -n old-k8s-version-516975
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-516975 -n old-k8s-version-516975: exit status 2 (229.118577ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-516975" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (118.97s)
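Note on the failure above: it is the repeated K8S_KUBELET_NOT_RUNNING timeout from kubeadm init on the old-k8s-version profile. As a minimal sketch of manual follow-up, using only the diagnostics and the workaround flag that the log itself suggests (profile name old-k8s-version-516975 and the crio.sock endpoint are taken from the log; none of these commands were run as part of this report):

	# Status and recent logs of the kubelet service on the node
	# (the same commands kubeadm suggests in the failure output above).
	out/minikube-linux-amd64 ssh -p old-k8s-version-516975 -- sudo systemctl status kubelet
	out/minikube-linux-amd64 ssh -p old-k8s-version-516975 -- sudo journalctl -xeu kubelet

	# List any Kubernetes containers CRI-O started (the capture above found none).
	out/minikube-linux-amd64 ssh -p old-k8s-version-516975 -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a

	# Workaround proposed by minikube itself for this class of failure (unverified here).
	out/minikube-linux-amd64 start -p old-k8s-version-516975 --extra-config=kubelet.cgroup-driver=systemd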

                                                
                                    

Test pass (250/320)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 43.61
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.2/json-events 17.86
13 TestDownloadOnly/v1.31.2/preload-exists 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.06
18 TestDownloadOnly/v1.31.2/DeleteAll 0.14
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.61
22 TestOffline 84.84
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 209.92
31 TestAddons/serial/GCPAuth/Namespaces 1.96
32 TestAddons/serial/GCPAuth/FakeCredentials 13.55
35 TestAddons/parallel/Registry 19.6
37 TestAddons/parallel/InspektorGadget 10.72
40 TestAddons/parallel/CSI 64.05
41 TestAddons/parallel/Headlamp 20.57
42 TestAddons/parallel/CloudSpanner 5.55
43 TestAddons/parallel/LocalPath 63.1
44 TestAddons/parallel/NvidiaDevicePlugin 6.99
45 TestAddons/parallel/Yakd 10.85
48 TestCertOptions 44.29
49 TestCertExpiration 265.12
51 TestForceSystemdFlag 60.1
52 TestForceSystemdEnv 96.02
54 TestKVMDriverInstallOrUpdate 11.79
58 TestErrorSpam/setup 41.49
59 TestErrorSpam/start 0.36
60 TestErrorSpam/status 0.78
61 TestErrorSpam/pause 1.58
62 TestErrorSpam/unpause 1.74
63 TestErrorSpam/stop 5.5
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 79.39
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 53.67
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.52
75 TestFunctional/serial/CacheCmd/cache/add_local 2.87
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.68
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 34.62
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.41
86 TestFunctional/serial/LogsFileCmd 1.41
87 TestFunctional/serial/InvalidService 5.34
89 TestFunctional/parallel/ConfigCmd 0.37
90 TestFunctional/parallel/DashboardCmd 26.47
91 TestFunctional/parallel/DryRun 0.28
92 TestFunctional/parallel/InternationalLanguage 0.15
93 TestFunctional/parallel/StatusCmd 0.82
97 TestFunctional/parallel/ServiceCmdConnect 12.64
98 TestFunctional/parallel/AddonsCmd 0.15
99 TestFunctional/parallel/PersistentVolumeClaim 54.74
101 TestFunctional/parallel/SSHCmd 0.44
102 TestFunctional/parallel/CpCmd 1.2
103 TestFunctional/parallel/MySQL 30.12
104 TestFunctional/parallel/FileSync 0.2
105 TestFunctional/parallel/CertSync 1.41
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.46
113 TestFunctional/parallel/License 0.87
114 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
115 TestFunctional/parallel/Version/short 0.05
116 TestFunctional/parallel/Version/components 0.6
117 TestFunctional/parallel/ProfileCmd/profile_list 0.37
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.34
128 TestFunctional/parallel/MountCmd/any-port 15.92
129 TestFunctional/parallel/ImageCommands/ImageListShort 0.66
130 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
131 TestFunctional/parallel/ImageCommands/ImageListJson 0.35
132 TestFunctional/parallel/ImageCommands/ImageListYaml 0.64
133 TestFunctional/parallel/ImageCommands/ImageBuild 11.55
134 TestFunctional/parallel/ImageCommands/Setup 3.24
135 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.42
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.86
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.21
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.62
139 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.09
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.55
142 TestFunctional/parallel/ServiceCmd/DeployApp 6.21
143 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
144 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
145 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
146 TestFunctional/parallel/MountCmd/specific-port 2.12
147 TestFunctional/parallel/MountCmd/VerifyCleanup 1.36
148 TestFunctional/parallel/ServiceCmd/List 0.45
149 TestFunctional/parallel/ServiceCmd/JSONOutput 0.42
150 TestFunctional/parallel/ServiceCmd/HTTPS 0.3
151 TestFunctional/parallel/ServiceCmd/Format 0.3
152 TestFunctional/parallel/ServiceCmd/URL 0.29
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.01
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 199.56
160 TestMultiControlPlane/serial/DeployApp 11.01
161 TestMultiControlPlane/serial/PingHostFromPods 1.22
162 TestMultiControlPlane/serial/AddWorkerNode 57.76
163 TestMultiControlPlane/serial/NodeLabels 0.07
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.85
165 TestMultiControlPlane/serial/CopyFile 12.98
171 TestMultiControlPlane/serial/DeleteSecondaryNode 16.69
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.62
174 TestMultiControlPlane/serial/RestartCluster 217.33
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.64
176 TestMultiControlPlane/serial/AddSecondaryNode 80.51
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.84
181 TestJSONOutput/start/Command 55.01
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.74
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.62
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 7.34
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.2
209 TestMainNoArgs 0.05
210 TestMinikubeProfile 88.05
213 TestMountStart/serial/StartWithMountFirst 28
214 TestMountStart/serial/VerifyMountFirst 0.38
215 TestMountStart/serial/StartWithMountSecond 27.28
216 TestMountStart/serial/VerifyMountSecond 0.38
217 TestMountStart/serial/DeleteFirst 0.7
218 TestMountStart/serial/VerifyMountPostDelete 0.38
219 TestMountStart/serial/Stop 1.28
220 TestMountStart/serial/RestartStopped 23.71
221 TestMountStart/serial/VerifyMountPostStop 0.38
224 TestMultiNode/serial/FreshStart2Nodes 116.86
225 TestMultiNode/serial/DeployApp2Nodes 8.71
226 TestMultiNode/serial/PingHostFrom2Pods 0.79
227 TestMultiNode/serial/AddNode 54.5
228 TestMultiNode/serial/MultiNodeLabels 0.06
229 TestMultiNode/serial/ProfileList 0.58
230 TestMultiNode/serial/CopyFile 7.18
231 TestMultiNode/serial/StopNode 2.3
232 TestMultiNode/serial/StartAfterStop 39.91
234 TestMultiNode/serial/DeleteNode 2.14
236 TestMultiNode/serial/RestartMultiNode 202.47
237 TestMultiNode/serial/ValidateNameConflict 48.94
244 TestScheduledStopUnix 113.65
248 TestRunningBinaryUpgrade 165.5
253 TestPause/serial/Start 107.43
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
256 TestNoKubernetes/serial/StartWithK8s 119.22
264 TestNetworkPlugins/group/false 3.28
268 TestStoppedBinaryUpgrade/Setup 3.63
269 TestPause/serial/SecondStartNoReconfiguration 64.76
270 TestStoppedBinaryUpgrade/Upgrade 177.97
271 TestNoKubernetes/serial/StartWithStopK8s 28.41
272 TestNoKubernetes/serial/Start 52.42
273 TestPause/serial/Pause 0.79
274 TestPause/serial/VerifyStatus 0.25
275 TestPause/serial/Unpause 0.67
276 TestPause/serial/PauseAgain 0.83
277 TestPause/serial/DeletePaused 1.16
278 TestPause/serial/VerifyDeletedResources 0.69
279 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
280 TestNoKubernetes/serial/ProfileList 6.04
281 TestNoKubernetes/serial/Stop 1.33
282 TestNoKubernetes/serial/StartNoArgs 41.1
283 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
284 TestStoppedBinaryUpgrade/MinikubeLogs 0.9
292 TestNetworkPlugins/group/auto/Start 73.49
293 TestNetworkPlugins/group/kindnet/Start 85.92
294 TestNetworkPlugins/group/auto/KubeletFlags 0.3
295 TestNetworkPlugins/group/auto/NetCatPod 16.12
296 TestNetworkPlugins/group/auto/DNS 0.16
297 TestNetworkPlugins/group/auto/Localhost 0.17
298 TestNetworkPlugins/group/auto/HairPin 0.16
299 TestNetworkPlugins/group/calico/Start 85.44
300 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
301 TestNetworkPlugins/group/kindnet/KubeletFlags 0.2
302 TestNetworkPlugins/group/kindnet/NetCatPod 11.22
303 TestNetworkPlugins/group/kindnet/DNS 0.16
304 TestNetworkPlugins/group/kindnet/Localhost 0.12
305 TestNetworkPlugins/group/kindnet/HairPin 0.12
306 TestNetworkPlugins/group/custom-flannel/Start 103.32
307 TestNetworkPlugins/group/enable-default-cni/Start 132.45
308 TestNetworkPlugins/group/calico/ControllerPod 6.12
309 TestNetworkPlugins/group/calico/KubeletFlags 0.32
310 TestNetworkPlugins/group/calico/NetCatPod 11.83
311 TestNetworkPlugins/group/calico/DNS 0.18
312 TestNetworkPlugins/group/calico/Localhost 0.13
313 TestNetworkPlugins/group/calico/HairPin 0.13
314 TestNetworkPlugins/group/flannel/Start 74.55
315 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
316 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.24
317 TestNetworkPlugins/group/custom-flannel/DNS 0.16
318 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
319 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
320 TestNetworkPlugins/group/bridge/Start 56.01
323 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
324 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.23
325 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
326 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
327 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
328 TestNetworkPlugins/group/flannel/ControllerPod 6.01
330 TestStartStop/group/no-preload/serial/FirstStart 114.81
331 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
332 TestNetworkPlugins/group/flannel/NetCatPod 10.3
333 TestNetworkPlugins/group/flannel/DNS 0.16
334 TestNetworkPlugins/group/flannel/Localhost 0.13
335 TestNetworkPlugins/group/flannel/HairPin 0.17
336 TestNetworkPlugins/group/bridge/KubeletFlags 0.36
337 TestNetworkPlugins/group/bridge/NetCatPod 12.05
339 TestStartStop/group/embed-certs/serial/FirstStart 97.74
340 TestNetworkPlugins/group/bridge/DNS 16.62
341 TestNetworkPlugins/group/bridge/Localhost 0.15
342 TestNetworkPlugins/group/bridge/HairPin 0.15
344 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 57.34
345 TestStartStop/group/no-preload/serial/DeployApp 13.31
346 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 13.29
347 TestStartStop/group/embed-certs/serial/DeployApp 13.29
348 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.02
350 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.96
352 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.02
358 TestStartStop/group/no-preload/serial/SecondStart 652.94
360 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 568.84
361 TestStartStop/group/embed-certs/serial/SecondStart 621.17
362 TestStartStop/group/old-k8s-version/serial/Stop 1.37
363 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
374 TestStartStop/group/newest-cni/serial/FirstStart 47.07
375 TestStartStop/group/newest-cni/serial/DeployApp 0
376 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.03
377 TestStartStop/group/newest-cni/serial/Stop 10.45
378 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
379 TestStartStop/group/newest-cni/serial/SecondStart 36.97
380 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
381 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
382 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.34
383 TestStartStop/group/newest-cni/serial/Pause 4.5

TestDownloadOnly/v1.20.0/json-events (43.61s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-765166 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-765166 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (43.611731152s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (43.61s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1030 18:21:26.904980  389144 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1030 18:21:26.905113  389144 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-765166
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-765166: exit status 85 (64.416864ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-765166 | jenkins | v1.34.0 | 30 Oct 24 18:20 UTC |          |
	|         | -p download-only-765166        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/30 18:20:43
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1030 18:20:43.335660  389156 out.go:345] Setting OutFile to fd 1 ...
	I1030 18:20:43.335799  389156 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 18:20:43.335810  389156 out.go:358] Setting ErrFile to fd 2...
	I1030 18:20:43.335816  389156 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 18:20:43.335987  389156 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19883-381834/.minikube/bin
	W1030 18:20:43.336119  389156 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19883-381834/.minikube/config/config.json: open /home/jenkins/minikube-integration/19883-381834/.minikube/config/config.json: no such file or directory
	I1030 18:20:43.336665  389156 out.go:352] Setting JSON to true
	I1030 18:20:43.338076  389156 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":7386,"bootTime":1730305057,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1030 18:20:43.338217  389156 start.go:139] virtualization: kvm guest
	I1030 18:20:43.340944  389156 out.go:97] [download-only-765166] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W1030 18:20:43.341055  389156 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball: no such file or directory
	I1030 18:20:43.341108  389156 notify.go:220] Checking for updates...
	I1030 18:20:43.342440  389156 out.go:169] MINIKUBE_LOCATION=19883
	I1030 18:20:43.343743  389156 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 18:20:43.344980  389156 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 18:20:43.346147  389156 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 18:20:43.347264  389156 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1030 18:20:43.349305  389156 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1030 18:20:43.349507  389156 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 18:20:43.380808  389156 out.go:97] Using the kvm2 driver based on user configuration
	I1030 18:20:43.380833  389156 start.go:297] selected driver: kvm2
	I1030 18:20:43.380839  389156 start.go:901] validating driver "kvm2" against <nil>
	I1030 18:20:43.381156  389156 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 18:20:43.381230  389156 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19883-381834/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1030 18:20:43.396134  389156 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1030 18:20:43.396204  389156 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1030 18:20:43.396738  389156 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1030 18:20:43.396899  389156 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1030 18:20:43.396961  389156 cni.go:84] Creating CNI manager for ""
	I1030 18:20:43.397034  389156 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 18:20:43.397046  389156 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1030 18:20:43.397119  389156 start.go:340] cluster config:
	{Name:download-only-765166 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-765166 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 18:20:43.397311  389156 iso.go:125] acquiring lock: {Name:mk4a5115b605a7a5e9069193daa664d721189792 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 18:20:43.399181  389156 out.go:97] Downloading VM boot image ...
	I1030 18:20:43.399234  389156 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19883-381834/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1030 18:21:02.328704  389156 out.go:97] Starting "download-only-765166" primary control-plane node in "download-only-765166" cluster
	I1030 18:21:02.328734  389156 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1030 18:21:02.494939  389156 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1030 18:21:02.494978  389156 cache.go:56] Caching tarball of preloaded images
	I1030 18:21:02.495150  389156 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1030 18:21:02.497186  389156 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1030 18:21:02.497215  389156 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1030 18:21:02.656039  389156 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-765166 host does not exist
	  To start a cluster, run: "minikube start -p download-only-765166"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-765166
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.2/json-events (17.86s)

=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-293078 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-293078 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (17.855401587s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (17.86s)

                                                
                                    
TestDownloadOnly/v1.31.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1030 18:21:45.090017  389144 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
I1030 18:21:45.090067  389144 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-293078
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-293078: exit status 85 (64.217036ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-765166 | jenkins | v1.34.0 | 30 Oct 24 18:20 UTC |                     |
	|         | -p download-only-765166        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 30 Oct 24 18:21 UTC | 30 Oct 24 18:21 UTC |
	| delete  | -p download-only-765166        | download-only-765166 | jenkins | v1.34.0 | 30 Oct 24 18:21 UTC | 30 Oct 24 18:21 UTC |
	| start   | -o=json --download-only        | download-only-293078 | jenkins | v1.34.0 | 30 Oct 24 18:21 UTC |                     |
	|         | -p download-only-293078        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/30 18:21:27
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1030 18:21:27.276431  389475 out.go:345] Setting OutFile to fd 1 ...
	I1030 18:21:27.276546  389475 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 18:21:27.276555  389475 out.go:358] Setting ErrFile to fd 2...
	I1030 18:21:27.276560  389475 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 18:21:27.276753  389475 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19883-381834/.minikube/bin
	I1030 18:21:27.277363  389475 out.go:352] Setting JSON to true
	I1030 18:21:27.278325  389475 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":7430,"bootTime":1730305057,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1030 18:21:27.278384  389475 start.go:139] virtualization: kvm guest
	I1030 18:21:27.280478  389475 out.go:97] [download-only-293078] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1030 18:21:27.280658  389475 notify.go:220] Checking for updates...
	I1030 18:21:27.282132  389475 out.go:169] MINIKUBE_LOCATION=19883
	I1030 18:21:27.283667  389475 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 18:21:27.284877  389475 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 18:21:27.286165  389475 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 18:21:27.287488  389475 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1030 18:21:27.289895  389475 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1030 18:21:27.290120  389475 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 18:21:27.321464  389475 out.go:97] Using the kvm2 driver based on user configuration
	I1030 18:21:27.321488  389475 start.go:297] selected driver: kvm2
	I1030 18:21:27.321494  389475 start.go:901] validating driver "kvm2" against <nil>
	I1030 18:21:27.321799  389475 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 18:21:27.321905  389475 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19883-381834/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1030 18:21:27.336382  389475 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1030 18:21:27.336449  389475 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1030 18:21:27.336958  389475 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1030 18:21:27.337103  389475 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1030 18:21:27.337138  389475 cni.go:84] Creating CNI manager for ""
	I1030 18:21:27.337202  389475 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 18:21:27.337212  389475 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1030 18:21:27.337281  389475 start.go:340] cluster config:
	{Name:download-only-293078 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-293078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 18:21:27.337387  389475 iso.go:125] acquiring lock: {Name:mk4a5115b605a7a5e9069193daa664d721189792 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 18:21:27.339062  389475 out.go:97] Starting "download-only-293078" primary control-plane node in "download-only-293078" cluster
	I1030 18:21:27.339088  389475 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 18:21:27.598978  389475 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1030 18:21:27.599017  389475 cache.go:56] Caching tarball of preloaded images
	I1030 18:21:27.599171  389475 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1030 18:21:27.601105  389475 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1030 18:21:27.601127  389475 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	I1030 18:21:27.755727  389475 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:fc069bc1785feafa8477333f3a79092d -> /home/jenkins/minikube-integration/19883-381834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-293078 host does not exist
	  To start a cluster, run: "minikube start -p download-only-293078"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.31.2/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-293078
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.61s)

=== RUN   TestBinaryMirror
I1030 18:21:45.673148  389144 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-605542 --alsologtostderr --binary-mirror http://127.0.0.1:43099 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-605542" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-605542
--- PASS: TestBinaryMirror (0.61s)

                                                
                                    
TestOffline (84.84s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-603900 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-603900 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m23.65458227s)
helpers_test.go:175: Cleaning up "offline-crio-603900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-603900
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-603900: (1.183979596s)
--- PASS: TestOffline (84.84s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-819803
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-819803: exit status 85 (55.06318ms)

                                                
                                                
-- stdout --
	* Profile "addons-819803" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-819803"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-819803
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-819803: exit status 85 (54.454433ms)

                                                
                                                
-- stdout --
	* Profile "addons-819803" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-819803"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (209.92s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-819803 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-819803 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m29.915936012s)
--- PASS: TestAddons/Setup (209.92s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (1.96s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-819803 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-819803 get secret gcp-auth -n new-namespace
addons_test.go:583: (dbg) Non-zero exit: kubectl --context addons-819803 get secret gcp-auth -n new-namespace: exit status 1 (80.894163ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

                                                
                                                
** /stderr **
addons_test.go:575: (dbg) Run:  kubectl --context addons-819803 logs -l app=gcp-auth -n gcp-auth
I1030 18:25:16.765612  389144 retry.go:31] will retry after 1.701131368s: %!w(<nil>): gcp-auth container logs: 
-- stdout --
	2024/10/30 18:25:15 GCP Auth Webhook started!
	2024/10/30 18:25:16 Ready to marshal response ...
	2024/10/30 18:25:16 Ready to write response ...

                                                
                                                
-- /stdout --
addons_test.go:583: (dbg) Run:  kubectl --context addons-819803 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (1.96s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (13.55s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-819803 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-819803 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b594df1b-adba-4e23-93cc-29d66c8cf9f1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b594df1b-adba-4e23-93cc-29d66c8cf9f1] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 13.004358363s
addons_test.go:633: (dbg) Run:  kubectl --context addons-819803 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-819803 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-819803 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (13.55s)

                                                
                                    
TestAddons/parallel/Registry (19.6s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 3.019414ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-lwc9j" [ac1aec3e-8d69-4d98-875c-68c50389cf77] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003514589s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-lhldq" [9edc008f-8004-45b8-a42f-897dcda09957] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.087124962s
addons_test.go:331: (dbg) Run:  kubectl --context addons-819803 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-819803 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-819803 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.74599269s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-819803 ip
2024/10/30 18:25:59 [DEBUG] GET http://192.168.39.211:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-819803 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (19.60s)

                                                
                                    
TestAddons/parallel/InspektorGadget (10.72s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-77gtp" [32c042af-8815-4e80-8906-c833e370309f] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.007060921s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-819803 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-819803 addons disable inspektor-gadget --alsologtostderr -v=1: (5.712333481s)
--- PASS: TestAddons/parallel/InspektorGadget (10.72s)

                                                
                                    
TestAddons/parallel/CSI (64.05s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1030 18:26:00.076036  389144 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1030 18:26:00.080784  389144 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1030 18:26:00.080806  389144 kapi.go:107] duration metric: took 4.790313ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 4.798374ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-819803 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-819803 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-819803 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-819803 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-819803 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-819803 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-819803 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-819803 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-819803 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-819803 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-819803 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-819803 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-819803 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-819803 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-819803 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-819803 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-819803 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-819803 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-819803 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-819803 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [ec63d7f0-4e78-4b7e-b1c9-4fb6beb50425] Pending
helpers_test.go:344: "task-pv-pod" [ec63d7f0-4e78-4b7e-b1c9-4fb6beb50425] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [ec63d7f0-4e78-4b7e-b1c9-4fb6beb50425] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.004313817s
addons_test.go:511: (dbg) Run:  kubectl --context addons-819803 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-819803 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-819803 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-819803 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-819803 delete pod task-pv-pod: (1.257522878s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-819803 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-819803 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-819803 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-819803 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-819803 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-819803 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-819803 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-819803 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-819803 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-819803 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-819803 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-819803 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-819803 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-819803 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [47448865-4a61-44a4-aed7-421ff4e7d130] Pending
helpers_test.go:344: "task-pv-pod-restore" [47448865-4a61-44a4-aed7-421ff4e7d130] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [47448865-4a61-44a4-aed7-421ff4e7d130] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.003797454s
addons_test.go:553: (dbg) Run:  kubectl --context addons-819803 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-819803 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-819803 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-819803 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-819803 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-819803 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.835799669s)
--- PASS: TestAddons/parallel/CSI (64.05s)

                                                
                                    
TestAddons/parallel/Headlamp (20.57s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-819803 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-mhxrx" [22ae8440-200f-4bcc-8341-1cf946a3c25a] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-mhxrx" [22ae8440-200f-4bcc-8341-1cf946a3c25a] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-mhxrx" [22ae8440-200f-4bcc-8341-1cf946a3c25a] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.004286925s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-819803 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-819803 addons disable headlamp --alsologtostderr -v=1: (5.696873196s)
--- PASS: TestAddons/parallel/Headlamp (20.57s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.55s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-v2fjz" [ae3e815f-7258-4b40-a1db-dfd46db7197a] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003809041s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-819803 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.55s)

                                                
                                    
TestAddons/parallel/LocalPath (63.1s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-819803 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-819803 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-819803 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-819803 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-819803 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-819803 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-819803 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-819803 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-819803 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-819803 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-819803 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [54f69a7e-5ec6-478b-ab0a-7fedab41438e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [54f69a7e-5ec6-478b-ab0a-7fedab41438e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [54f69a7e-5ec6-478b-ab0a-7fedab41438e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 11.003862329s
addons_test.go:906: (dbg) Run:  kubectl --context addons-819803 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-819803 ssh "cat /opt/local-path-provisioner/pvc-bc29ddce-63c6-4328-8e8f-fb3484c4de83_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-819803 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-819803 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-819803 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-819803 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.345181022s)
--- PASS: TestAddons/parallel/LocalPath (63.10s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.99s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-s2tw8" [9aca0151-3bc1-4504-b8ba-0e3d70a68fba] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003836253s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-819803 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.99s)

                                                
                                    
TestAddons/parallel/Yakd (10.85s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-r5j5v" [b8552125-2510-4ee4-97a5-ae4c9350bcb9] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.005011887s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-819803 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-819803 addons disable yakd --alsologtostderr -v=1: (5.839930686s)
--- PASS: TestAddons/parallel/Yakd (10.85s)

                                                
                                    
TestCertOptions (44.29s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-602485 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-602485 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (42.816461221s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-602485 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-602485 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-602485 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-602485" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-602485
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-602485: (1.0106642s)
--- PASS: TestCertOptions (44.29s)

                                                
                                    
TestCertExpiration (265.12s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-910187 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
I1030 19:26:46.458189  389144 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1030 19:26:50.312231  389144 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1030 19:26:50.346535  389144 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W1030 19:26:50.346571  389144 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W1030 19:26:50.346639  389144 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1030 19:26:50.346671  389144 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1679328959/002/docker-machine-driver-kvm2
I1030 19:26:50.669105  389144 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1679328959/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5308040 0x5308040 0x5308040 0x5308040 0x5308040 0x5308040 0x5308040] Decompressors:map[bz2:0xc00060bba0 gz:0xc00060bba8 tar:0xc00060bb20 tar.bz2:0xc00060bb60 tar.gz:0xc00060bb70 tar.xz:0xc00060bb80 tar.zst:0xc00060bb90 tbz2:0xc00060bb60 tgz:0xc00060bb70 txz:0xc00060bb80 tzst:0xc00060bb90 xz:0xc00060bbb0 zip:0xc00060bbc0 zst:0xc00060bbb8] Getters:map[file:0xc001f90470 http:0xc001e82280 https:0xc001e822d0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1030 19:26:50.669159  389144 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1679328959/002/docker-machine-driver-kvm2
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-910187 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (53.390766946s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-910187 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-910187 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (30.718575662s)
helpers_test.go:175: Cleaning up "cert-expiration-910187" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-910187
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-910187: (1.00583101s)
--- PASS: TestCertExpiration (265.12s)

                                                
                                    
TestForceSystemdFlag (60.1s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-697055 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-697055 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (58.867759408s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-697055 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-697055" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-697055
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-697055: (1.043488494s)
--- PASS: TestForceSystemdFlag (60.10s)

                                                
                                    
TestForceSystemdEnv (96.02s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-736675 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-736675 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m35.004396819s)
helpers_test.go:175: Cleaning up "force-systemd-env-736675" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-736675
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-736675: (1.012476112s)
--- PASS: TestForceSystemdEnv (96.02s)

                                                
                                    
TestKVMDriverInstallOrUpdate (11.79s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I1030 19:26:41.881625  389144 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1030 19:26:41.881791  389144 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W1030 19:26:41.912170  389144 install.go:62] docker-machine-driver-kvm2: exit status 1
W1030 19:26:41.912534  389144 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1030 19:26:41.912595  389144 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1679328959/001/docker-machine-driver-kvm2
I1030 19:26:42.482232  389144 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1679328959/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5308040 0x5308040 0x5308040 0x5308040 0x5308040 0x5308040 0x5308040] Decompressors:map[bz2:0xc00060bba0 gz:0xc00060bba8 tar:0xc00060bb20 tar.bz2:0xc00060bb60 tar.gz:0xc00060bb70 tar.xz:0xc00060bb80 tar.zst:0xc00060bb90 tbz2:0xc00060bb60 tgz:0xc00060bb70 txz:0xc00060bb80 tzst:0xc00060bb90 xz:0xc00060bbb0 zip:0xc00060bbc0 zst:0xc00060bbb8] Getters:map[file:0xc000b02810 http:0xc001e82cd0 https:0xc001e82d20] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1030 19:26:42.482292  389144 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1679328959/001/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (11.79s)
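
Note: the install.go/download.go lines above show the driver updater trying the architecture-specific release asset first and, when its checksum file comes back 404, retrying the common asset name. A minimal Go sketch of that fallback pattern, not minikube's actual implementation (the fetch helper and hard-coded URLs are illustrative):

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// fetch downloads url into dest and fails on any non-200 response.
func fetch(url, dest string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("bad response code: %d", resp.StatusCode)
	}
	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, resp.Body)
	return err
}

func main() {
	base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/"
	dest := "/tmp/docker-machine-driver-kvm2"
	// Prefer the arch-specific asset; fall back to the common name if it fails.
	if err := fetch(base+"docker-machine-driver-kvm2-amd64", dest); err != nil {
		fmt.Println("arch-specific download failed, trying the common version:", err)
		if err := fetch(base+"docker-machine-driver-kvm2", dest); err != nil {
			fmt.Println("download failed:", err)
			os.Exit(1)
		}
	}
	fmt.Println("driver saved to", dest)
}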

                                                
                                    
TestErrorSpam/setup (41.49s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-398116 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-398116 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-398116 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-398116 --driver=kvm2  --container-runtime=crio: (41.485805625s)
--- PASS: TestErrorSpam/setup (41.49s)

                                                
                                    
TestErrorSpam/start (0.36s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-398116 --log_dir /tmp/nospam-398116 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-398116 --log_dir /tmp/nospam-398116 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-398116 --log_dir /tmp/nospam-398116 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

                                                
                                    
TestErrorSpam/status (0.78s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-398116 --log_dir /tmp/nospam-398116 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-398116 --log_dir /tmp/nospam-398116 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-398116 --log_dir /tmp/nospam-398116 status
--- PASS: TestErrorSpam/status (0.78s)

                                                
                                    
TestErrorSpam/pause (1.58s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-398116 --log_dir /tmp/nospam-398116 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-398116 --log_dir /tmp/nospam-398116 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-398116 --log_dir /tmp/nospam-398116 pause
--- PASS: TestErrorSpam/pause (1.58s)

                                                
                                    
TestErrorSpam/unpause (1.74s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-398116 --log_dir /tmp/nospam-398116 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-398116 --log_dir /tmp/nospam-398116 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-398116 --log_dir /tmp/nospam-398116 unpause
--- PASS: TestErrorSpam/unpause (1.74s)

                                                
                                    
TestErrorSpam/stop (5.5s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-398116 --log_dir /tmp/nospam-398116 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-398116 --log_dir /tmp/nospam-398116 stop: (2.317597424s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-398116 --log_dir /tmp/nospam-398116 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-398116 --log_dir /tmp/nospam-398116 stop: (1.79809145s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-398116 --log_dir /tmp/nospam-398116 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-398116 --log_dir /tmp/nospam-398116 stop: (1.388239624s)
--- PASS: TestErrorSpam/stop (5.50s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19883-381834/.minikube/files/etc/test/nested/copy/389144/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (79.39s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-683899 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1030 18:35:18.713642  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.crt: no such file or directory" logger="UnhandledError"
E1030 18:35:18.720059  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.crt: no such file or directory" logger="UnhandledError"
E1030 18:35:18.731427  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.crt: no such file or directory" logger="UnhandledError"
E1030 18:35:18.752821  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.crt: no such file or directory" logger="UnhandledError"
E1030 18:35:18.794288  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.crt: no such file or directory" logger="UnhandledError"
E1030 18:35:18.875670  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.crt: no such file or directory" logger="UnhandledError"
E1030 18:35:19.037190  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.crt: no such file or directory" logger="UnhandledError"
E1030 18:35:19.358951  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.crt: no such file or directory" logger="UnhandledError"
E1030 18:35:20.001051  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.crt: no such file or directory" logger="UnhandledError"
E1030 18:35:21.282774  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.crt: no such file or directory" logger="UnhandledError"
E1030 18:35:23.844589  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.crt: no such file or directory" logger="UnhandledError"
E1030 18:35:28.965987  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.crt: no such file or directory" logger="UnhandledError"
E1030 18:35:39.208278  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.crt: no such file or directory" logger="UnhandledError"
E1030 18:35:59.690113  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-683899 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m19.385390076s)
--- PASS: TestFunctional/serial/StartWithProxy (79.39s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (53.67s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1030 18:36:31.375165  389144 config.go:182] Loaded profile config "functional-683899": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-683899 --alsologtostderr -v=8
E1030 18:36:40.652325  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-683899 --alsologtostderr -v=8: (53.671831748s)
functional_test.go:663: soft start took 53.672600867s for "functional-683899" cluster.
I1030 18:37:25.047425  389144 config.go:182] Loaded profile config "functional-683899": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (53.67s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-683899 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.52s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-683899 cache add registry.k8s.io/pause:3.1: (1.156075221s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-683899 cache add registry.k8s.io/pause:3.3: (1.2525066s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-683899 cache add registry.k8s.io/pause:latest: (1.11344219s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.52s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.87s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-683899 /tmp/TestFunctionalserialCacheCmdcacheadd_local2898917088/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 cache add minikube-local-cache-test:functional-683899
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-683899 cache add minikube-local-cache-test:functional-683899: (2.539829838s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 cache delete minikube-local-cache-test:functional-683899
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-683899
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.87s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-683899 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (211.164784ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)
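
Note: this subtest removes the pause image inside the node, confirms crictl inspecti fails, runs minikube cache reload, and confirms the image is present again. A rough sketch of driving the same sequence from Go with os/exec; the profile name is taken from this run and the error handling is simplified:

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and returns its combined stdout/stderr.
func run(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	return string(out), err
}

func main() {
	profile := "functional-683899" // profile name taken from this test run
	img := "registry.k8s.io/pause:latest"

	// Remove the image inside the node; inspecti should then fail.
	run("minikube", "-p", profile, "ssh", "sudo crictl rmi "+img)
	if _, err := run("minikube", "-p", profile, "ssh", "sudo crictl inspecti "+img); err == nil {
		fmt.Println("expected inspecti to fail after rmi")
		return
	}

	// Reload cached images; inspecti should succeed again.
	if out, err := run("minikube", "-p", profile, "cache", "reload"); err != nil {
		fmt.Println("cache reload failed:", err, out)
		return
	}
	if _, err := run("minikube", "-p", profile, "ssh", "sudo crictl inspecti "+img); err != nil {
		fmt.Println("image still missing after reload:", err)
		return
	}
	fmt.Println("cache reload restored", img)
}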

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 kubectl -- --context functional-683899 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-683899 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (34.62s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-683899 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1030 18:38:02.574986  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-683899 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.617017268s)
functional_test.go:761: restart took 34.617124914s for "functional-683899" cluster.
I1030 18:38:08.493880  389144 config.go:182] Loaded profile config "functional-683899": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (34.62s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-683899 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.41s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-683899 logs: (1.411736687s)
--- PASS: TestFunctional/serial/LogsCmd (1.41s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.41s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 logs --file /tmp/TestFunctionalserialLogsFileCmd2003062473/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-683899 logs --file /tmp/TestFunctionalserialLogsFileCmd2003062473/001/logs.txt: (1.413679388s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.41s)

                                                
                                    
TestFunctional/serial/InvalidService (5.34s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-683899 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-683899
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-683899: exit status 115 (280.951884ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.116:31579 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-683899 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-683899 delete -f testdata/invalidsvc.yaml: (1.861999784s)
--- PASS: TestFunctional/serial/InvalidService (5.34s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-683899 config get cpus: exit status 14 (58.799336ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-683899 config get cpus: exit status 14 (55.601776ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.37s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (26.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-683899 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-683899 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 399514: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (26.47s)

                                                
                                    
TestFunctional/parallel/DryRun (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-683899 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-683899 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (141.673956ms)

                                                
                                                
-- stdout --
	* [functional-683899] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19883
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19883-381834/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19883-381834/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1030 18:38:37.695909  399254 out.go:345] Setting OutFile to fd 1 ...
	I1030 18:38:37.696147  399254 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 18:38:37.696156  399254 out.go:358] Setting ErrFile to fd 2...
	I1030 18:38:37.696161  399254 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 18:38:37.696335  399254 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19883-381834/.minikube/bin
	I1030 18:38:37.696868  399254 out.go:352] Setting JSON to false
	I1030 18:38:37.697907  399254 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":8461,"bootTime":1730305057,"procs":272,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1030 18:38:37.698015  399254 start.go:139] virtualization: kvm guest
	I1030 18:38:37.700138  399254 out.go:177] * [functional-683899] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1030 18:38:37.701637  399254 notify.go:220] Checking for updates...
	I1030 18:38:37.701661  399254 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 18:38:37.702955  399254 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 18:38:37.704337  399254 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 18:38:37.705552  399254 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 18:38:37.706772  399254 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1030 18:38:37.708035  399254 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 18:38:37.709573  399254 config.go:182] Loaded profile config "functional-683899": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:38:37.709933  399254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:38:37.710000  399254 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:38:37.725159  399254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39073
	I1030 18:38:37.725589  399254 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:38:37.726063  399254 main.go:141] libmachine: Using API Version  1
	I1030 18:38:37.726085  399254 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:38:37.726377  399254 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:38:37.726563  399254 main.go:141] libmachine: (functional-683899) Calling .DriverName
	I1030 18:38:37.726778  399254 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 18:38:37.727062  399254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:38:37.727100  399254 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:38:37.742435  399254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33703
	I1030 18:38:37.742851  399254 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:38:37.743384  399254 main.go:141] libmachine: Using API Version  1
	I1030 18:38:37.743404  399254 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:38:37.743782  399254 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:38:37.743971  399254 main.go:141] libmachine: (functional-683899) Calling .DriverName
	I1030 18:38:37.778791  399254 out.go:177] * Using the kvm2 driver based on existing profile
	I1030 18:38:37.780340  399254 start.go:297] selected driver: kvm2
	I1030 18:38:37.780362  399254 start.go:901] validating driver "kvm2" against &{Name:functional-683899 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:functional-683899 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 18:38:37.780495  399254 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 18:38:37.782541  399254 out.go:201] 
	W1030 18:38:37.783763  399254 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1030 18:38:37.784980  399254 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-683899 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-683899 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-683899 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (149.409ms)

                                                
                                                
-- stdout --
	* [functional-683899] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19883
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19883-381834/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19883-381834/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1030 18:38:17.650345  397444 out.go:345] Setting OutFile to fd 1 ...
	I1030 18:38:17.650454  397444 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 18:38:17.650464  397444 out.go:358] Setting ErrFile to fd 2...
	I1030 18:38:17.650468  397444 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 18:38:17.650767  397444 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19883-381834/.minikube/bin
	I1030 18:38:17.651304  397444 out.go:352] Setting JSON to false
	I1030 18:38:17.652242  397444 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":8441,"bootTime":1730305057,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1030 18:38:17.652350  397444 start.go:139] virtualization: kvm guest
	I1030 18:38:17.654915  397444 out.go:177] * [functional-683899] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I1030 18:38:17.656686  397444 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 18:38:17.656729  397444 notify.go:220] Checking for updates...
	I1030 18:38:17.659766  397444 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 18:38:17.661155  397444 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 18:38:17.662580  397444 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 18:38:17.663852  397444 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1030 18:38:17.665126  397444 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 18:38:17.666876  397444 config.go:182] Loaded profile config "functional-683899": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 18:38:17.667520  397444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:38:17.667609  397444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:38:17.682689  397444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36471
	I1030 18:38:17.683171  397444 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:38:17.683698  397444 main.go:141] libmachine: Using API Version  1
	I1030 18:38:17.683719  397444 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:38:17.684045  397444 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:38:17.684237  397444 main.go:141] libmachine: (functional-683899) Calling .DriverName
	I1030 18:38:17.684507  397444 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 18:38:17.684797  397444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 18:38:17.684837  397444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 18:38:17.699344  397444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37951
	I1030 18:38:17.699713  397444 main.go:141] libmachine: () Calling .GetVersion
	I1030 18:38:17.700164  397444 main.go:141] libmachine: Using API Version  1
	I1030 18:38:17.700186  397444 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 18:38:17.700499  397444 main.go:141] libmachine: () Calling .GetMachineName
	I1030 18:38:17.700684  397444 main.go:141] libmachine: (functional-683899) Calling .DriverName
	I1030 18:38:17.735468  397444 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1030 18:38:17.736929  397444 start.go:297] selected driver: kvm2
	I1030 18:38:17.736961  397444 start.go:901] validating driver "kvm2" against &{Name:functional-683899 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:functional-683899 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 18:38:17.737068  397444 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 18:38:17.739288  397444 out.go:201] 
	W1030 18:38:17.740833  397444 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1030 18:38:17.742228  397444 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.82s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (12.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-683899 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-683899 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-5qnm6" [12604ba8-ca2e-4100-a15c-d72acc750a63] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-5qnm6" [12604ba8-ca2e-4100-a15c-d72acc750a63] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.003961857s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.116:30322
functional_test.go:1675: http://192.168.39.116:30322: success! body:
Hostname: hello-node-connect-67bdd5bbb4-5qnm6

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.116:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.116:30322
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.64s)
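
The test above creates a deployment, exposes it as a NodePort service, asks minikube for the URL, and then fetches that URL until the echoserver answers. A minimal sketch of the final step, polling the discovered endpoint; the URL is the one reported in this run and should be treated as an example:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint reported by `minikube service hello-node-connect --url` in this run.
	url := "http://192.168.39.116:30322"

	// Poll until the echoserver responds or we give up.
	for attempt := 1; attempt <= 10; attempt++ {
		resp, err := http.Get(url)
		if err != nil {
			time.Sleep(2 * time.Second)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("%s: success! body:\n%s\n", url, body)
		return
	}
	fmt.Println("service never became reachable")
}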

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (54.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [5b1e69f5-6379-44b2-9f9d-051ade8ac091] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004429945s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-683899 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-683899 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-683899 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-683899 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [eff91057-daff-480d-be1b-4091fcc3c299] Pending
helpers_test.go:344: "sp-pod" [eff91057-daff-480d-be1b-4091fcc3c299] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [eff91057-daff-480d-be1b-4091fcc3c299] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 19.005462025s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-683899 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-683899 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-683899 delete -f testdata/storage-provisioner/pod.yaml: (2.869211822s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-683899 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [99fb40c7-b5fb-4be5-9282-d07c2b10d703] Pending
helpers_test.go:344: "sp-pod" [99fb40c7-b5fb-4be5-9282-d07c2b10d703] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [99fb40c7-b5fb-4be5-9282-d07c2b10d703] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 26.0044807s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-683899 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (54.74s)
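
The sequence above verifies that data written to the PVC survives pod recreation: touch a file in the first sp-pod, delete the pod while keeping the claim, recreate the pod from the same manifest, and list the mount. A minimal sketch of that check driven through kubectl, assuming the same context and manifest paths as the log and omitting the readiness waits the harness performs between steps:

package main

import (
	"fmt"
	"os/exec"
)

// run executes a kubectl command against the functional-683899 context and
// returns its combined output; a thin helper for this sketch only.
func run(args ...string) (string, error) {
	cmd := exec.Command("kubectl", append([]string{"--context", "functional-683899"}, args...)...)
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	steps := [][]string{
		{"exec", "sp-pod", "--", "touch", "/tmp/mount/foo"},       // write through the PVC
		{"delete", "-f", "testdata/storage-provisioner/pod.yaml"}, // remove the pod, keep the claim
		{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},  // recreate the pod on the same claim
		{"exec", "sp-pod", "--", "ls", "/tmp/mount"},              // the file should still be there
	}
	for _, s := range steps {
		out, err := run(s...)
		fmt.Printf("kubectl %v -> %s", s, out)
		if err != nil {
			fmt.Println("error:", err)
			return
		}
	}
}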

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 ssh -n functional-683899 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 cp functional-683899:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2975742442/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 ssh -n functional-683899 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 ssh -n functional-683899 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.20s)
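
The cp test copies a fixture into the VM and confirms its contents by reading it back over ssh. A minimal sketch of that round trip, assuming the same profile name and the cp-test.txt fixture from the log:

package main

import (
	"fmt"
	"os/exec"
)

// minikube runs the built binary with the given args and returns its output.
func minikube(args ...string) string {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		fmt.Println("error:", err)
	}
	return string(out)
}

func main() {
	const profile = "functional-683899"
	// Copy the fixture into the VM, then read it back over ssh to verify.
	minikube("-p", profile, "cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt")
	got := minikube("-p", profile, "ssh", "-n", profile, "sudo cat /home/docker/cp-test.txt")
	fmt.Print(got)
}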

                                                
                                    
x
+
TestFunctional/parallel/MySQL (30.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-683899 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-8qjz8" [1c634ee9-3308-467b-b90a-301b37b88371] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-8qjz8" [1c634ee9-3308-467b-b90a-301b37b88371] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 25.007606516s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-683899 exec mysql-6cdb49bbb-8qjz8 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-683899 exec mysql-6cdb49bbb-8qjz8 -- mysql -ppassword -e "show databases;": exit status 1 (239.598522ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1030 18:38:56.921561  389144 retry.go:31] will retry after 1.066923837s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-683899 exec mysql-6cdb49bbb-8qjz8 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-683899 exec mysql-6cdb49bbb-8qjz8 -- mysql -ppassword -e "show databases;": exit status 1 (180.420514ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1030 18:38:58.169305  389144 retry.go:31] will retry after 1.213063589s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-683899 exec mysql-6cdb49bbb-8qjz8 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-683899 exec mysql-6cdb49bbb-8qjz8 -- mysql -ppassword -e "show databases;": exit status 1 (132.253245ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1030 18:38:59.514940  389144 retry.go:31] will retry after 1.985186695s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-683899 exec mysql-6cdb49bbb-8qjz8 -- mysql -ppassword -e "show databases;"
2024/10/30 18:39:03 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MySQL (30.12s)
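
The first exec attempts above fail while mysqld is still initializing (access denied 1045, then socket error 2002), and the harness retries with a growing delay until `show databases;` succeeds. A minimal sketch of that retry-with-backoff pattern, assuming the pod name observed in this run (it changes on every deployment):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const pod = "mysql-6cdb49bbb-8qjz8"

	delay := time.Second
	for attempt := 1; attempt <= 10; attempt++ {
		cmd := exec.Command("kubectl", "--context", "functional-683899",
			"exec", pod, "--", "mysql", "-ppassword", "-e", "show databases;")
		out, err := cmd.CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		// mysqld is often still starting up; back off and try again.
		fmt.Printf("attempt %d failed (%v), retrying after %s\n", attempt, err, delay)
		time.Sleep(delay)
		delay += delay / 2 // grow the delay roughly like the harness's retry helper
	}
	fmt.Println("mysql never became ready")
}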

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/389144/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 ssh "sudo cat /etc/test/nested/copy/389144/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/389144.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 ssh "sudo cat /etc/ssl/certs/389144.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/389144.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 ssh "sudo cat /usr/share/ca-certificates/389144.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3891442.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 ssh "sudo cat /etc/ssl/certs/3891442.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/3891442.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 ssh "sudo cat /usr/share/ca-certificates/3891442.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.41s)
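
The cert sync test confirms that the host's test certificates were pushed into the VM under /etc/ssl/certs and /usr/share/ca-certificates, including the hashed names (51391683.0, 3ec20f2e.0) that OpenSSL uses for lookups. A minimal sketch that checks those paths over `minikube ssh`, assuming the same profile:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Paths the test checks inside the VM; 51391683.0 is the OpenSSL hash
	// name for the synced CA certificate.
	paths := []string{
		"/etc/ssl/certs/389144.pem",
		"/usr/share/ca-certificates/389144.pem",
		"/etc/ssl/certs/51391683.0",
	}
	for _, p := range paths {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-683899",
			"ssh", "sudo cat "+p)
		if err := cmd.Run(); err != nil {
			fmt.Printf("%s: missing or unreadable (%v)\n", p, err)
			continue
		}
		fmt.Printf("%s: present\n", p)
	}
}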

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-683899 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
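
The kubectl call above uses a go-template that ranges over the node's labels and prints only the keys. A minimal sketch of the same range construct applied to an in-memory label map (the label values below are illustrative):

package main

import (
	"os"
	"text/template"
)

func main() {
	// Same template shape as the test: range the labels map, print each key.
	const tmpl = `{{range $k, $v := .}}{{$k}} {{end}}`
	labels := map[string]string{
		"kubernetes.io/hostname": "functional-683899",
		"kubernetes.io/os":       "linux",
	}
	t := template.Must(template.New("labels").Parse(tmpl))
	if err := t.Execute(os.Stdout, labels); err != nil {
		panic(err)
	}
}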

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-683899 ssh "sudo systemctl is-active docker": exit status 1 (214.155075ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-683899 ssh "sudo systemctl is-active containerd": exit status 1 (249.94186ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)
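
With crio as the active runtime, `systemctl is-active docker` and `systemctl is-active containerd` print "inactive" and exit with status 3, so the ssh wrapper reports a non-zero exit even though that is the result the test wants. A minimal sketch of reading that exit code, assuming systemctl is available on the machine running the sketch:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// "is-active" exits 0 when the unit is active and non-zero (commonly 3) otherwise.
	cmd := exec.Command("systemctl", "is-active", "docker")
	out, err := cmd.CombinedOutput()
	fmt.Printf("output: %s", out)

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Non-zero simply means the unit is not active, which is expected on a crio-only node.
		fmt.Println("exit code:", exitErr.ExitCode())
	} else if err == nil {
		fmt.Println("docker is active")
	} else {
		fmt.Println("failed to run systemctl:", err)
	}
}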

                                                
                                    
x
+
TestFunctional/parallel/License (0.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.87s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.60s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "313.92762ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "55.150434ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "278.505543ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "60.830813ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (15.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-683899 /tmp/TestFunctionalparallelMountCmdany-port1422226033/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1730313497748282548" to /tmp/TestFunctionalparallelMountCmdany-port1422226033/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1730313497748282548" to /tmp/TestFunctionalparallelMountCmdany-port1422226033/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1730313497748282548" to /tmp/TestFunctionalparallelMountCmdany-port1422226033/001/test-1730313497748282548
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-683899 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (204.634774ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1030 18:38:17.953315  389144 retry.go:31] will retry after 681.974679ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 30 18:38 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 30 18:38 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 30 18:38 test-1730313497748282548
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 ssh cat /mount-9p/test-1730313497748282548
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-683899 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [3bb468f9-cbb8-434a-ac34-e77d83073396] Pending
helpers_test.go:344: "busybox-mount" [3bb468f9-cbb8-434a-ac34-e77d83073396] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [3bb468f9-cbb8-434a-ac34-e77d83073396] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [3bb468f9-cbb8-434a-ac34-e77d83073396] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 13.003923321s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-683899 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-683899 /tmp/TestFunctionalparallelMountCmdany-port1422226033/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (15.92s)
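
The mount test starts `minikube mount` as a background daemon, writes files on the host side, and then retries `findmnt -T /mount-9p` over ssh until the 9p mount appears before inspecting the files from inside the guest. A minimal sketch of that polling step, assuming the same profile and mount point:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Poll until the 9p mount is visible inside the VM, as the test does
	// before listing /mount-9p.
	for attempt := 1; attempt <= 10; attempt++ {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-683899",
			"ssh", "findmnt -T /mount-9p | grep 9p")
		if out, err := cmd.CombinedOutput(); err == nil {
			fmt.Printf("mount is up:\n%s", out)
			return
		}
		time.Sleep(700 * time.Millisecond) // roughly the retry interval seen in the log
	}
	fmt.Println("/mount-9p never appeared")
}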

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-683899 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-683899
localhost/kicbase/echo-server:functional-683899
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20241007-36f62932
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-683899 image ls --format short --alsologtostderr:
I1030 18:38:39.048607  399465 out.go:345] Setting OutFile to fd 1 ...
I1030 18:38:39.048855  399465 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1030 18:38:39.048863  399465 out.go:358] Setting ErrFile to fd 2...
I1030 18:38:39.048867  399465 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1030 18:38:39.049041  399465 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19883-381834/.minikube/bin
I1030 18:38:39.049589  399465 config.go:182] Loaded profile config "functional-683899": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1030 18:38:39.049686  399465 config.go:182] Loaded profile config "functional-683899": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1030 18:38:39.050018  399465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1030 18:38:39.050089  399465 main.go:141] libmachine: Launching plugin server for driver kvm2
I1030 18:38:39.066055  399465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37195
I1030 18:38:39.066629  399465 main.go:141] libmachine: () Calling .GetVersion
I1030 18:38:39.067954  399465 main.go:141] libmachine: Using API Version  1
I1030 18:38:39.068007  399465 main.go:141] libmachine: () Calling .SetConfigRaw
I1030 18:38:39.068709  399465 main.go:141] libmachine: () Calling .GetMachineName
I1030 18:38:39.068933  399465 main.go:141] libmachine: (functional-683899) Calling .GetState
I1030 18:38:39.070753  399465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1030 18:38:39.070796  399465 main.go:141] libmachine: Launching plugin server for driver kvm2
I1030 18:38:39.086355  399465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42055
I1030 18:38:39.086815  399465 main.go:141] libmachine: () Calling .GetVersion
I1030 18:38:39.087373  399465 main.go:141] libmachine: Using API Version  1
I1030 18:38:39.087394  399465 main.go:141] libmachine: () Calling .SetConfigRaw
I1030 18:38:39.087752  399465 main.go:141] libmachine: () Calling .GetMachineName
I1030 18:38:39.087969  399465 main.go:141] libmachine: (functional-683899) Calling .DriverName
I1030 18:38:39.088206  399465 ssh_runner.go:195] Run: systemctl --version
I1030 18:38:39.088233  399465 main.go:141] libmachine: (functional-683899) Calling .GetSSHHostname
I1030 18:38:39.091172  399465 main.go:141] libmachine: (functional-683899) DBG | domain functional-683899 has defined MAC address 52:54:00:c9:d5:64 in network mk-functional-683899
I1030 18:38:39.091586  399465 main.go:141] libmachine: (functional-683899) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:d5:64", ip: ""} in network mk-functional-683899: {Iface:virbr1 ExpiryTime:2024-10-30 19:35:26 +0000 UTC Type:0 Mac:52:54:00:c9:d5:64 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:functional-683899 Clientid:01:52:54:00:c9:d5:64}
I1030 18:38:39.091616  399465 main.go:141] libmachine: (functional-683899) DBG | domain functional-683899 has defined IP address 192.168.39.116 and MAC address 52:54:00:c9:d5:64 in network mk-functional-683899
I1030 18:38:39.091801  399465 main.go:141] libmachine: (functional-683899) Calling .GetSSHPort
I1030 18:38:39.091984  399465 main.go:141] libmachine: (functional-683899) Calling .GetSSHKeyPath
I1030 18:38:39.092134  399465 main.go:141] libmachine: (functional-683899) Calling .GetSSHUsername
I1030 18:38:39.092282  399465 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/functional-683899/id_rsa Username:docker}
I1030 18:38:39.259984  399465 ssh_runner.go:195] Run: sudo crictl images --output json
I1030 18:38:39.658004  399465 main.go:141] libmachine: Making call to close driver server
I1030 18:38:39.658023  399465 main.go:141] libmachine: (functional-683899) Calling .Close
I1030 18:38:39.658370  399465 main.go:141] libmachine: Successfully made call to close driver server
I1030 18:38:39.658392  399465 main.go:141] libmachine: Making call to close connection to plugin binary
I1030 18:38:39.658402  399465 main.go:141] libmachine: Making call to close driver server
I1030 18:38:39.658411  399465 main.go:141] libmachine: (functional-683899) Calling .Close
I1030 18:38:39.658779  399465 main.go:141] libmachine: (functional-683899) DBG | Closing plugin on server side
I1030 18:38:39.658783  399465 main.go:141] libmachine: Successfully made call to close driver server
I1030 18:38:39.658800  399465 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.66s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-683899 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| docker.io/library/nginx                 | latest             | 3b25b682ea82b | 196MB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/kube-apiserver          | v1.31.2            | 9499c9960544e | 95.3MB |
| registry.k8s.io/kube-controller-manager | v1.31.2            | 0486b6c53a1b5 | 89.5MB |
| registry.k8s.io/kube-scheduler          | v1.31.2            | 847c7bc1a5418 | 68.5MB |
| docker.io/kindest/kindnetd              | v20241007-36f62932 | 3a5bc24055c9e | 95MB   |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/kicbase/echo-server           | functional-683899  | 9056ab77afb8e | 4.94MB |
| localhost/minikube-local-cache-test     | functional-683899  | f7c7e7ed48da2 | 3.33kB |
| localhost/my-image                      | functional-683899  | e5e9f5718762d | 1.47MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-proxy              | v1.31.2            | 505d571f5fd56 | 92.8MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-683899 image ls --format table --alsologtostderr:
I1030 18:38:52.256114  399710 out.go:345] Setting OutFile to fd 1 ...
I1030 18:38:52.256547  399710 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1030 18:38:52.256605  399710 out.go:358] Setting ErrFile to fd 2...
I1030 18:38:52.256622  399710 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1030 18:38:52.257056  399710 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19883-381834/.minikube/bin
I1030 18:38:52.258304  399710 config.go:182] Loaded profile config "functional-683899": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1030 18:38:52.258412  399710 config.go:182] Loaded profile config "functional-683899": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1030 18:38:52.258807  399710 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1030 18:38:52.258858  399710 main.go:141] libmachine: Launching plugin server for driver kvm2
I1030 18:38:52.274908  399710 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36333
I1030 18:38:52.275401  399710 main.go:141] libmachine: () Calling .GetVersion
I1030 18:38:52.275982  399710 main.go:141] libmachine: Using API Version  1
I1030 18:38:52.276010  399710 main.go:141] libmachine: () Calling .SetConfigRaw
I1030 18:38:52.276403  399710 main.go:141] libmachine: () Calling .GetMachineName
I1030 18:38:52.276586  399710 main.go:141] libmachine: (functional-683899) Calling .GetState
I1030 18:38:52.278409  399710 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1030 18:38:52.278448  399710 main.go:141] libmachine: Launching plugin server for driver kvm2
I1030 18:38:52.292828  399710 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44291
I1030 18:38:52.293303  399710 main.go:141] libmachine: () Calling .GetVersion
I1030 18:38:52.293905  399710 main.go:141] libmachine: Using API Version  1
I1030 18:38:52.293943  399710 main.go:141] libmachine: () Calling .SetConfigRaw
I1030 18:38:52.294338  399710 main.go:141] libmachine: () Calling .GetMachineName
I1030 18:38:52.294546  399710 main.go:141] libmachine: (functional-683899) Calling .DriverName
I1030 18:38:52.294759  399710 ssh_runner.go:195] Run: systemctl --version
I1030 18:38:52.294789  399710 main.go:141] libmachine: (functional-683899) Calling .GetSSHHostname
I1030 18:38:52.297286  399710 main.go:141] libmachine: (functional-683899) DBG | domain functional-683899 has defined MAC address 52:54:00:c9:d5:64 in network mk-functional-683899
I1030 18:38:52.297675  399710 main.go:141] libmachine: (functional-683899) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:d5:64", ip: ""} in network mk-functional-683899: {Iface:virbr1 ExpiryTime:2024-10-30 19:35:26 +0000 UTC Type:0 Mac:52:54:00:c9:d5:64 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:functional-683899 Clientid:01:52:54:00:c9:d5:64}
I1030 18:38:52.297705  399710 main.go:141] libmachine: (functional-683899) DBG | domain functional-683899 has defined IP address 192.168.39.116 and MAC address 52:54:00:c9:d5:64 in network mk-functional-683899
I1030 18:38:52.297824  399710 main.go:141] libmachine: (functional-683899) Calling .GetSSHPort
I1030 18:38:52.297992  399710 main.go:141] libmachine: (functional-683899) Calling .GetSSHKeyPath
I1030 18:38:52.298126  399710 main.go:141] libmachine: (functional-683899) Calling .GetSSHUsername
I1030 18:38:52.298246  399710 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/functional-683899/id_rsa Username:docker}
I1030 18:38:52.373388  399710 ssh_runner.go:195] Run: sudo crictl images --output json
I1030 18:38:52.413661  399710 main.go:141] libmachine: Making call to close driver server
I1030 18:38:52.413682  399710 main.go:141] libmachine: (functional-683899) Calling .Close
I1030 18:38:52.413975  399710 main.go:141] libmachine: Successfully made call to close driver server
I1030 18:38:52.413999  399710 main.go:141] libmachine: Making call to close connection to plugin binary
I1030 18:38:52.414014  399710 main.go:141] libmachine: Making call to close driver server
I1030 18:38:52.414026  399710 main.go:141] libmachine: (functional-683899) Calling .Close
I1030 18:38:52.414018  399710 main.go:141] libmachine: (functional-683899) DBG | Closing plugin on server side
I1030 18:38:52.414308  399710 main.go:141] libmachine: (functional-683899) DBG | Closing plugin on server side
I1030 18:38:52.414376  399710 main.go:141] libmachine: Successfully made call to close driver server
I1030 18:38:52.414396  399710 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-683899 image ls --format json --alsologtostderr:
[{"id":"3b25b682ea82b2db3cc4fd48db818be788ee3f902ac7378090cf2624ec2442df","repoDigests":["docker.io/library/nginx@sha256:28402db69fec7c17e179ea87882667f1e054391138f77ffaf0c3eb388efc3ffb","docker.io/library/nginx@sha256:367678a80c0be120f67f3adfccc2f408bd2c1319ed98c1975ac88e750d0efe26"],"repoTags":["docker.io/library/nginx:latest"],"size":"195818008"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-683899"],"size":"4943877"},{"id":"9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0","registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"95274464"},{"id":"847c7bc1a54
1865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282","registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"68457798"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52","repoDigests":["docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387","docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"],"repoTags":["docker.io/kindest/kindnetd:v20241007-36f62932"],"size":"94965812"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bb
c1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"f7c7e7ed48da20f91ded80a94000fdc577046f5de83d38c132f814dbffa74d44","repoDigests":["localhost/minikube-local-cache-test@sha256:5dad3e6a563302e9a3869a61e36d4adfe605ee9202a6c3f5f32366f35134003a"],"repoTags":["localhost/minikube-local-cache-test:functional-683899"],"size":"3330"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","
repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989
956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c","registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.2"],"size":"89474374"},{"id":"505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38","repoDigests":["registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b","registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"92783513"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122
965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"ba13b058edb41df3fd6032ab45c7d2322d5718c16998b1ee20ef415fc9a86be6","repoDigests":["docker.io/library/21472b3f53d4eadaeeb4d30911c22146eb85da2847fc0a2b432f9da76c20b4b3-tmp@sha256:f543fead81b8c9a7401e812aaaf71d42910ee56c75e432751ea466ad65f224f3"],"repoTags":[],"size":"1466018"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","g
cr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"e5e9f5718762d2ae2ba58e1e7cb89b889468c6cf97a615ec7a92c33840d86e26","repoDigests":["localhost/my-image@sha256:160c5ffb66c1e42d218c28313dc22510aef5e3dc6df836e34701bfdea0a29958"],"repoTags":["localhost/my-image:functional-683899"],"size":"1468600"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-683899 image ls --format json --alsologtostderr:
I1030 18:38:51.908519  399686 out.go:345] Setting OutFile to fd 1 ...
I1030 18:38:51.908667  399686 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1030 18:38:51.908677  399686 out.go:358] Setting ErrFile to fd 2...
I1030 18:38:51.908682  399686 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1030 18:38:51.908909  399686 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19883-381834/.minikube/bin
I1030 18:38:51.909558  399686 config.go:182] Loaded profile config "functional-683899": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1030 18:38:51.909680  399686 config.go:182] Loaded profile config "functional-683899": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1030 18:38:51.910072  399686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1030 18:38:51.910133  399686 main.go:141] libmachine: Launching plugin server for driver kvm2
I1030 18:38:51.924871  399686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45959
I1030 18:38:51.925451  399686 main.go:141] libmachine: () Calling .GetVersion
I1030 18:38:51.926081  399686 main.go:141] libmachine: Using API Version  1
I1030 18:38:51.926117  399686 main.go:141] libmachine: () Calling .SetConfigRaw
I1030 18:38:51.926453  399686 main.go:141] libmachine: () Calling .GetMachineName
I1030 18:38:51.926678  399686 main.go:141] libmachine: (functional-683899) Calling .GetState
I1030 18:38:51.928536  399686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1030 18:38:51.928578  399686 main.go:141] libmachine: Launching plugin server for driver kvm2
I1030 18:38:51.942729  399686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41477
I1030 18:38:51.943256  399686 main.go:141] libmachine: () Calling .GetVersion
I1030 18:38:51.943808  399686 main.go:141] libmachine: Using API Version  1
I1030 18:38:51.943833  399686 main.go:141] libmachine: () Calling .SetConfigRaw
I1030 18:38:51.944153  399686 main.go:141] libmachine: () Calling .GetMachineName
I1030 18:38:51.944358  399686 main.go:141] libmachine: (functional-683899) Calling .DriverName
I1030 18:38:51.944559  399686 ssh_runner.go:195] Run: systemctl --version
I1030 18:38:51.944592  399686 main.go:141] libmachine: (functional-683899) Calling .GetSSHHostname
I1030 18:38:51.947436  399686 main.go:141] libmachine: (functional-683899) DBG | domain functional-683899 has defined MAC address 52:54:00:c9:d5:64 in network mk-functional-683899
I1030 18:38:51.947835  399686 main.go:141] libmachine: (functional-683899) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:d5:64", ip: ""} in network mk-functional-683899: {Iface:virbr1 ExpiryTime:2024-10-30 19:35:26 +0000 UTC Type:0 Mac:52:54:00:c9:d5:64 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:functional-683899 Clientid:01:52:54:00:c9:d5:64}
I1030 18:38:51.947862  399686 main.go:141] libmachine: (functional-683899) DBG | domain functional-683899 has defined IP address 192.168.39.116 and MAC address 52:54:00:c9:d5:64 in network mk-functional-683899
I1030 18:38:51.948032  399686 main.go:141] libmachine: (functional-683899) Calling .GetSSHPort
I1030 18:38:51.948207  399686 main.go:141] libmachine: (functional-683899) Calling .GetSSHKeyPath
I1030 18:38:51.948375  399686 main.go:141] libmachine: (functional-683899) Calling .GetSSHUsername
I1030 18:38:51.948476  399686 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/functional-683899/id_rsa Username:docker}
I1030 18:38:52.029011  399686 ssh_runner.go:195] Run: sudo crictl images --output json
I1030 18:38:52.068533  399686 main.go:141] libmachine: Making call to close driver server
I1030 18:38:52.068546  399686 main.go:141] libmachine: (functional-683899) Calling .Close
I1030 18:38:52.068846  399686 main.go:141] libmachine: Successfully made call to close driver server
I1030 18:38:52.068856  399686 main.go:141] libmachine: (functional-683899) DBG | Closing plugin on server side
I1030 18:38:52.068869  399686 main.go:141] libmachine: Making call to close connection to plugin binary
I1030 18:38:52.068896  399686 main.go:141] libmachine: Making call to close driver server
I1030 18:38:52.068906  399686 main.go:141] libmachine: (functional-683899) Calling .Close
I1030 18:38:52.069130  399686 main.go:141] libmachine: Successfully made call to close driver server
I1030 18:38:52.069146  399686 main.go:141] libmachine: Making call to close connection to plugin binary
I1030 18:38:52.069204  399686 main.go:141] libmachine: (functional-683899) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.35s)
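
As the stderr above shows, `image ls --format json` ultimately shells out to `sudo crictl images --output json`; each entry carries an id, repoDigests, repoTags, and a size in bytes encoded as a string. A minimal sketch that parses one such entry into a Go struct (the sample is trimmed from this run's output):

package main

import (
	"encoding/json"
	"fmt"
)

// image mirrors the fields visible in the JSON listing above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, encoded as a string
}

func main() {
	// One entry trimmed from the run's output.
	data := []byte(`[{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30",
	  "repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],
	  "repoTags":["localhost/kicbase/echo-server:functional-683899"],"size":"4943877"}]`)

	var images []image
	if err := json.Unmarshal(data, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%s  %s  %s bytes\n", img.RepoTags[0], img.ID[:13], img.Size)
	}
}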

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-683899 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0
- registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "95274464"
- id: 505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38
repoDigests:
- registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b
- registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "92783513"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282
- registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "68457798"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: f7c7e7ed48da20f91ded80a94000fdc577046f5de83d38c132f814dbffa74d44
repoDigests:
- localhost/minikube-local-cache-test@sha256:5dad3e6a563302e9a3869a61e36d4adfe605ee9202a6c3f5f32366f35134003a
repoTags:
- localhost/minikube-local-cache-test:functional-683899
size: "3330"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c
- registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "89474374"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52
repoDigests:
- docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387
- docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7
repoTags:
- docker.io/kindest/kindnetd:v20241007-36f62932
size: "94965812"
- id: 3b25b682ea82b2db3cc4fd48db818be788ee3f902ac7378090cf2624ec2442df
repoDigests:
- docker.io/library/nginx@sha256:28402db69fec7c17e179ea87882667f1e054391138f77ffaf0c3eb388efc3ffb
- docker.io/library/nginx@sha256:367678a80c0be120f67f3adfccc2f408bd2c1319ed98c1975ac88e750d0efe26
repoTags:
- docker.io/library/nginx:latest
size: "195818008"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-683899
size: "4943877"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-683899 image ls --format yaml --alsologtostderr:
I1030 18:38:39.717803  399489 out.go:345] Setting OutFile to fd 1 ...
I1030 18:38:39.717923  399489 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1030 18:38:39.717935  399489 out.go:358] Setting ErrFile to fd 2...
I1030 18:38:39.717942  399489 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1030 18:38:39.718125  399489 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19883-381834/.minikube/bin
I1030 18:38:39.718742  399489 config.go:182] Loaded profile config "functional-683899": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1030 18:38:39.718860  399489 config.go:182] Loaded profile config "functional-683899": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1030 18:38:39.719267  399489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1030 18:38:39.719323  399489 main.go:141] libmachine: Launching plugin server for driver kvm2
I1030 18:38:39.736000  399489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40371
I1030 18:38:39.736640  399489 main.go:141] libmachine: () Calling .GetVersion
I1030 18:38:39.737341  399489 main.go:141] libmachine: Using API Version  1
I1030 18:38:39.737369  399489 main.go:141] libmachine: () Calling .SetConfigRaw
I1030 18:38:39.737820  399489 main.go:141] libmachine: () Calling .GetMachineName
I1030 18:38:39.738029  399489 main.go:141] libmachine: (functional-683899) Calling .GetState
I1030 18:38:39.740105  399489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1030 18:38:39.740155  399489 main.go:141] libmachine: Launching plugin server for driver kvm2
I1030 18:38:39.755786  399489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42687
I1030 18:38:39.756254  399489 main.go:141] libmachine: () Calling .GetVersion
I1030 18:38:39.756807  399489 main.go:141] libmachine: Using API Version  1
I1030 18:38:39.756837  399489 main.go:141] libmachine: () Calling .SetConfigRaw
I1030 18:38:39.757181  399489 main.go:141] libmachine: () Calling .GetMachineName
I1030 18:38:39.757372  399489 main.go:141] libmachine: (functional-683899) Calling .DriverName
I1030 18:38:39.757590  399489 ssh_runner.go:195] Run: systemctl --version
I1030 18:38:39.757622  399489 main.go:141] libmachine: (functional-683899) Calling .GetSSHHostname
I1030 18:38:39.760660  399489 main.go:141] libmachine: (functional-683899) DBG | domain functional-683899 has defined MAC address 52:54:00:c9:d5:64 in network mk-functional-683899
I1030 18:38:39.761102  399489 main.go:141] libmachine: (functional-683899) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:d5:64", ip: ""} in network mk-functional-683899: {Iface:virbr1 ExpiryTime:2024-10-30 19:35:26 +0000 UTC Type:0 Mac:52:54:00:c9:d5:64 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:functional-683899 Clientid:01:52:54:00:c9:d5:64}
I1030 18:38:39.761167  399489 main.go:141] libmachine: (functional-683899) DBG | domain functional-683899 has defined IP address 192.168.39.116 and MAC address 52:54:00:c9:d5:64 in network mk-functional-683899
I1030 18:38:39.761195  399489 main.go:141] libmachine: (functional-683899) Calling .GetSSHPort
I1030 18:38:39.761368  399489 main.go:141] libmachine: (functional-683899) Calling .GetSSHKeyPath
I1030 18:38:39.761510  399489 main.go:141] libmachine: (functional-683899) Calling .GetSSHUsername
I1030 18:38:39.761646  399489 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/functional-683899/id_rsa Username:docker}
I1030 18:38:39.945925  399489 ssh_runner.go:195] Run: sudo crictl images --output json
I1030 18:38:40.302346  399489 main.go:141] libmachine: Making call to close driver server
I1030 18:38:40.302364  399489 main.go:141] libmachine: (functional-683899) Calling .Close
I1030 18:38:40.302679  399489 main.go:141] libmachine: (functional-683899) DBG | Closing plugin on server side
I1030 18:38:40.302695  399489 main.go:141] libmachine: Successfully made call to close driver server
I1030 18:38:40.302711  399489 main.go:141] libmachine: Making call to close connection to plugin binary
I1030 18:38:40.302720  399489 main.go:141] libmachine: Making call to close driver server
I1030 18:38:40.302730  399489 main.go:141] libmachine: (functional-683899) Calling .Close
I1030 18:38:40.302997  399489 main.go:141] libmachine: Successfully made call to close driver server
I1030 18:38:40.303017  399489 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.64s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (11.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-683899 ssh pgrep buildkitd: exit status 1 (254.260631ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 image build -t localhost/my-image:functional-683899 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-683899 image build -t localhost/my-image:functional-683899 testdata/build --alsologtostderr: (11.039611107s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-683899 image build -t localhost/my-image:functional-683899 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> ba13b058edb
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-683899
--> e5e9f571876
Successfully tagged localhost/my-image:functional-683899
e5e9f5718762d2ae2ba58e1e7cb89b889468c6cf97a615ec7a92c33840d86e26
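For reference, the three STEP lines above imply that the testdata/build context used by this test contains a content.txt file plus a Containerfile/Dockerfile roughly like the following. This is a sketch reconstructed only from the build output shown here; the actual contents of testdata/build are not included in this report:

	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /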
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-683899 image build -t localhost/my-image:functional-683899 testdata/build --alsologtostderr:
I1030 18:38:40.610600  399568 out.go:345] Setting OutFile to fd 1 ...
I1030 18:38:40.610858  399568 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1030 18:38:40.610868  399568 out.go:358] Setting ErrFile to fd 2...
I1030 18:38:40.610874  399568 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1030 18:38:40.611081  399568 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19883-381834/.minikube/bin
I1030 18:38:40.611693  399568 config.go:182] Loaded profile config "functional-683899": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1030 18:38:40.612434  399568 config.go:182] Loaded profile config "functional-683899": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1030 18:38:40.613070  399568 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1030 18:38:40.613129  399568 main.go:141] libmachine: Launching plugin server for driver kvm2
I1030 18:38:40.629353  399568 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43177
I1030 18:38:40.629895  399568 main.go:141] libmachine: () Calling .GetVersion
I1030 18:38:40.630604  399568 main.go:141] libmachine: Using API Version  1
I1030 18:38:40.630633  399568 main.go:141] libmachine: () Calling .SetConfigRaw
I1030 18:38:40.631007  399568 main.go:141] libmachine: () Calling .GetMachineName
I1030 18:38:40.631227  399568 main.go:141] libmachine: (functional-683899) Calling .GetState
I1030 18:38:40.633095  399568 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1030 18:38:40.633133  399568 main.go:141] libmachine: Launching plugin server for driver kvm2
I1030 18:38:40.648526  399568 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44645
I1030 18:38:40.648961  399568 main.go:141] libmachine: () Calling .GetVersion
I1030 18:38:40.649574  399568 main.go:141] libmachine: Using API Version  1
I1030 18:38:40.649597  399568 main.go:141] libmachine: () Calling .SetConfigRaw
I1030 18:38:40.649926  399568 main.go:141] libmachine: () Calling .GetMachineName
I1030 18:38:40.650100  399568 main.go:141] libmachine: (functional-683899) Calling .DriverName
I1030 18:38:40.650340  399568 ssh_runner.go:195] Run: systemctl --version
I1030 18:38:40.650369  399568 main.go:141] libmachine: (functional-683899) Calling .GetSSHHostname
I1030 18:38:40.653226  399568 main.go:141] libmachine: (functional-683899) DBG | domain functional-683899 has defined MAC address 52:54:00:c9:d5:64 in network mk-functional-683899
I1030 18:38:40.653693  399568 main.go:141] libmachine: (functional-683899) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:d5:64", ip: ""} in network mk-functional-683899: {Iface:virbr1 ExpiryTime:2024-10-30 19:35:26 +0000 UTC Type:0 Mac:52:54:00:c9:d5:64 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:functional-683899 Clientid:01:52:54:00:c9:d5:64}
I1030 18:38:40.653712  399568 main.go:141] libmachine: (functional-683899) DBG | domain functional-683899 has defined IP address 192.168.39.116 and MAC address 52:54:00:c9:d5:64 in network mk-functional-683899
I1030 18:38:40.653886  399568 main.go:141] libmachine: (functional-683899) Calling .GetSSHPort
I1030 18:38:40.654058  399568 main.go:141] libmachine: (functional-683899) Calling .GetSSHKeyPath
I1030 18:38:40.654191  399568 main.go:141] libmachine: (functional-683899) Calling .GetSSHUsername
I1030 18:38:40.654335  399568 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/functional-683899/id_rsa Username:docker}
I1030 18:38:40.783406  399568 build_images.go:161] Building image from path: /tmp/build.3743764676.tar
I1030 18:38:40.783489  399568 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1030 18:38:40.837999  399568 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3743764676.tar
I1030 18:38:40.858389  399568 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3743764676.tar: stat -c "%s %y" /var/lib/minikube/build/build.3743764676.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3743764676.tar': No such file or directory
I1030 18:38:40.858443  399568 ssh_runner.go:362] scp /tmp/build.3743764676.tar --> /var/lib/minikube/build/build.3743764676.tar (3072 bytes)
I1030 18:38:40.903409  399568 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3743764676
I1030 18:38:40.926043  399568 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3743764676 -xf /var/lib/minikube/build/build.3743764676.tar
I1030 18:38:40.955744  399568 crio.go:315] Building image: /var/lib/minikube/build/build.3743764676
I1030 18:38:40.955829  399568 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-683899 /var/lib/minikube/build/build.3743764676 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1030 18:38:51.542382  399568 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-683899 /var/lib/minikube/build/build.3743764676 --cgroup-manager=cgroupfs: (10.58652015s)
I1030 18:38:51.542508  399568 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3743764676
I1030 18:38:51.575419  399568 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3743764676.tar
I1030 18:38:51.595329  399568 build_images.go:217] Built localhost/my-image:functional-683899 from /tmp/build.3743764676.tar
I1030 18:38:51.595376  399568 build_images.go:133] succeeded building to: functional-683899
I1030 18:38:51.595383  399568 build_images.go:134] failed building to: 
I1030 18:38:51.595429  399568 main.go:141] libmachine: Making call to close driver server
I1030 18:38:51.595445  399568 main.go:141] libmachine: (functional-683899) Calling .Close
I1030 18:38:51.595781  399568 main.go:141] libmachine: Successfully made call to close driver server
I1030 18:38:51.595802  399568 main.go:141] libmachine: Making call to close connection to plugin binary
I1030 18:38:51.595811  399568 main.go:141] libmachine: Making call to close driver server
I1030 18:38:51.595818  399568 main.go:141] libmachine: (functional-683899) Calling .Close
I1030 18:38:51.596947  399568 main.go:141] libmachine: (functional-683899) DBG | Closing plugin on server side
I1030 18:38:51.596953  399568 main.go:141] libmachine: Successfully made call to close driver server
I1030 18:38:51.596969  399568 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (11.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (3.24s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (3.216762265s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-683899
--- PASS: TestFunctional/parallel/ImageCommands/Setup (3.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 image load --daemon kicbase/echo-server:functional-683899 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-683899 image load --daemon kicbase/echo-server:functional-683899 --alsologtostderr: (1.193936314s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.42s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 image load --daemon kicbase/echo-server:functional-683899 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:235: (dbg) Done: docker pull kicbase/echo-server:latest: (1.325614157s)
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-683899
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 image load --daemon kicbase/echo-server:functional-683899 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 image save kicbase/echo-server:functional-683899 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.62s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 image rm kicbase/echo-server:functional-683899 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.09s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-683899
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 image save --daemon kicbase/echo-server:functional-683899 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-683899
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (6.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-683899 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-683899 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-clchg" [aab112f1-4657-4629-9b09-606d9f4f9f27] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-clchg" [aab112f1-4657-4629-9b09-606d9f4f9f27] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.005579628s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.21s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.12s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-683899 /tmp/TestFunctionalparallelMountCmdspecific-port3058775510/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-683899 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (253.515223ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1030 18:38:33.923403  389144 retry.go:31] will retry after 643.714875ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-683899 /tmp/TestFunctionalparallelMountCmdspecific-port3058775510/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-683899 ssh "sudo umount -f /mount-9p": exit status 1 (199.707075ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-683899 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-683899 /tmp/TestFunctionalparallelMountCmdspecific-port3058775510/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.12s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.36s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-683899 /tmp/TestFunctionalparallelMountCmdVerifyCleanup873261229/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-683899 /tmp/TestFunctionalparallelMountCmdVerifyCleanup873261229/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-683899 /tmp/TestFunctionalparallelMountCmdVerifyCleanup873261229/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-683899 ssh "findmnt -T" /mount1: exit status 1 (203.840238ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1030 18:38:35.997235  389144 retry.go:31] will retry after 519.839182ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-683899 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-683899 /tmp/TestFunctionalparallelMountCmdVerifyCleanup873261229/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-683899 /tmp/TestFunctionalparallelMountCmdVerifyCleanup873261229/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-683899 /tmp/TestFunctionalparallelMountCmdVerifyCleanup873261229/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.36s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.45s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 service list -o json
functional_test.go:1494: Took "421.628798ms" to run "out/minikube-linux-amd64 -p functional-683899 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.42s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.116:30761
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.30s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.30s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-683899 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.116:30761
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.29s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-683899
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-683899
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-683899
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (199.56s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-174833 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1030 18:40:18.709528  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.crt: no such file or directory" logger="UnhandledError"
E1030 18:40:46.418692  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-174833 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m18.899835885s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (199.56s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (11.01s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174833 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174833 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-174833 -- rollout status deployment/busybox: (8.818592402s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174833 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174833 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174833 -- exec busybox-7dff88458-mm586 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174833 -- exec busybox-7dff88458-rzbbm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174833 -- exec busybox-7dff88458-v6kn9 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174833 -- exec busybox-7dff88458-mm586 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174833 -- exec busybox-7dff88458-rzbbm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174833 -- exec busybox-7dff88458-v6kn9 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174833 -- exec busybox-7dff88458-mm586 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174833 -- exec busybox-7dff88458-rzbbm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174833 -- exec busybox-7dff88458-v6kn9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (11.01s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.22s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174833 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174833 -- exec busybox-7dff88458-mm586 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174833 -- exec busybox-7dff88458-mm586 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174833 -- exec busybox-7dff88458-rzbbm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174833 -- exec busybox-7dff88458-rzbbm -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174833 -- exec busybox-7dff88458-v6kn9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174833 -- exec busybox-7dff88458-v6kn9 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.22s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (57.76s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-174833 -v=7 --alsologtostderr
E1030 18:43:17.243702  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/functional-683899/client.crt: no such file or directory" logger="UnhandledError"
E1030 18:43:17.250210  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/functional-683899/client.crt: no such file or directory" logger="UnhandledError"
E1030 18:43:17.261754  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/functional-683899/client.crt: no such file or directory" logger="UnhandledError"
E1030 18:43:17.283273  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/functional-683899/client.crt: no such file or directory" logger="UnhandledError"
E1030 18:43:17.324777  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/functional-683899/client.crt: no such file or directory" logger="UnhandledError"
E1030 18:43:17.406326  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/functional-683899/client.crt: no such file or directory" logger="UnhandledError"
E1030 18:43:17.567921  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/functional-683899/client.crt: no such file or directory" logger="UnhandledError"
E1030 18:43:17.889396  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/functional-683899/client.crt: no such file or directory" logger="UnhandledError"
E1030 18:43:18.531751  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/functional-683899/client.crt: no such file or directory" logger="UnhandledError"
E1030 18:43:19.813400  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/functional-683899/client.crt: no such file or directory" logger="UnhandledError"
E1030 18:43:22.374943  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/functional-683899/client.crt: no such file or directory" logger="UnhandledError"
E1030 18:43:27.496572  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/functional-683899/client.crt: no such file or directory" logger="UnhandledError"
E1030 18:43:37.738544  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/functional-683899/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-174833 -v=7 --alsologtostderr: (56.89455552s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (57.76s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-174833 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.98s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 cp testdata/cp-test.txt ha-174833:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 ssh -n ha-174833 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 cp ha-174833:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1983504479/001/cp-test_ha-174833.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 ssh -n ha-174833 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 cp ha-174833:/home/docker/cp-test.txt ha-174833-m02:/home/docker/cp-test_ha-174833_ha-174833-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 ssh -n ha-174833 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 ssh -n ha-174833-m02 "sudo cat /home/docker/cp-test_ha-174833_ha-174833-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 cp ha-174833:/home/docker/cp-test.txt ha-174833-m03:/home/docker/cp-test_ha-174833_ha-174833-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 ssh -n ha-174833 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 ssh -n ha-174833-m03 "sudo cat /home/docker/cp-test_ha-174833_ha-174833-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 cp ha-174833:/home/docker/cp-test.txt ha-174833-m04:/home/docker/cp-test_ha-174833_ha-174833-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 ssh -n ha-174833 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 ssh -n ha-174833-m04 "sudo cat /home/docker/cp-test_ha-174833_ha-174833-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 cp testdata/cp-test.txt ha-174833-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 ssh -n ha-174833-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 cp ha-174833-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1983504479/001/cp-test_ha-174833-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 ssh -n ha-174833-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 cp ha-174833-m02:/home/docker/cp-test.txt ha-174833:/home/docker/cp-test_ha-174833-m02_ha-174833.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 ssh -n ha-174833-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 ssh -n ha-174833 "sudo cat /home/docker/cp-test_ha-174833-m02_ha-174833.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 cp ha-174833-m02:/home/docker/cp-test.txt ha-174833-m03:/home/docker/cp-test_ha-174833-m02_ha-174833-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 ssh -n ha-174833-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 ssh -n ha-174833-m03 "sudo cat /home/docker/cp-test_ha-174833-m02_ha-174833-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 cp ha-174833-m02:/home/docker/cp-test.txt ha-174833-m04:/home/docker/cp-test_ha-174833-m02_ha-174833-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 ssh -n ha-174833-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 ssh -n ha-174833-m04 "sudo cat /home/docker/cp-test_ha-174833-m02_ha-174833-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 cp testdata/cp-test.txt ha-174833-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 ssh -n ha-174833-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 cp ha-174833-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1983504479/001/cp-test_ha-174833-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 ssh -n ha-174833-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 cp ha-174833-m03:/home/docker/cp-test.txt ha-174833:/home/docker/cp-test_ha-174833-m03_ha-174833.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 ssh -n ha-174833-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 ssh -n ha-174833 "sudo cat /home/docker/cp-test_ha-174833-m03_ha-174833.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 cp ha-174833-m03:/home/docker/cp-test.txt ha-174833-m02:/home/docker/cp-test_ha-174833-m03_ha-174833-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 ssh -n ha-174833-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 ssh -n ha-174833-m02 "sudo cat /home/docker/cp-test_ha-174833-m03_ha-174833-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 cp ha-174833-m03:/home/docker/cp-test.txt ha-174833-m04:/home/docker/cp-test_ha-174833-m03_ha-174833-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 ssh -n ha-174833-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 ssh -n ha-174833-m04 "sudo cat /home/docker/cp-test_ha-174833-m03_ha-174833-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 cp testdata/cp-test.txt ha-174833-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 ssh -n ha-174833-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 cp ha-174833-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1983504479/001/cp-test_ha-174833-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 ssh -n ha-174833-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 cp ha-174833-m04:/home/docker/cp-test.txt ha-174833:/home/docker/cp-test_ha-174833-m04_ha-174833.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 ssh -n ha-174833-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 ssh -n ha-174833 "sudo cat /home/docker/cp-test_ha-174833-m04_ha-174833.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 cp ha-174833-m04:/home/docker/cp-test.txt ha-174833-m02:/home/docker/cp-test_ha-174833-m04_ha-174833-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 ssh -n ha-174833-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 ssh -n ha-174833-m02 "sudo cat /home/docker/cp-test_ha-174833-m04_ha-174833-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 cp ha-174833-m04:/home/docker/cp-test.txt ha-174833-m03:/home/docker/cp-test_ha-174833-m04_ha-174833-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 ssh -n ha-174833-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 ssh -n ha-174833-m03 "sudo cat /home/docker/cp-test_ha-174833-m04_ha-174833-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.98s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (16.69s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-174833 node delete m03 -v=7 --alsologtostderr: (15.960187154s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.69s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.62s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.62s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (217.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-174833 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1030 18:55:18.710164  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-174833 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m36.605122995s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (217.33s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (80.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-174833 --control-plane -v=7 --alsologtostderr
E1030 18:58:17.243488  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/functional-683899/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-174833 --control-plane -v=7 --alsologtostderr: (1m19.663818531s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-174833 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (80.51s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.84s)

                                                
                                    
TestJSONOutput/start/Command (55.01s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-040703 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E1030 18:59:40.308565  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/functional-683899/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-040703 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (55.012075722s)
--- PASS: TestJSONOutput/start/Command (55.01s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.74s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-040703 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.62s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-040703 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.34s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-040703 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-040703 --output=json --user=testUser: (7.337173564s)
--- PASS: TestJSONOutput/stop/Command (7.34s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-197644 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-197644 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (68.38317ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"cfdf4037-dd7f-4835-aba3-4888bd706e7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-197644] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6d093b10-7206-4a84-9a25-ab68aa26929a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19883"}}
	{"specversion":"1.0","id":"bbe37ead-d7fb-412c-a3aa-e88733f7554d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"dfb23cb8-ebc0-425e-a75b-9311c65afb94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19883-381834/kubeconfig"}}
	{"specversion":"1.0","id":"8844f2fd-d484-4e0d-9bc0-c914bd5416d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19883-381834/.minikube"}}
	{"specversion":"1.0","id":"4caf65b7-bd7d-4845-add1-69af8fa94ae5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"3c7fbb26-d149-430b-952b-8a555721a2a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c8b920b9-4710-46b7-8736-e9b570fee9d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-197644" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-197644
--- PASS: TestErrorJSONOutput (0.20s)

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (88.05s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-211463 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-211463 --driver=kvm2  --container-runtime=crio: (43.10894717s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-222643 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-222643 --driver=kvm2  --container-runtime=crio: (42.270002165s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-211463
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-222643
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-222643" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-222643
helpers_test.go:175: Cleaning up "first-211463" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-211463
--- PASS: TestMinikubeProfile (88.05s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (28s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-200933 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-200933 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.998778672s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.00s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-200933 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-200933 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (27.28s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-222283 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-222283 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.281896225s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.28s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-222283 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-222283 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.7s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-200933 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-222283 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-222283 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                    
TestMountStart/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-222283
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-222283: (1.278449013s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.71s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-222283
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-222283: (22.708495504s)
--- PASS: TestMountStart/serial/RestartStopped (23.71s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-222283 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-222283 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (116.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-743795 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1030 19:03:17.243134  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/functional-683899/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-743795 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m56.451024746s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-743795 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (116.86s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (8.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-743795 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-743795 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-743795 -- rollout status deployment/busybox: (7.14796931s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-743795 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-743795 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-743795 -- exec busybox-7dff88458-f4hq8 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-743795 -- exec busybox-7dff88458-k795m -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-743795 -- exec busybox-7dff88458-f4hq8 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-743795 -- exec busybox-7dff88458-k795m -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-743795 -- exec busybox-7dff88458-f4hq8 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-743795 -- exec busybox-7dff88458-k795m -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (8.71s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-743795 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-743795 -- exec busybox-7dff88458-f4hq8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-743795 -- exec busybox-7dff88458-f4hq8 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-743795 -- exec busybox-7dff88458-k795m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-743795 -- exec busybox-7dff88458-k795m -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.79s)

                                                
                                    
TestMultiNode/serial/AddNode (54.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-743795 -v 3 --alsologtostderr
E1030 19:05:18.709214  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-743795 -v 3 --alsologtostderr: (53.930858256s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-743795 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (54.50s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-743795 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.58s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-743795 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-743795 cp testdata/cp-test.txt multinode-743795:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-743795 ssh -n multinode-743795 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-743795 cp multinode-743795:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1456195063/001/cp-test_multinode-743795.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-743795 ssh -n multinode-743795 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-743795 cp multinode-743795:/home/docker/cp-test.txt multinode-743795-m02:/home/docker/cp-test_multinode-743795_multinode-743795-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-743795 ssh -n multinode-743795 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-743795 ssh -n multinode-743795-m02 "sudo cat /home/docker/cp-test_multinode-743795_multinode-743795-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-743795 cp multinode-743795:/home/docker/cp-test.txt multinode-743795-m03:/home/docker/cp-test_multinode-743795_multinode-743795-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-743795 ssh -n multinode-743795 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-743795 ssh -n multinode-743795-m03 "sudo cat /home/docker/cp-test_multinode-743795_multinode-743795-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-743795 cp testdata/cp-test.txt multinode-743795-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-743795 ssh -n multinode-743795-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-743795 cp multinode-743795-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1456195063/001/cp-test_multinode-743795-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-743795 ssh -n multinode-743795-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-743795 cp multinode-743795-m02:/home/docker/cp-test.txt multinode-743795:/home/docker/cp-test_multinode-743795-m02_multinode-743795.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-743795 ssh -n multinode-743795-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-743795 ssh -n multinode-743795 "sudo cat /home/docker/cp-test_multinode-743795-m02_multinode-743795.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-743795 cp multinode-743795-m02:/home/docker/cp-test.txt multinode-743795-m03:/home/docker/cp-test_multinode-743795-m02_multinode-743795-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-743795 ssh -n multinode-743795-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-743795 ssh -n multinode-743795-m03 "sudo cat /home/docker/cp-test_multinode-743795-m02_multinode-743795-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-743795 cp testdata/cp-test.txt multinode-743795-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-743795 ssh -n multinode-743795-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-743795 cp multinode-743795-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1456195063/001/cp-test_multinode-743795-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-743795 ssh -n multinode-743795-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-743795 cp multinode-743795-m03:/home/docker/cp-test.txt multinode-743795:/home/docker/cp-test_multinode-743795-m03_multinode-743795.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-743795 ssh -n multinode-743795-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-743795 ssh -n multinode-743795 "sudo cat /home/docker/cp-test_multinode-743795-m03_multinode-743795.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-743795 cp multinode-743795-m03:/home/docker/cp-test.txt multinode-743795-m02:/home/docker/cp-test_multinode-743795-m03_multinode-743795-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-743795 ssh -n multinode-743795-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-743795 ssh -n multinode-743795-m02 "sudo cat /home/docker/cp-test_multinode-743795-m03_multinode-743795-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.18s)

                                                
                                    
TestMultiNode/serial/StopNode (2.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-743795 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-743795 node stop m03: (1.452342779s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-743795 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-743795 status: exit status 7 (424.852073ms)

                                                
                                                
-- stdout --
	multinode-743795
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-743795-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-743795-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-743795 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-743795 status --alsologtostderr: exit status 7 (424.993335ms)

                                                
                                                
-- stdout --
	multinode-743795
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-743795-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-743795-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1030 19:06:21.078691  416186 out.go:345] Setting OutFile to fd 1 ...
	I1030 19:06:21.078827  416186 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 19:06:21.078838  416186 out.go:358] Setting ErrFile to fd 2...
	I1030 19:06:21.078844  416186 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 19:06:21.079130  416186 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19883-381834/.minikube/bin
	I1030 19:06:21.079352  416186 out.go:352] Setting JSON to false
	I1030 19:06:21.079386  416186 mustload.go:65] Loading cluster: multinode-743795
	I1030 19:06:21.079517  416186 notify.go:220] Checking for updates...
	I1030 19:06:21.079942  416186 config.go:182] Loaded profile config "multinode-743795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:06:21.079970  416186 status.go:174] checking status of multinode-743795 ...
	I1030 19:06:21.080564  416186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 19:06:21.080610  416186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:06:21.096052  416186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43917
	I1030 19:06:21.096530  416186 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:06:21.097100  416186 main.go:141] libmachine: Using API Version  1
	I1030 19:06:21.097130  416186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:06:21.097480  416186 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:06:21.097656  416186 main.go:141] libmachine: (multinode-743795) Calling .GetState
	I1030 19:06:21.099252  416186 status.go:371] multinode-743795 host status = "Running" (err=<nil>)
	I1030 19:06:21.099269  416186 host.go:66] Checking if "multinode-743795" exists ...
	I1030 19:06:21.099545  416186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 19:06:21.099589  416186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:06:21.115200  416186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36715
	I1030 19:06:21.115569  416186 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:06:21.116020  416186 main.go:141] libmachine: Using API Version  1
	I1030 19:06:21.116043  416186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:06:21.116417  416186 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:06:21.116595  416186 main.go:141] libmachine: (multinode-743795) Calling .GetIP
	I1030 19:06:21.119513  416186 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:06:21.119935  416186 main.go:141] libmachine: (multinode-743795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:70:0e", ip: ""} in network mk-multinode-743795: {Iface:virbr1 ExpiryTime:2024-10-30 20:03:25 +0000 UTC Type:0 Mac:52:54:00:97:70:0e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-743795 Clientid:01:52:54:00:97:70:0e}
	I1030 19:06:21.119993  416186 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined IP address 192.168.39.241 and MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:06:21.120063  416186 host.go:66] Checking if "multinode-743795" exists ...
	I1030 19:06:21.120365  416186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 19:06:21.120408  416186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:06:21.135734  416186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44045
	I1030 19:06:21.136280  416186 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:06:21.136753  416186 main.go:141] libmachine: Using API Version  1
	I1030 19:06:21.136772  416186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:06:21.137134  416186 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:06:21.137326  416186 main.go:141] libmachine: (multinode-743795) Calling .DriverName
	I1030 19:06:21.137520  416186 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1030 19:06:21.137541  416186 main.go:141] libmachine: (multinode-743795) Calling .GetSSHHostname
	I1030 19:06:21.140070  416186 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:06:21.140510  416186 main.go:141] libmachine: (multinode-743795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:70:0e", ip: ""} in network mk-multinode-743795: {Iface:virbr1 ExpiryTime:2024-10-30 20:03:25 +0000 UTC Type:0 Mac:52:54:00:97:70:0e Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-743795 Clientid:01:52:54:00:97:70:0e}
	I1030 19:06:21.140537  416186 main.go:141] libmachine: (multinode-743795) DBG | domain multinode-743795 has defined IP address 192.168.39.241 and MAC address 52:54:00:97:70:0e in network mk-multinode-743795
	I1030 19:06:21.140737  416186 main.go:141] libmachine: (multinode-743795) Calling .GetSSHPort
	I1030 19:06:21.140904  416186 main.go:141] libmachine: (multinode-743795) Calling .GetSSHKeyPath
	I1030 19:06:21.141075  416186 main.go:141] libmachine: (multinode-743795) Calling .GetSSHUsername
	I1030 19:06:21.141214  416186 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/multinode-743795/id_rsa Username:docker}
	I1030 19:06:21.226696  416186 ssh_runner.go:195] Run: systemctl --version
	I1030 19:06:21.233304  416186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 19:06:21.249149  416186 kubeconfig.go:125] found "multinode-743795" server: "https://192.168.39.241:8443"
	I1030 19:06:21.249185  416186 api_server.go:166] Checking apiserver status ...
	I1030 19:06:21.249233  416186 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 19:06:21.262634  416186 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1051/cgroup
	W1030 19:06:21.272465  416186 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1051/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1030 19:06:21.272502  416186 ssh_runner.go:195] Run: ls
	I1030 19:06:21.277026  416186 api_server.go:253] Checking apiserver healthz at https://192.168.39.241:8443/healthz ...
	I1030 19:06:21.281144  416186 api_server.go:279] https://192.168.39.241:8443/healthz returned 200:
	ok
	I1030 19:06:21.281168  416186 status.go:463] multinode-743795 apiserver status = Running (err=<nil>)
	I1030 19:06:21.281178  416186 status.go:176] multinode-743795 status: &{Name:multinode-743795 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1030 19:06:21.281196  416186 status.go:174] checking status of multinode-743795-m02 ...
	I1030 19:06:21.281488  416186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 19:06:21.281528  416186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:06:21.298400  416186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43227
	I1030 19:06:21.298892  416186 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:06:21.299374  416186 main.go:141] libmachine: Using API Version  1
	I1030 19:06:21.299394  416186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:06:21.299743  416186 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:06:21.299960  416186 main.go:141] libmachine: (multinode-743795-m02) Calling .GetState
	I1030 19:06:21.301672  416186 status.go:371] multinode-743795-m02 host status = "Running" (err=<nil>)
	I1030 19:06:21.301689  416186 host.go:66] Checking if "multinode-743795-m02" exists ...
	I1030 19:06:21.301985  416186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 19:06:21.302025  416186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:06:21.318826  416186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34249
	I1030 19:06:21.319399  416186 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:06:21.319917  416186 main.go:141] libmachine: Using API Version  1
	I1030 19:06:21.319941  416186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:06:21.320278  416186 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:06:21.320472  416186 main.go:141] libmachine: (multinode-743795-m02) Calling .GetIP
	I1030 19:06:21.323185  416186 main.go:141] libmachine: (multinode-743795-m02) DBG | domain multinode-743795-m02 has defined MAC address 52:54:00:c0:3b:6c in network mk-multinode-743795
	I1030 19:06:21.323628  416186 main.go:141] libmachine: (multinode-743795-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3b:6c", ip: ""} in network mk-multinode-743795: {Iface:virbr1 ExpiryTime:2024-10-30 20:04:28 +0000 UTC Type:0 Mac:52:54:00:c0:3b:6c Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:multinode-743795-m02 Clientid:01:52:54:00:c0:3b:6c}
	I1030 19:06:21.323663  416186 main.go:141] libmachine: (multinode-743795-m02) DBG | domain multinode-743795-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:c0:3b:6c in network mk-multinode-743795
	I1030 19:06:21.323837  416186 host.go:66] Checking if "multinode-743795-m02" exists ...
	I1030 19:06:21.324222  416186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 19:06:21.324267  416186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:06:21.339736  416186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43957
	I1030 19:06:21.340219  416186 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:06:21.340697  416186 main.go:141] libmachine: Using API Version  1
	I1030 19:06:21.340718  416186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:06:21.341031  416186 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:06:21.341211  416186 main.go:141] libmachine: (multinode-743795-m02) Calling .DriverName
	I1030 19:06:21.341389  416186 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1030 19:06:21.341408  416186 main.go:141] libmachine: (multinode-743795-m02) Calling .GetSSHHostname
	I1030 19:06:21.343935  416186 main.go:141] libmachine: (multinode-743795-m02) DBG | domain multinode-743795-m02 has defined MAC address 52:54:00:c0:3b:6c in network mk-multinode-743795
	I1030 19:06:21.344331  416186 main.go:141] libmachine: (multinode-743795-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3b:6c", ip: ""} in network mk-multinode-743795: {Iface:virbr1 ExpiryTime:2024-10-30 20:04:28 +0000 UTC Type:0 Mac:52:54:00:c0:3b:6c Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:multinode-743795-m02 Clientid:01:52:54:00:c0:3b:6c}
	I1030 19:06:21.344367  416186 main.go:141] libmachine: (multinode-743795-m02) DBG | domain multinode-743795-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:c0:3b:6c in network mk-multinode-743795
	I1030 19:06:21.344488  416186 main.go:141] libmachine: (multinode-743795-m02) Calling .GetSSHPort
	I1030 19:06:21.344641  416186 main.go:141] libmachine: (multinode-743795-m02) Calling .GetSSHKeyPath
	I1030 19:06:21.344758  416186 main.go:141] libmachine: (multinode-743795-m02) Calling .GetSSHUsername
	I1030 19:06:21.344886  416186 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19883-381834/.minikube/machines/multinode-743795-m02/id_rsa Username:docker}
	I1030 19:06:21.421844  416186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 19:06:21.436750  416186 status.go:176] multinode-743795-m02 status: &{Name:multinode-743795-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1030 19:06:21.436800  416186 status.go:174] checking status of multinode-743795-m03 ...
	I1030 19:06:21.437169  416186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 19:06:21.437219  416186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 19:06:21.452811  416186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38249
	I1030 19:06:21.453260  416186 main.go:141] libmachine: () Calling .GetVersion
	I1030 19:06:21.453728  416186 main.go:141] libmachine: Using API Version  1
	I1030 19:06:21.453751  416186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 19:06:21.454078  416186 main.go:141] libmachine: () Calling .GetMachineName
	I1030 19:06:21.454287  416186 main.go:141] libmachine: (multinode-743795-m03) Calling .GetState
	I1030 19:06:21.455948  416186 status.go:371] multinode-743795-m03 host status = "Stopped" (err=<nil>)
	I1030 19:06:21.455965  416186 status.go:384] host is not running, skipping remaining checks
	I1030 19:06:21.455971  416186 status.go:176] multinode-743795-m03 status: &{Name:multinode-743795-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.30s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (39.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-743795 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-743795 node start m03 -v=7 --alsologtostderr: (39.292662718s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-743795 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.91s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-743795 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-743795 node delete m03: (1.618975715s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-743795 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.14s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (202.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-743795 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1030 19:15:18.709181  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:16:20.310042  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/functional-683899/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:18:17.243756  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/functional-683899/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-743795 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m21.938619805s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-743795 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (202.47s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (48.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-743795
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-743795-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-743795-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (66.943573ms)

                                                
                                                
-- stdout --
	* [multinode-743795-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19883
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19883-381834/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19883-381834/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-743795-m02' is duplicated with machine name 'multinode-743795-m02' in profile 'multinode-743795'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-743795-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-743795-m03 --driver=kvm2  --container-runtime=crio: (47.818169845s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-743795
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-743795: exit status 80 (222.563054ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-743795 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-743795-m03 already exists in multinode-743795-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-743795-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (48.94s)
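
Note: the exit-14 failure above comes from minikube's profile-name validation. Below is a minimal, hypothetical Go sketch of that kind of check (it is not minikube source); Profile and validateProfileName are invented names, and the message and exit code simply mirror what the log shows.

package main

import (
	"fmt"
	"os"
)

// Profile is a hypothetical stand-in for a minikube profile and the
// machine names it owns (e.g. "multinode-743795-m02").
type Profile struct {
	Name     string
	Machines []string
}

// validateProfileName rejects a new profile name that collides with an
// existing profile name or with a machine name inside another profile.
func validateProfileName(name string, existing []Profile) error {
	for _, p := range existing {
		if p.Name == name {
			return fmt.Errorf("profile name %q should be unique", name)
		}
		for _, m := range p.Machines {
			if m == name {
				return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q", name, m, p.Name)
			}
		}
	}
	return nil
}

func main() {
	existing := []Profile{{
		Name:     "multinode-743795",
		Machines: []string{"multinode-743795", "multinode-743795-m02"},
	}}
	if err := validateProfileName("multinode-743795-m02", existing); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
		os.Exit(14) // same exit status the test asserts on
	}
}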

                                                
                                    
x
+
TestScheduledStopUnix (113.65s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-850262 --memory=2048 --driver=kvm2  --container-runtime=crio
E1030 19:23:17.245871  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/functional-683899/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-850262 --memory=2048 --driver=kvm2  --container-runtime=crio: (41.976893608s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-850262 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-850262 -n scheduled-stop-850262
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-850262 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1030 19:23:55.359098  389144 retry.go:31] will retry after 70.899µs: open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/scheduled-stop-850262/pid: no such file or directory
I1030 19:23:55.360286  389144 retry.go:31] will retry after 164.18µs: open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/scheduled-stop-850262/pid: no such file or directory
I1030 19:23:55.361453  389144 retry.go:31] will retry after 174.109µs: open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/scheduled-stop-850262/pid: no such file or directory
I1030 19:23:55.362569  389144 retry.go:31] will retry after 468.641µs: open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/scheduled-stop-850262/pid: no such file or directory
I1030 19:23:55.363727  389144 retry.go:31] will retry after 558.33µs: open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/scheduled-stop-850262/pid: no such file or directory
I1030 19:23:55.364889  389144 retry.go:31] will retry after 798.045µs: open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/scheduled-stop-850262/pid: no such file or directory
I1030 19:23:55.366037  389144 retry.go:31] will retry after 575.854µs: open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/scheduled-stop-850262/pid: no such file or directory
I1030 19:23:55.367178  389144 retry.go:31] will retry after 1.692527ms: open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/scheduled-stop-850262/pid: no such file or directory
I1030 19:23:55.368991  389144 retry.go:31] will retry after 2.904718ms: open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/scheduled-stop-850262/pid: no such file or directory
I1030 19:23:55.372221  389144 retry.go:31] will retry after 2.038844ms: open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/scheduled-stop-850262/pid: no such file or directory
I1030 19:23:55.374466  389144 retry.go:31] will retry after 3.007626ms: open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/scheduled-stop-850262/pid: no such file or directory
I1030 19:23:55.377769  389144 retry.go:31] will retry after 11.135241ms: open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/scheduled-stop-850262/pid: no such file or directory
I1030 19:23:55.390014  389144 retry.go:31] will retry after 16.298075ms: open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/scheduled-stop-850262/pid: no such file or directory
I1030 19:23:55.407270  389144 retry.go:31] will retry after 20.784684ms: open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/scheduled-stop-850262/pid: no such file or directory
I1030 19:23:55.428539  389144 retry.go:31] will retry after 15.560212ms: open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/scheduled-stop-850262/pid: no such file or directory
I1030 19:23:55.444742  389144 retry.go:31] will retry after 47.938717ms: open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/scheduled-stop-850262/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-850262 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-850262 -n scheduled-stop-850262
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-850262
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-850262 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1030 19:25:01.787929  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-850262
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-850262: exit status 7 (67.400808ms)

                                                
                                                
-- stdout --
	scheduled-stop-850262
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-850262 -n scheduled-stop-850262
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-850262 -n scheduled-stop-850262: exit status 7 (67.139314ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-850262" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-850262
--- PASS: TestScheduledStopUnix (113.65s)
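
Note: the burst of "will retry after ..." lines above is a poll-with-backoff loop waiting for the scheduled-stop pid file to appear. A minimal sketch of that pattern, assuming a hypothetical path and helper name (this is not the test's actual code):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPIDFile polls for a pid file, roughly doubling the delay between
// attempts, until the file can be read or the timeout expires.
func waitForPIDFile(path string, timeout time.Duration) ([]byte, error) {
	deadline := time.Now().Add(timeout)
	delay := 100 * time.Microsecond
	for {
		data, err := os.ReadFile(path)
		if err == nil {
			return data, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out waiting for %s: %w", path, err)
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2 // roughly exponential backoff, as in the log above
	}
}

func main() {
	if pid, err := waitForPIDFile("/tmp/scheduled-stop.pid", 5*time.Second); err == nil {
		fmt.Printf("scheduled stop pid: %s\n", pid)
	}
}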

                                                
                                    
x
+
TestRunningBinaryUpgrade (165.5s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
E1030 19:30:18.709626  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.624059011 start -p running-upgrade-453471 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.624059011 start -p running-upgrade-453471 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (54.993230021s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-453471 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-453471 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m45.698368559s)
helpers_test.go:175: Cleaning up "running-upgrade-453471" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-453471
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-453471: (1.557963184s)
--- PASS: TestRunningBinaryUpgrade (165.50s)

                                                
                                    
x
+
TestPause/serial/Start (107.43s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-651891 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-651891 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m47.428354533s)
--- PASS: TestPause/serial/Start (107.43s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-820435 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-820435 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (92.375766ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-820435] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19883
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19883-381834/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19883-381834/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
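
Note: exit status 14 (MK_USAGE) here is a flag mutual-exclusion error. A hedged sketch of the same rule using only the standard flag package; minikube's real CLI parsing is different, and only the flag names and the message are taken from the log.

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start the guest without Kubernetes")
	k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to deploy")
	flag.Parse()

	// --no-kubernetes makes an explicit version meaningless, so reject the combination.
	if *noK8s && *k8sVersion != "" {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // the exit code reported by the test
	}
	fmt.Println("flags are consistent")
}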

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (119.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-820435 --driver=kvm2  --container-runtime=crio
E1030 19:25:18.709709  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-820435 --driver=kvm2  --container-runtime=crio: (1m58.962152287s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-820435 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (119.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-534248 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-534248 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (120.421171ms)

                                                
                                                
-- stdout --
	* [false-534248] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19883
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19883-381834/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19883-381834/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1030 19:26:34.982893  425470 out.go:345] Setting OutFile to fd 1 ...
	I1030 19:26:34.983204  425470 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 19:26:34.983225  425470 out.go:358] Setting ErrFile to fd 2...
	I1030 19:26:34.983237  425470 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 19:26:34.983757  425470 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19883-381834/.minikube/bin
	I1030 19:26:34.984470  425470 out.go:352] Setting JSON to false
	I1030 19:26:34.985805  425470 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":11338,"bootTime":1730305057,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1030 19:26:34.985956  425470 start.go:139] virtualization: kvm guest
	I1030 19:26:34.988866  425470 out.go:177] * [false-534248] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1030 19:26:34.990468  425470 notify.go:220] Checking for updates...
	I1030 19:26:34.990471  425470 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 19:26:34.991859  425470 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 19:26:34.993256  425470 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19883-381834/kubeconfig
	I1030 19:26:34.994733  425470 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19883-381834/.minikube
	I1030 19:26:34.996176  425470 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1030 19:26:34.997733  425470 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 19:26:34.999764  425470 config.go:182] Loaded profile config "NoKubernetes-820435": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:26:34.999928  425470 config.go:182] Loaded profile config "force-systemd-env-736675": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:26:35.000061  425470 config.go:182] Loaded profile config "pause-651891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1030 19:26:35.000189  425470 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 19:26:35.036943  425470 out.go:177] * Using the kvm2 driver based on user configuration
	I1030 19:26:35.038125  425470 start.go:297] selected driver: kvm2
	I1030 19:26:35.038140  425470 start.go:901] validating driver "kvm2" against <nil>
	I1030 19:26:35.038156  425470 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 19:26:35.040473  425470 out.go:201] 
	W1030 19:26:35.041628  425470 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1030 19:26:35.042851  425470 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-534248 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-534248

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-534248

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-534248

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-534248

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-534248

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-534248

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-534248

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-534248

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-534248

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-534248

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534248"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534248"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534248"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-534248

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534248"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534248"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-534248" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-534248" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-534248" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-534248" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-534248" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-534248" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-534248" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-534248" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534248"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534248"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534248"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534248"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534248"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-534248" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-534248" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-534248" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534248"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534248"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534248"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534248"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534248"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 30 Oct 2024 19:26:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.50.68:8443
  name: pause-651891
contexts:
- context:
    cluster: pause-651891
    extensions:
    - extension:
        last-update: Wed, 30 Oct 2024 19:26:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-651891
  name: pause-651891
current-context: pause-651891
kind: Config
preferences: {}
users:
- name: pause-651891
  user:
    client-certificate: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/pause-651891/client.crt
    client-key: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/pause-651891/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-534248

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534248"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534248"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534248"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534248"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534248"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534248"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534248"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534248"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534248"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534248"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534248"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534248"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534248"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534248"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534248"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534248"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534248"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-534248"

                                                
                                                
----------------------- debugLogs end: false-534248 [took: 3.002002107s] --------------------------------
helpers_test.go:175: Cleaning up "false-534248" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-534248
--- PASS: TestNetworkPlugins/group/false (3.28s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (3.63s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.63s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (64.76s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-651891 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-651891 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m4.7258908s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (64.76s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (177.97s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1880072912 start -p stopped-upgrade-531202 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1880072912 start -p stopped-upgrade-531202 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m28.021554215s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1880072912 -p stopped-upgrade-531202 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1880072912 -p stopped-upgrade-531202 stop: (11.471856884s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-531202 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-531202 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m18.480034518s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (177.97s)
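
Note: the upgrade scenario above has three steps: start a cluster with an old release binary, stop it, then start the same profile with the binary under test. A rough sketch replaying those steps with os/exec, assuming the binary paths and profile name shown in the log (run() is a hypothetical helper, not the test's own):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// run executes one CLI step and streams its output, mirroring the
// "(dbg) Run:" lines in the report.
func run(bin string, args ...string) error {
	cmd := exec.Command(bin, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	fmt.Println("(dbg) Run:", bin, strings.Join(args, " "))
	return cmd.Run()
}

func main() {
	old := "/tmp/minikube-v1.26.0.1880072912"
	cur := "out/minikube-linux-amd64"
	profile := "stopped-upgrade-531202"

	steps := [][]string{
		{old, "start", "-p", profile, "--memory=2200", "--vm-driver=kvm2", "--container-runtime=crio"},
		{old, "-p", profile, "stop"},
		{cur, "start", "-p", profile, "--memory=2200", "--alsologtostderr", "-v=1", "--driver=kvm2", "--container-runtime=crio"},
	}
	for _, s := range steps {
		if err := run(s[0], s[1:]...); err != nil {
			fmt.Fprintln(os.Stderr, "upgrade step failed:", err)
			os.Exit(1)
		}
	}
}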

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (28.41s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-820435 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-820435 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.070844504s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-820435 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-820435 status -o json: exit status 2 (264.004454ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-820435","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-820435
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-820435: (1.070222189s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (28.41s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (52.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-820435 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-820435 --no-kubernetes --driver=kvm2  --container-runtime=crio: (52.420803334s)
--- PASS: TestNoKubernetes/serial/Start (52.42s)

                                                
                                    
x
+
TestPause/serial/Pause (0.79s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-651891 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.79s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.25s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-651891 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-651891 --output=json --layout=cluster: exit status 2 (248.201996ms)

                                                
                                                
-- stdout --
	{"Name":"pause-651891","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-651891","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.25s)
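
Note: the JSON above encodes component state with HTTP-like codes (200 OK, 405 Stopped, 418 Paused). A small sketch that unmarshals that shape; the struct names are hypothetical, while the field names come from the payload shown.

package main

import (
	"encoding/json"
	"fmt"
)

type Component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type Node struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]Component `json:"Components"`
}

type ClusterStatus struct {
	Name          string               `json:"Name"`
	StatusCode    int                  `json:"StatusCode"`
	StatusName    string               `json:"StatusName"`
	BinaryVersion string               `json:"BinaryVersion"`
	Components    map[string]Component `json:"Components"`
	Nodes         []Node               `json:"Nodes"`
}

func main() {
	// Trimmed version of the status payload shown in the log above.
	raw := []byte(`{"Name":"pause-651891","StatusCode":418,"StatusName":"Paused",
	  "Nodes":[{"Name":"pause-651891","StatusCode":200,"StatusName":"OK",
	  "Components":{"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`)
	var st ClusterStatus
	if err := json.Unmarshal(raw, &st); err != nil {
		panic(err)
	}
	fmt.Printf("%s is %s; kubelet is %s\n", st.Name, st.StatusName, st.Nodes[0].Components["kubelet"].StatusName)
}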

                                                
                                    
x
+
TestPause/serial/Unpause (0.67s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-651891 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.67s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.83s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-651891 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.83s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.16s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-651891 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-651891 --alsologtostderr -v=5: (1.155693177s)
--- PASS: TestPause/serial/DeletePaused (1.16s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.69s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.69s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-820435 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-820435 "sudo systemctl is-active --quiet service kubelet": exit status 1 (215.23995ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)
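
Note: the check above runs "systemctl is-active --quiet service kubelet" over ssh and treats a non-zero exit (status 3 = inactive) as "Kubernetes is not running". A local sketch of the same exit-code interpretation with os/exec (a hypothetical helper, not the test's):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// kubeletActive reports whether the kubelet unit is active, based purely on
// the systemctl exit status, as the test does.
func kubeletActive() (bool, error) {
	cmd := exec.Command("systemctl", "is-active", "--quiet", "service", "kubelet")
	err := cmd.Run()
	if err == nil {
		return true, nil // exit 0: unit is active
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return false, nil // non-zero exit (e.g. 3): unit inactive or not present
	}
	return false, err // systemctl itself could not be run
}

func main() {
	active, err := kubeletActive()
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("kubelet active:", active)
}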

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (6.04s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (2.487389103s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (3.548319052s)
--- PASS: TestNoKubernetes/serial/ProfileList (6.04s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-820435
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-820435: (1.326303392s)
--- PASS: TestNoKubernetes/serial/Stop (1.33s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (41.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-820435 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-820435 --driver=kvm2  --container-runtime=crio: (41.096506643s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (41.10s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-820435 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-820435 "sudo systemctl is-active --quiet service kubelet": exit status 1 (211.616322ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.9s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-531202
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.90s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (73.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-534248 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-534248 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m13.488984166s)
--- PASS: TestNetworkPlugins/group/auto/Start (73.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (85.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-534248 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-534248 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m25.923470009s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (85.92s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-534248 "pgrep -a kubelet"
I1030 19:31:55.398504  389144 config.go:182] Loaded profile config "auto-534248": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (16.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-534248 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context auto-534248 replace --force -f testdata/netcat-deployment.yaml: (2.203148454s)
I1030 19:31:57.615677  389144 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1030 19:31:58.471125  389144 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-qps74" [c8f9a478-6c1d-4fb0-a247-302584c939e6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-qps74" [c8f9a478-6c1d-4fb0-a247-302584c939e6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.004463339s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (16.12s)
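
Note: the two "Waiting for deployment netcat to stabilize" lines above track generation vs. observed generation and spec vs. status replicas. A toy sketch of that stabilization predicate; the struct is hypothetical (not a client-go type), and the field names mirror the log.

package main

import "fmt"

type deploymentState struct {
	Generation         int64
	ObservedGeneration int64
	SpecReplicas       int32
	StatusReplicas     int32
}

// stabilized is true once the controller has observed the latest spec and
// reports the requested number of replicas in status.
func stabilized(d deploymentState) bool {
	return d.ObservedGeneration >= d.Generation && d.StatusReplicas == d.SpecReplicas
}

func main() {
	steps := []deploymentState{
		{Generation: 1, ObservedGeneration: 0, SpecReplicas: 1, StatusReplicas: 0}, // just created
		{Generation: 1, ObservedGeneration: 1, SpecReplicas: 1, StatusReplicas: 0}, // observed, pod still Pending
		{Generation: 1, ObservedGeneration: 1, SpecReplicas: 1, StatusReplicas: 1}, // pod Running
	}
	for _, s := range steps {
		fmt.Printf("%+v stabilized=%v\n", s, stabilized(s))
	}
}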

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-534248 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-534248 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-534248 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (85.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-534248 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-534248 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m25.43617736s)
--- PASS: TestNetworkPlugins/group/calico/Start (85.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-rbwnp" [63a159ba-b132-437d-bb95-5ce523dabbb6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003629666s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-534248 "pgrep -a kubelet"
I1030 19:32:39.942664  389144 config.go:182] Loaded profile config "kindnet-534248": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-534248 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-7grj6" [604ee4a2-1dd4-4f51-a30d-925512a53d42] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-7grj6" [604ee4a2-1dd4-4f51-a30d-925512a53d42] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004661413s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-534248 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-534248 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-534248 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (103.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-534248 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-534248 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m43.317340183s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (103.32s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (132.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-534248 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E1030 19:33:17.243806  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/functional-683899/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-534248 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (2m12.449331167s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (132.45s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-fm2zr" [6e93769d-0c1b-4d0c-9680-df2893134aa1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.114075608s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.12s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-534248 "pgrep -a kubelet"
I1030 19:33:58.948161  389144 config.go:182] Loaded profile config "calico-534248": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-534248 replace --force -f testdata/netcat-deployment.yaml
I1030 19:33:59.725035  389144 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1030 19:33:59.733028  389144 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-m42vl" [0f0e0f96-f953-4b54-b8d7-eb7bb81f04a9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-m42vl" [0f0e0f96-f953-4b54-b8d7-eb7bb81f04a9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.005383648s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.83s)
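The kapi lines above show the harness waiting for the freshly replaced netcat deployment to stabilize (observed generation and ready replicas catching up) before polling the pod itself. A rough manual equivalent using kubectl's own rollout machinery, assuming the calico-534248 context and the same testdata manifest:

    # Re-create the netcat test deployment and block until its rollout completes.
    kubectl --context calico-534248 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context calico-534248 rollout status deployment/netcat --timeout=15m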

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-534248 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-534248 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-534248 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (74.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-534248 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-534248 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m14.55091876s)
--- PASS: TestNetworkPlugins/group/flannel/Start (74.55s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-534248 "pgrep -a kubelet"
I1030 19:34:45.548590  389144 config.go:182] Loaded profile config "custom-flannel-534248": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-534248 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-2km82" [a775ffd9-28ea-47a6-8ca3-1d8d95e2713c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-2km82" [a775ffd9-28ea-47a6-8ca3-1d8d95e2713c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004033465s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-534248 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-534248 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-534248 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (56.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-534248 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-534248 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (56.009347638s)
--- PASS: TestNetworkPlugins/group/bridge/Start (56.01s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-534248 "pgrep -a kubelet"
I1030 19:35:21.639948  389144 config.go:182] Loaded profile config "enable-default-cni-534248": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-534248 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5nddt" [2c1453fd-5729-492a-91c2-8168da8d6e9c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-5nddt" [2c1453fd-5729-492a-91c2-8168da8d6e9c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003838938s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-534248 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-534248 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-534248 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-6xjkd" [fde6f345-e38f-4ce8-9e63-d8ef43c65ee8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005626328s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (114.81s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-960512 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-960512 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (1m54.810817031s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (114.81s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-534248 "pgrep -a kubelet"
I1030 19:35:50.234028  389144 config.go:182] Loaded profile config "flannel-534248": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-534248 replace --force -f testdata/netcat-deployment.yaml
I1030 19:35:50.498883  389144 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-llwwl" [0e5c8bfe-5209-4a09-9b0c-b710cc77b8bc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-llwwl" [0e5c8bfe-5209-4a09-9b0c-b710cc77b8bc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004272987s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.30s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-534248 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-534248 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-534248 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-534248 "pgrep -a kubelet"
I1030 19:36:10.418634  389144 config.go:182] Loaded profile config "bridge-534248": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (12.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-534248 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context bridge-534248 replace --force -f testdata/netcat-deployment.yaml: (1.008024732s)
I1030 19:36:11.442685  389144 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8sr7m" [16bb9c7e-29c1-4044-87b8-382093018f95] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8sr7m" [16bb9c7e-29c1-4044-87b8-382093018f95] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004286816s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.05s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (97.74s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-042402 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-042402 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (1m37.739944161s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (97.74s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (16.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-534248 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-534248 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.148025869s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1030 19:36:37.618225  389144 retry.go:31] will retry after 1.295390078s: exit status 1
net_test.go:175: (dbg) Run:  kubectl --context bridge-534248 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (16.62s)
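The DNS check execs nslookup for kubernetes.default inside the netcat pod; on the bridge CNI the first attempt above timed out and retry.go retried about 1.3s later, after which the lookup succeeded. A minimal sketch of the same probe with one retry, assuming the bridge-534248 context is still available:

    # Resolve the in-cluster API service name from the netcat pod,
    # retrying once if CoreDNS is not reachable on the first attempt.
    kubectl --context bridge-534248 exec deployment/netcat -- nslookup kubernetes.default || {
      sleep 2
      kubectl --context bridge-534248 exec deployment/netcat -- nslookup kubernetes.default
    }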

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-534248 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-534248 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (57.34s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-768989 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1030 19:36:57.604210  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/auto-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:36:57.610655  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/auto-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:36:57.622038  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/auto-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:36:57.643878  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/auto-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:36:57.685381  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/auto-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:36:57.767543  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/auto-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:36:57.929610  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/auto-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:36:58.251796  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/auto-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:36:58.893521  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/auto-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:37:00.175227  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/auto-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:37:02.737052  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/auto-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:37:07.859212  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/auto-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:37:18.101054  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/auto-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:37:33.737308  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kindnet-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:37:33.743788  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kindnet-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:37:33.755217  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kindnet-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:37:33.776574  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kindnet-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:37:33.818010  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kindnet-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:37:33.899485  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kindnet-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:37:34.061158  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kindnet-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:37:34.382410  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kindnet-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:37:35.024488  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kindnet-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:37:36.306621  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kindnet-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:37:38.582927  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/auto-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:37:38.868004  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kindnet-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:37:43.990078  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kindnet-534248/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-768989 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (57.338591088s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (57.34s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (13.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-960512 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b4c64bf2-4452-4ab5-b98b-1dd7d09f7593] Pending
helpers_test.go:344: "busybox" [b4c64bf2-4452-4ab5-b98b-1dd7d09f7593] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b4c64bf2-4452-4ab5-b98b-1dd7d09f7593] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 13.00448405s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-960512 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (13.31s)
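DeployApp creates the busybox pod from testdata/busybox.yaml, waits for it to report Running, then runs a trivial command in it as a smoke test. A hedged manual equivalent against the no-preload-960512 profile, using kubectl wait instead of the harness's own poller:

    # Deploy the busybox test pod, wait until it is Ready, then check the fd limit.
    kubectl --context no-preload-960512 create -f testdata/busybox.yaml
    kubectl --context no-preload-960512 wait --for=condition=ready pod \
      -l integration-test=busybox --timeout=8m
    kubectl --context no-preload-960512 exec busybox -- /bin/sh -c "ulimit -n"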

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (13.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-768989 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [82360fc1-575a-4dc5-86b6-54892c216d65] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1030 19:37:54.232035  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/kindnet-534248/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [82360fc1-575a-4dc5-86b6-54892c216d65] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 13.004662515s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-768989 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (13.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (13.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-042402 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3f3425ca-e2ca-442a-942f-e3a08c0277a2] Pending
helpers_test.go:344: "busybox" [3f3425ca-e2ca-442a-942f-e3a08c0277a2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3f3425ca-e2ca-442a-942f-e3a08c0277a2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 13.004810951s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-042402 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (13.29s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-960512 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-960512 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.96s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-768989 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-768989 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.96s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-042402 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-042402 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (652.94s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-960512 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-960512 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (10m52.675045542s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-960512 -n no-preload-960512
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (652.94s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (568.84s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-768989 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-768989 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (9m28.579000845s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-768989 -n default-k8s-diff-port-768989
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (568.84s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (621.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-042402 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1030 19:40:43.958685  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/flannel-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:40:43.965067  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/flannel-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:40:43.976443  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/flannel-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:40:43.997803  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/flannel-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:40:44.039226  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/flannel-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:40:44.120750  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/flannel-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:40:44.282608  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/flannel-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:40:44.604411  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/flannel-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:40:45.246261  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/flannel-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:40:46.528159  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/flannel-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:40:49.089547  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/flannel-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:40:54.211178  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/flannel-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:41:02.831842  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/enable-default-cni-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:41:04.452887  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/flannel-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:41:07.711684  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/custom-flannel-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:41:11.430836  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/bridge-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:41:11.437221  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/bridge-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:41:11.448523  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/bridge-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:41:11.469864  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/bridge-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:41:11.511231  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/bridge-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:41:11.592696  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/bridge-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:41:11.754216  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/bridge-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:41:12.076100  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/bridge-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:41:12.718195  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/bridge-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:41:14.000467  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/bridge-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:41:16.562147  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/bridge-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:41:21.684404  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/bridge-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:41:24.934209  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/flannel-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:41:31.926579  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/bridge-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:41:36.374816  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/calico-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:41:41.790044  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/addons-819803/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:41:43.794236  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/enable-default-cni-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:41:52.408397  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/bridge-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:41:57.604520  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/auto-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 19:42:05.896054  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/flannel-534248/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-042402 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (10m20.91873946s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-042402 -n embed-certs-042402
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (621.17s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (1.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-516975 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-516975 --alsologtostderr -v=3: (1.372827555s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.37s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-516975 -n old-k8s-version-516975
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-516975 -n old-k8s-version-516975: exit status 7 (65.559038ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-516975 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
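In this run "minikube status" exits with code 7 and prints Stopped for the halted profile, which the test treats as acceptable before enabling the dashboard addon against the stopped cluster. A minimal sketch of the same sequence, assuming the old-k8s-version-516975 profile still exists locally:

    # Exit status 7 from "status" here just means the host is stopped, not an error.
    out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-516975 \
      || echo "status exited with $? (7 expected while stopped)"
    out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-516975 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4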

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (47.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-467894 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1030 20:05:21.856432  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/enable-default-cni-534248/client.crt: no such file or directory" logger="UnhandledError"
E1030 20:05:43.959609  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/flannel-534248/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-467894 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (47.069775688s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.07s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.03s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-467894 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-467894 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.02618526s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.03s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (10.45s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-467894 --alsologtostderr -v=3
E1030 20:06:11.429184  389144 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/bridge-534248/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-467894 --alsologtostderr -v=3: (10.447410449s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.45s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-467894 -n newest-cni-467894
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-467894 -n newest-cni-467894: exit status 7 (76.784081ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-467894 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)
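For anyone replaying this step by hand, the check above amounts to the two commands below, copied from this run: query the host state (exit status 7 only means the host is stopped, which the test treats as "may be ok"), then enable the dashboard addon against the stopped profile. The profile name and image override are taken from this run, not assumptions.
	out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-467894 -n newest-cni-467894   # prints "Stopped", exits 7
	out/minikube-linux-amd64 addons enable dashboard -p newest-cni-467894 --images=MetricsScraper=registry.k8s.io/echoserver:1.4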

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (36.97s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-467894 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-467894 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (36.581308802s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-467894 -n newest-cni-467894
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.97s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-467894 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (4.5s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-467894 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-467894 --alsologtostderr -v=1: (1.792478286s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-467894 -n newest-cni-467894
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-467894 -n newest-cni-467894: exit status 2 (377.119796ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-467894 -n newest-cni-467894
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-467894 -n newest-cni-467894: exit status 2 (409.919198ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-467894 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-467894 -n newest-cni-467894
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-467894 -n newest-cni-467894
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.50s)
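To reproduce the pause verification outside the test harness, the sequence above reduces to the sketch below; the profile name and flags are taken from this run, and the test accepts exit status 2 from the status commands while the cluster is paused ("may be ok").
	out/minikube-linux-amd64 pause -p newest-cni-467894 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-467894 -n newest-cni-467894   # "Paused", exits 2 while paused
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-467894 -n newest-cni-467894     # "Stopped", exits 2 while paused
	out/minikube-linux-amd64 unpause -p newest-cni-467894 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-467894 -n newest-cni-467894   # should exit 0 once unpaused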

                                                
                                    

Test skip (39/320)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.2/cached-images 0
15 TestDownloadOnly/v1.31.2/binaries 0
16 TestDownloadOnly/v1.31.2/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.31
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
119 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
120 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
121 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
122 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
123 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
124 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
125 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
126 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestGvisorAddon 0
178 TestImageBuild 0
205 TestKicCustomNetwork 0
206 TestKicExistingNetwork 0
207 TestKicCustomSubnet 0
208 TestKicStaticIP 0
240 TestChangeNoneUser 0
243 TestScheduledStopWindows 0
245 TestSkaffold 0
247 TestInsufficientStorage 0
251 TestMissingContainerUpgrade 0
259 TestNetworkPlugins/group/kubenet 3.41
267 TestNetworkPlugins/group/cilium 3.68
290 TestStartStop/group/disable-driver-mounts 0.17
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.31s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-819803 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.31s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-534248 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-534248

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-534248

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-534248

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-534248

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-534248

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-534248

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-534248

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-534248

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-534248

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-534248

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534248"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534248"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534248"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-534248

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534248"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534248"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-534248" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-534248" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-534248" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-534248" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-534248" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-534248" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-534248" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-534248" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534248"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534248"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534248"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534248"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534248"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-534248" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-534248" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-534248" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534248"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534248"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534248"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534248"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534248"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 30 Oct 2024 19:26:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.50.68:8443
  name: pause-651891
contexts:
- context:
    cluster: pause-651891
    extensions:
    - extension:
        last-update: Wed, 30 Oct 2024 19:26:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-651891
  name: pause-651891
current-context: pause-651891
kind: Config
preferences: {}
users:
- name: pause-651891
  user:
    client-certificate: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/pause-651891/client.crt
    client-key: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/pause-651891/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-534248

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534248"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534248"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534248"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534248"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534248"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534248"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534248"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534248"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534248"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534248"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534248"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534248"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534248"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534248"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534248"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534248"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534248"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-534248"

                                                
                                                
----------------------- debugLogs end: kubenet-534248 [took: 3.247576023s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-534248" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-534248
--- SKIP: TestNetworkPlugins/group/kubenet (3.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-534248 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-534248

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-534248

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-534248

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-534248

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-534248

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-534248

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-534248

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-534248

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-534248

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-534248

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534248"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534248"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534248"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-534248

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534248"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534248"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-534248" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-534248" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-534248" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-534248" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-534248" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-534248" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-534248" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-534248" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534248"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534248"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534248"

                                                

>>> host: iptables-save:
* Profile "cilium-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534248"

>>> host: iptables table nat:
* Profile "cilium-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534248"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-534248

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-534248

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-534248" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-534248" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-534248

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-534248

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-534248" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-534248" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-534248" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-534248" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-534248" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534248"

>>> host: kubelet daemon config:
* Profile "cilium-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534248"

>>> k8s: kubelet logs:
* Profile "cilium-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534248"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534248"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534248"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 30 Oct 2024 19:26:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.61.27:8443
  name: force-systemd-env-736675
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19883-381834/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 30 Oct 2024 19:26:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.50.68:8443
  name: pause-651891
contexts:
- context:
    cluster: force-systemd-env-736675
    extensions:
    - extension:
        last-update: Wed, 30 Oct 2024 19:26:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: force-systemd-env-736675
  name: force-systemd-env-736675
- context:
    cluster: pause-651891
    extensions:
    - extension:
        last-update: Wed, 30 Oct 2024 19:26:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-651891
  name: pause-651891
current-context: force-systemd-env-736675
kind: Config
preferences: {}
users:
- name: force-systemd-env-736675
  user:
    client-certificate: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/force-systemd-env-736675/client.crt
    client-key: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/force-systemd-env-736675/client.key
- name: pause-651891
  user:
    client-certificate: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/pause-651891/client.crt
    client-key: /home/jenkins/minikube-integration/19883-381834/.minikube/profiles/pause-651891/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-534248

>>> host: docker daemon status:
* Profile "cilium-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534248"

>>> host: docker daemon config:
* Profile "cilium-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534248"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534248"

>>> host: docker system info:
* Profile "cilium-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534248"

>>> host: cri-docker daemon status:
* Profile "cilium-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534248"

>>> host: cri-docker daemon config:
* Profile "cilium-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534248"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534248"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534248"

>>> host: cri-dockerd version:
* Profile "cilium-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534248"

>>> host: containerd daemon status:
* Profile "cilium-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534248"

>>> host: containerd daemon config:
* Profile "cilium-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534248"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534248"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534248"

>>> host: containerd config dump:
* Profile "cilium-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534248"

>>> host: crio daemon status:
* Profile "cilium-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534248"

>>> host: crio daemon config:
* Profile "cilium-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534248"

>>> host: /etc/crio:
* Profile "cilium-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534248"

>>> host: crio config:
* Profile "cilium-534248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-534248"

----------------------- debugLogs end: cilium-534248 [took: 3.525065853s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-534248" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-534248
--- SKIP: TestNetworkPlugins/group/cilium (3.68s)

x
+
TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-113740" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-113740
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)
